| datasetId | card |
|---|---|
OGB/ogbg-molhiv | ---
license: mit
task_categories:
- graph-ml
---
# Dataset Card for ogbg-molhiv
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ogb.stanford.edu/docs/graphprop/#ogbg-mol
- **Repository:** https://github.com/snap-stanford/ogb
- **Paper:** Open Graph Benchmark: Datasets for Machine Learning on Graphs (see citation)
- **Leaderboard:** [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molhiv) and [Papers with Code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molhiv)
### Dataset Summary
The `ogbg-molhiv` dataset is a small molecular property prediction dataset, adapted from MoleculeNet by teams at Stanford to be part of the Open Graph Benchmark.
### Supported Tasks and Leaderboards
`ogbg-molhiv` should be used for molecular property prediction (aiming to predict whether molecules inhibit HIV or not), a binary classification task. The score used is ROC-AUC.
The associated leaderboards are here: [OGB leaderboard](https://ogb.stanford.edu/docs/leader_graphprop/#ogbg-molhiv) and [Papers with code leaderboard](https://paperswithcode.com/sota/graph-property-prediction-on-ogbg-molhiv).
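As a rough illustration of the metric, ROC-AUC for a binary task equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties counting one half). Below is a minimal pure-Python sketch of that rank-based formulation, not the official OGB evaluator:

```python
def roc_auc(y_true, y_score):
    """ROC-AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive is scored higher
    (ties count as 0.5)."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking scores 1.0, a random one about 0.5.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```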
## External Use
### PyGeometric
To load this dataset in PyTorch Geometric, you can do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

ogbg_molhiv = load_dataset("graphs-datasets/ogbg-molhiv")
# For the train set (replace "train" by "valid" or "test" as needed)
ogbg_molhiv_pg_list = [
    Data(
        x=torch.tensor(g["node_feat"]),
        edge_index=torch.tensor(g["edge_index"]),
        edge_attr=torch.tensor(g["edge_attr"]),
        y=torch.tensor(g["y"]),
    )
    for g in ogbg_molhiv["train"]
]
ogbg_molhiv_pg = DataLoader(ogbg_molhiv_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | small |
| #graphs | 41,127 |
| average #nodes | 25.5 |
| average #edges | 27.5 |
| average node degree | 2.2 |
| average cluster coefficient | 0.002 |
| MaxSCC ratio | 0.993 |
| graph diameter | 12.0 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the target label(s) to predict (here a single binary label, 0 or 1)
- `num_nodes` (int): number of nodes of the graph
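A minimal sketch of how these fields fit together, using a hypothetical toy graph: `edge_index` stores edges as two aligned lists (sources and targets), which can be unpacked into an adjacency list:

```python
# Toy graph in the row format described above (hypothetical values).
graph = {
    "edge_index": [[0, 1, 1, 2], [1, 0, 2, 1]],  # 2 x #edges: sources, targets
    "num_nodes": 3,
    "y": [0],
}

# Build an adjacency list by pairing up the two rows of edge_index.
adj = {node: [] for node in range(graph["num_nodes"])}
for src, dst in zip(*graph["edge_index"]):
    adj[src].append(dst)

print(adj)  # {0: [1], 1: [0, 2], 2: [1]}
```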
### Data Splits
This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.
These splits can be recovered with:
```python
from ogb.graphproppred import PygGraphPropPredDataset
dataset = PygGraphPropPredDataset(name = 'ogbg-molhiv')
split_idx = dataset.get_idx_split()
train = dataset[split_idx['train']]  # use 'valid' or 'test' for the other splits
```
## Additional Information
### Licensing Information
The dataset has been released under the MIT license.
### Citation Information
```
@inproceedings{hu-etal-2020-open,
author = {Weihua Hu and
Matthias Fey and
Marinka Zitnik and
Yuxiao Dong and
Hongyu Ren and
Bowen Liu and
Michele Catasta and
Jure Leskovec},
editor = {Hugo Larochelle and
Marc Aurelio Ranzato and
Raia Hadsell and
Maria{-}Florina Balcan and
Hsuan{-}Tien Lin},
title = {Open Graph Benchmark: Datasets for Machine Learning on Graphs},
booktitle = {Advances in Neural Information Processing Systems 33: Annual Conference
on Neural Information Processing Systems 2020, NeurIPS 2020, December
6-12, 2020, virtual},
year = {2020},
url = {https://proceedings.neurips.cc/paper/2020/hash/fb60d411a5c5b72b2e7d3527cfc84fd0-Abstract.html},
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
vwxyzjn/openhermes-dev__kaist-ai_prometheus-13b-v1.0__1707404986 | ---
dataset_info:
features:
- name: model
dtype: 'null'
- name: category
dtype: string
- name: language
dtype: string
- name: custom_instruction
dtype: bool
- name: id
dtype: string
- name: topic
dtype: string
- name: avatarUrl
dtype: 'null'
- name: idx
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: 'null'
- name: system_prompt
dtype: string
- name: source
dtype: string
- name: model_name
dtype: string
- name: skip_prompt_formatting
dtype: bool
- name: title
dtype: string
- name: hash
dtype: 'null'
- name: views
dtype: 'null'
- name: prompt
dtype: string
- name: token_length
dtype: int64
- name: candidate0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: candidate1
list:
- name: content
dtype: string
- name: role
dtype: string
- name: candidate0_policy
dtype: string
- name: candidate1_policy
dtype: string
- name: rejected
dtype: string
- name: rejected_policy
dtype: string
splits:
- name: train_prefs
num_bytes: 1454598
num_examples: 167
download_size: 859857
dataset_size: 1454598
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
|
olimakusuo/jose2 | ---
license: openrail
---
|
bigbio/pharmaconer |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: PharmaCoNER
homepage: https://temu.bsc.es/pharmaconer/index.php/datasets/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- TEXT_CLASSIFICATION
---
# Dataset Card for PharmaCoNER
## Dataset Description
- **Homepage:** https://temu.bsc.es/pharmaconer/index.php/datasets/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER, TXTCLASS
### Subtrack 1
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to encargo-pln-life@bsc.es
SUBTRACK 1: NER offset and entity type classification
The first subtrack is the classical entity-based or instance-based evaluation, which requires that system outputs exactly match the beginning and end locations of each entity tag, as well as the entity annotation type of the gold standard annotations.
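Under this strict criterion, entity-level scores can be sketched as follows (a minimal illustration on made-up offset spans, not the official evaluation script):

```python
def strict_ner_scores(gold, pred):
    """Entity-level P/R/F1 where a prediction counts only if its
    (start, end, type) triple exactly matches a gold annotation."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {(0, 9, "NORMALIZABLES"), (15, 22, "PROTEINAS")}
pred = {(0, 9, "NORMALIZABLES"), (15, 21, "PROTEINAS")}  # second span is off by one
print(strict_ner_scores(gold, pred))  # (0.5, 0.5, 0.5)
```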
### Subtrack 2
SUBTRACK 2: CONCEPT INDEXING
In the second subtrack, a list of unique SNOMED concept identifiers has to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.
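The document-level comparison can be sketched as micro-averaged precision and recall over per-document sets of concept ids (a minimal illustration with hypothetical SNOMED CT codes; the official evaluation scripts define the exact protocol):

```python
def concept_indexing_scores(gold_docs, pred_docs):
    """Micro-averaged precision/recall over per-document sets of concept ids."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_docs, pred_docs):
        gold, pred = set(gold), set(pred)
        tp += len(gold & pred)   # predicted ids that are annotated
        fp += len(pred - gold)   # predicted ids not in the gold standard
        fn += len(gold - pred)   # annotated ids the system missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical SNOMED CT ids for two documents.
gold = [{"387517004", "373294004"}, {"372687004"}]
pred = [{"387517004"}, {"372687004", "387458008"}]
print(concept_indexing_scores(gold, pred))  # precision = recall = 2/3
```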
### Full Task
The full task combines Subtrack 1 (NER offset and entity type classification) and Subtrack 2 (concept indexing), both described above, over the same corpus.
## Citation Information
```
@inproceedings{gonzalez2019pharmaconer,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
}
```
|
shjwudp/chinese-c4 | ---
license: cc-by-4.0
language:
- zh
---
## Introduction
Chinese-C4 is a clean Chinese internet dataset based on Common Crawl. The dataset is 46.29 GB and has undergone multiple cleaning steps, including Chinese-language filtering, heuristic cleaning based on punctuation, line-based hashing for deduplication, and repetition removal.
The dataset is open source and free for commercial use; you are welcome to use both the data and the provided cleaning strategies, and to contribute your own cleaning strategies.
You can find the cleaning script for the dataset on GitHub [c4-dataset-script](https://github.com/shjwudp/c4-dataset-script).
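As a rough illustration of the line-based hashing step (a simplified sketch; the actual implementation lives in the linked repository), duplicate lines across documents can be dropped by hashing each line and keeping only its first occurrence:

```python
import hashlib

def dedup_lines(documents):
    """Drop lines whose hash has already been seen across all documents --
    a simplified version of line-based hashing for deduplication."""
    seen = set()
    cleaned = []
    for doc in documents:
        kept = []
        for line in doc.splitlines():
            h = hashlib.md5(line.strip().encode("utf-8")).hexdigest()
            if h not in seen:
                seen.add(h)
                kept.append(line)
        cleaned.append("\n".join(kept))
    return cleaned

docs = ["hello\nshared boilerplate", "shared boilerplate\nunique text"]
print(dedup_lines(docs))  # ['hello\nshared boilerplate', 'unique text']
```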
|
open-llm-leaderboard/details_automerger__ShadowYamshadow-7B | ---
pretty_name: Evaluation run of automerger/ShadowYamshadow-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [automerger/ShadowYamshadow-7B](https://huggingface.co/automerger/ShadowYamshadow-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_automerger__ShadowYamshadow-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-02T17:45:27.844917](https://huggingface.co/datasets/open-llm-leaderboard/details_automerger__ShadowYamshadow-7B/blob/main/results_2024-04-02T17-45-27.844917.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6528419717947812,\n\
\ \"acc_stderr\": 0.03207705788666756,\n \"acc_norm\": 0.6519732021923718,\n\
\ \"acc_norm_stderr\": 0.03275294312307306,\n \"mc1\": 0.6242350061199511,\n\
\ \"mc1_stderr\": 0.016954584060214287,\n \"mc2\": 0.781007042164165,\n\
\ \"mc2_stderr\": 0.013591948129151305\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7150170648464164,\n \"acc_stderr\": 0.013191348179838793,\n\
\ \"acc_norm\": 0.726962457337884,\n \"acc_norm_stderr\": 0.013019332762635751\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7105158334993029,\n\
\ \"acc_stderr\": 0.004525960965551707,\n \"acc_norm\": 0.8898625771758614,\n\
\ \"acc_norm_stderr\": 0.00312421161719886\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n\
\ \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726367,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726367\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.0325005368436584,\n\
\ \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.0325005368436584\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42592592592592593,\n \"acc_stderr\": 0.025467149045469543,\n \"\
acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.025467149045469543\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n\
\ \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n\
\ \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7741935483870968,\n\
\ \"acc_stderr\": 0.023785577884181015,\n \"acc_norm\": 0.7741935483870968,\n\
\ \"acc_norm_stderr\": 0.023785577884181015\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n\
\ \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\
: 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n\
\ \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028593,\n \
\ \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028593\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3841059602649007,\n \"acc_stderr\": 0.03971301814719197,\n \"\
acc_norm\": 0.3841059602649007,\n \"acc_norm_stderr\": 0.03971301814719197\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8422018348623853,\n \"acc_stderr\": 0.015630022970092444,\n \"\
acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.015630022970092444\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49537037037037035,\n \"acc_stderr\": 0.03409825519163572,\n \"\
acc_norm\": 0.49537037037037035,\n \"acc_norm_stderr\": 0.03409825519163572\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8431372549019608,\n \"acc_stderr\": 0.02552472232455335,\n \"\
acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.02552472232455335\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n\
\ \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.021586494001281365,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.021586494001281365\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8263090676883781,\n\
\ \"acc_stderr\": 0.013547415658662264,\n \"acc_norm\": 0.8263090676883781,\n\
\ \"acc_norm_stderr\": 0.013547415658662264\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.02402774515526502,\n\
\ \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.02402774515526502\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4491620111731844,\n\
\ \"acc_stderr\": 0.016635838341631924,\n \"acc_norm\": 0.4491620111731844,\n\
\ \"acc_norm_stderr\": 0.016635838341631924\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.02582916327275748,\n\
\ \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.02582916327275748\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n\
\ \"acc_stderr\": 0.02608270069539966,\n \"acc_norm\": 0.6977491961414791,\n\
\ \"acc_norm_stderr\": 0.02608270069539966\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.024922001168886335,\n\
\ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.024922001168886335\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47783572359843546,\n\
\ \"acc_stderr\": 0.012757683047716175,\n \"acc_norm\": 0.47783572359843546,\n\
\ \"acc_norm_stderr\": 0.012757683047716175\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.02806499816704009,\n\
\ \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.02806499816704009\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6813725490196079,\n \"acc_stderr\": 0.01885008469646872,\n \
\ \"acc_norm\": 0.6813725490196079,\n \"acc_norm_stderr\": 0.01885008469646872\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6242350061199511,\n\
\ \"mc1_stderr\": 0.016954584060214287,\n \"mc2\": 0.781007042164165,\n\
\ \"mc2_stderr\": 0.013591948129151305\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8484609313338595,\n \"acc_stderr\": 0.010077698907571776\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6997725549658832,\n \
\ \"acc_stderr\": 0.01262542315228303\n }\n}\n```"
repo_url: https://huggingface.co/automerger/ShadowYamshadow-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|arc:challenge|25_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|gsm8k|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hellaswag|10_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-02T17-45-27.844917.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-02T17-45-27.844917.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- '**/details_harness|winogrande|5_2024-04-02T17-45-27.844917.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-02T17-45-27.844917.parquet'
- config_name: results
data_files:
- split: 2024_04_02T17_45_27.844917
path:
- results_2024-04-02T17-45-27.844917.parquet
- split: latest
path:
- results_2024-04-02T17-45-27.844917.parquet
---
# Dataset Card for Evaluation run of automerger/ShadowYamshadow-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [automerger/ShadowYamshadow-7B](https://huggingface.co/automerger/ShadowYamshadow-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
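As a sketch of how the timestamped split names above relate to the run timestamps (the helper name here is ours, not part of this card or the `datasets` library), a run's ISO timestamp becomes a valid split name by replacing the `-` and `:` separators with underscores:

```python
def run_timestamp_to_split(timestamp: str) -> str:
    """Map a run timestamp like '2024-04-02T17:45:27.844917' to its split name.

    Split names cannot contain '-' or ':', so both are replaced with '_'.
    """
    return timestamp.replace("-", "_").replace(":", "_")


print(run_timestamp_to_split("2024-04-02T17:45:27.844917"))
# 2024_04_02T17_45_27.844917
```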
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_automerger__ShadowYamshadow-7B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-04-02T17:45:27.844917](https://huggingface.co/datasets/open-llm-leaderboard/details_automerger__ShadowYamshadow-7B/blob/main/results_2024-04-02T17-45-27.844917.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in its timestamped split and in the "latest" split of the corresponding configuration):
```python
{
"all": {
"acc": 0.6528419717947812,
"acc_stderr": 0.03207705788666756,
"acc_norm": 0.6519732021923718,
"acc_norm_stderr": 0.03275294312307306,
"mc1": 0.6242350061199511,
"mc1_stderr": 0.016954584060214287,
"mc2": 0.781007042164165,
"mc2_stderr": 0.013591948129151305
},
"harness|arc:challenge|25": {
"acc": 0.7150170648464164,
"acc_stderr": 0.013191348179838793,
"acc_norm": 0.726962457337884,
"acc_norm_stderr": 0.013019332762635751
},
"harness|hellaswag|10": {
"acc": 0.7105158334993029,
"acc_stderr": 0.004525960965551707,
"acc_norm": 0.8898625771758614,
"acc_norm_stderr": 0.00312421161719886
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726367,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726367
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.0325005368436584,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.0325005368436584
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.025467149045469543,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.025467149045469543
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181015,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181015
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028593,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3841059602649007,
"acc_stderr": 0.03971301814719197,
"acc_norm": 0.3841059602649007,
"acc_norm_stderr": 0.03971301814719197
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8422018348623853,
"acc_stderr": 0.015630022970092444,
"acc_norm": 0.8422018348623853,
"acc_norm_stderr": 0.015630022970092444
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49537037037037035,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.49537037037037035,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.02552472232455335,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.02552472232455335
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281365,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281365
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8263090676883781,
"acc_stderr": 0.013547415658662264,
"acc_norm": 0.8263090676883781,
"acc_norm_stderr": 0.013547415658662264
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.02402774515526502,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.02402774515526502
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4491620111731844,
"acc_stderr": 0.016635838341631924,
"acc_norm": 0.4491620111731844,
"acc_norm_stderr": 0.016635838341631924
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.02582916327275748,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.02582916327275748
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.02608270069539966,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.02608270069539966
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.024922001168886335,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.024922001168886335
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.49645390070921985,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.49645390070921985,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47783572359843546,
"acc_stderr": 0.012757683047716175,
"acc_norm": 0.47783572359843546,
"acc_norm_stderr": 0.012757683047716175
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.02806499816704009,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.02806499816704009
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6813725490196079,
"acc_stderr": 0.01885008469646872,
"acc_norm": 0.6813725490196079,
"acc_norm_stderr": 0.01885008469646872
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6242350061199511,
"mc1_stderr": 0.016954584060214287,
"mc2": 0.781007042164165,
"mc2_stderr": 0.013591948129151305
},
"harness|winogrande|5": {
"acc": 0.8484609313338595,
"acc_stderr": 0.010077698907571776
},
"harness|gsm8k|5": {
"acc": 0.6997725549658832,
"acc_stderr": 0.01262542315228303
}
}
```
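The `acc_stderr` values above pair each accuracy with its sampling uncertainty. As a rough sketch (assuming, as in lm-eval-harness, that the standard error is the sample standard error of the mean for a Bernoulli outcome, `sqrt(p * (1 - p) / (n - 1))`), the reported numbers can be reproduced from the accuracy and the task's test-set size:

```python
import math

# The reported acc_stderr is (under the stated assumption) the sample standard
# error of the mean for n binary outcomes: sqrt(p * (1 - p) / (n - 1)).
# Sanity check against harness|hendrycksTest-high_school_biology|5, whose MMLU
# test split has n = 310 questions:
acc = 0.7741935483870968          # = 240 / 310 correct answers
n = 310
stderr = math.sqrt(acc * (1 - acc) / (n - 1))
print(stderr)                     # ~0.0237856, matching the reported acc_stderr
```

The same check applies to any of the per-task entries above, given that task's test-set size.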
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
gagan3012/CS | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: answer
dtype: string
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: test
num_bytes: 396825
num_examples: 240
download_size: 126940
dataset_size: 396825
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-103000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 656455
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-LoRa | ---
pretty_name: Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-LoRa
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Severian/ANIMA-Phi-Neptune-Mistral-LoRa](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-LoRa)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-LoRa\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-10-10T15:49:43.201517](https://huggingface.co/datasets/open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-LoRa/blob/main/results_2023-10-10T15-49-43.201517.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5216421138363554,\n\
\ \"acc_stderr\": 0.034984720641748575,\n \"acc_norm\": 0.5252295168080844,\n\
\ \"acc_norm_stderr\": 0.03497398634134131,\n \"mc1\": 0.412484700122399,\n\
\ \"mc1_stderr\": 0.017233299399571227,\n \"mc2\": 0.5938354841447588,\n\
\ \"mc2_stderr\": 0.015090386269121684\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.014611390804670088,\n \
\ \"acc_norm\": 0.5307167235494881,\n \"acc_norm_stderr\": 0.014583792546304037\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5656243776140211,\n\
\ \"acc_stderr\": 0.004946617138983521,\n \"acc_norm\": 0.7465644293965346,\n\
\ \"acc_norm_stderr\": 0.004340891673320502\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4074074074074074,\n\
\ \"acc_stderr\": 0.042446332383532286,\n \"acc_norm\": 0.4074074074074074,\n\
\ \"acc_norm_stderr\": 0.042446332383532286\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.4868421052631579,\n \"acc_stderr\": 0.04067533136309174,\n\
\ \"acc_norm\": 0.4868421052631579,\n \"acc_norm_stderr\": 0.04067533136309174\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n\
\ \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5849056603773585,\n \"acc_stderr\": 0.03032594578928611,\n\
\ \"acc_norm\": 0.5849056603773585,\n \"acc_norm_stderr\": 0.03032594578928611\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5277777777777778,\n\
\ \"acc_stderr\": 0.04174752578923185,\n \"acc_norm\": 0.5277777777777778,\n\
\ \"acc_norm_stderr\": 0.04174752578923185\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.45,\n\
\ \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.042295258468165065,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.042295258468165065\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5375722543352601,\n\
\ \"acc_stderr\": 0.0380168510452446,\n \"acc_norm\": 0.5375722543352601,\n\
\ \"acc_norm_stderr\": 0.0380168510452446\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929777,\n\
\ \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929777\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4553191489361702,\n \"acc_stderr\": 0.03255525359340355,\n\
\ \"acc_norm\": 0.4553191489361702,\n \"acc_norm_stderr\": 0.03255525359340355\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n\
\ \"acc_stderr\": 0.04514496132873633,\n \"acc_norm\": 0.35964912280701755,\n\
\ \"acc_norm_stderr\": 0.04514496132873633\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482758,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482758\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.35714285714285715,\n \"acc_stderr\": 0.024677862841332783,\n \"\
acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.024677862841332783\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n\
\ \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n\
\ \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5838709677419355,\n\
\ \"acc_stderr\": 0.028040981380761547,\n \"acc_norm\": 0.5838709677419355,\n\
\ \"acc_norm_stderr\": 0.028040981380761547\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.3448275862068966,\n \"acc_stderr\": 0.033442837442804574,\n\
\ \"acc_norm\": 0.3448275862068966,\n \"acc_norm_stderr\": 0.033442837442804574\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\"\
: 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.036639749943912434,\n\
\ \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.036639749943912434\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6212121212121212,\n \"acc_stderr\": 0.03456088731993747,\n \"\
acc_norm\": 0.6212121212121212,\n \"acc_norm_stderr\": 0.03456088731993747\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7202072538860104,\n \"acc_stderr\": 0.03239637046735704,\n\
\ \"acc_norm\": 0.7202072538860104,\n \"acc_norm_stderr\": 0.03239637046735704\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4641025641025641,\n \"acc_stderr\": 0.02528558599001784,\n \
\ \"acc_norm\": 0.4641025641025641,\n \"acc_norm_stderr\": 0.02528558599001784\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2740740740740741,\n \"acc_stderr\": 0.027195934804085626,\n \
\ \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.027195934804085626\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.47058823529411764,\n \"acc_stderr\": 0.03242225027115007,\n\
\ \"acc_norm\": 0.47058823529411764,\n \"acc_norm_stderr\": 0.03242225027115007\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"\
acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.710091743119266,\n \"acc_stderr\": 0.0194530666092016,\n \"acc_norm\"\
: 0.710091743119266,\n \"acc_norm_stderr\": 0.0194530666092016\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.37962962962962965,\n\
\ \"acc_stderr\": 0.03309682581119035,\n \"acc_norm\": 0.37962962962962965,\n\
\ \"acc_norm_stderr\": 0.03309682581119035\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03308611113236435,\n\
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03308611113236435\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6751054852320675,\n \"acc_stderr\": 0.030486039389105307,\n \
\ \"acc_norm\": 0.6751054852320675,\n \"acc_norm_stderr\": 0.030486039389105307\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6053811659192825,\n\
\ \"acc_stderr\": 0.03280400504755291,\n \"acc_norm\": 0.6053811659192825,\n\
\ \"acc_norm_stderr\": 0.03280400504755291\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5877862595419847,\n \"acc_stderr\": 0.04317171194870255,\n\
\ \"acc_norm\": 0.5877862595419847,\n \"acc_norm_stderr\": 0.04317171194870255\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6198347107438017,\n \"acc_stderr\": 0.04431324501968432,\n \"\
acc_norm\": 0.6198347107438017,\n \"acc_norm_stderr\": 0.04431324501968432\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6388888888888888,\n\
\ \"acc_stderr\": 0.04643454608906275,\n \"acc_norm\": 0.6388888888888888,\n\
\ \"acc_norm_stderr\": 0.04643454608906275\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6134969325153374,\n \"acc_stderr\": 0.03825825548848607,\n\
\ \"acc_norm\": 0.6134969325153374,\n \"acc_norm_stderr\": 0.03825825548848607\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6796116504854369,\n \"acc_stderr\": 0.04620284082280041,\n\
\ \"acc_norm\": 0.6796116504854369,\n \"acc_norm_stderr\": 0.04620284082280041\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.811965811965812,\n\
\ \"acc_stderr\": 0.02559819368665226,\n \"acc_norm\": 0.811965811965812,\n\
\ \"acc_norm_stderr\": 0.02559819368665226\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \
\ \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7075351213282248,\n\
\ \"acc_stderr\": 0.016267000684598642,\n \"acc_norm\": 0.7075351213282248,\n\
\ \"acc_norm_stderr\": 0.016267000684598642\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5404624277456648,\n \"acc_stderr\": 0.026830805998952236,\n\
\ \"acc_norm\": 0.5404624277456648,\n \"acc_norm_stderr\": 0.026830805998952236\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2759776536312849,\n\
\ \"acc_stderr\": 0.014950103002475349,\n \"acc_norm\": 0.2759776536312849,\n\
\ \"acc_norm_stderr\": 0.014950103002475349\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5620915032679739,\n \"acc_stderr\": 0.028408302020332694,\n\
\ \"acc_norm\": 0.5620915032679739,\n \"acc_norm_stderr\": 0.028408302020332694\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5980707395498392,\n\
\ \"acc_stderr\": 0.027846476005930473,\n \"acc_norm\": 0.5980707395498392,\n\
\ \"acc_norm_stderr\": 0.027846476005930473\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6172839506172839,\n \"acc_stderr\": 0.027044538138402616,\n\
\ \"acc_norm\": 0.6172839506172839,\n \"acc_norm_stderr\": 0.027044538138402616\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.3617021276595745,\n \"acc_stderr\": 0.028663820147199492,\n \
\ \"acc_norm\": 0.3617021276595745,\n \"acc_norm_stderr\": 0.028663820147199492\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.38852672750977835,\n\
\ \"acc_stderr\": 0.012448817838292355,\n \"acc_norm\": 0.38852672750977835,\n\
\ \"acc_norm_stderr\": 0.012448817838292355\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.47794117647058826,\n \"acc_stderr\": 0.030343264224213542,\n\
\ \"acc_norm\": 0.47794117647058826,\n \"acc_norm_stderr\": 0.030343264224213542\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.511437908496732,\n \"acc_stderr\": 0.020222541515610863,\n \
\ \"acc_norm\": 0.511437908496732,\n \"acc_norm_stderr\": 0.020222541515610863\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n\
\ \"acc_stderr\": 0.04653429807913507,\n \"acc_norm\": 0.6181818181818182,\n\
\ \"acc_norm_stderr\": 0.04653429807913507\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6204081632653061,\n \"acc_stderr\": 0.03106721126287248,\n\
\ \"acc_norm\": 0.6204081632653061,\n \"acc_norm_stderr\": 0.03106721126287248\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7014925373134329,\n\
\ \"acc_stderr\": 0.03235743789355044,\n \"acc_norm\": 0.7014925373134329,\n\
\ \"acc_norm_stderr\": 0.03235743789355044\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.43373493975903615,\n\
\ \"acc_stderr\": 0.03858158940685517,\n \"acc_norm\": 0.43373493975903615,\n\
\ \"acc_norm_stderr\": 0.03858158940685517\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7017543859649122,\n \"acc_stderr\": 0.03508771929824563,\n\
\ \"acc_norm\": 0.7017543859649122,\n \"acc_norm_stderr\": 0.03508771929824563\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.412484700122399,\n\
\ \"mc1_stderr\": 0.017233299399571227,\n \"mc2\": 0.5938354841447588,\n\
\ \"mc2_stderr\": 0.015090386269121684\n }\n}\n```"
repo_url: https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-LoRa
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|arc:challenge|25_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hellaswag|10_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T15-49-43.201517.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T15-49-43.201517.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T15-49-43.201517.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T15-49-43.201517.parquet'
- config_name: results
data_files:
- split: 2023_10_10T15_49_43.201517
path:
- results_2023-10-10T15-49-43.201517.parquet
- split: latest
path:
- results_2023-10-10T15-49-43.201517.parquet
---
# Dataset Card for Evaluation run of Severian/ANIMA-Phi-Neptune-Mistral-LoRa
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-LoRa
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Severian/ANIMA-Phi-Neptune-Mistral-LoRa](https://huggingface.co/Severian/ANIMA-Phi-Neptune-Mistral-LoRa) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-LoRa",
"harness_truthfulqa_mc_0",
	split="latest")
```
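The timestamped split names above are simply the run timestamp with the characters that are invalid in split names replaced. A minimal sketch of that naming convention (the helper `run_split_name` is an illustration, not part of the `datasets` API):

```python
def run_split_name(timestamp: str) -> str:
    """Derive the split name used in this dataset from an ISO run timestamp.

    e.g. "2023-10-10T15:49:43.201517" -> "2023_10_10T15_49_43.201517"
    """
    date, time = timestamp.split("T")
    # Dashes in the date and colons in the time become underscores.
    return date.replace("-", "_") + "T" + time.replace(":", "_")

print(run_split_name("2023-10-10T15:49:43.201517"))
```

This lets you map a results file timestamp back to the split holding that run's details.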
## Latest results
These are the [latest results from run 2023-10-10T15:49:43.201517](https://huggingface.co/datasets/open-llm-leaderboard/details_Severian__ANIMA-Phi-Neptune-Mistral-LoRa/blob/main/results_2023-10-10T15-49-43.201517.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in its timestamped and "latest" splits, and the aggregated numbers in the "results" configuration):
```python
{
"all": {
"acc": 0.5216421138363554,
"acc_stderr": 0.034984720641748575,
"acc_norm": 0.5252295168080844,
"acc_norm_stderr": 0.03497398634134131,
"mc1": 0.412484700122399,
"mc1_stderr": 0.017233299399571227,
"mc2": 0.5938354841447588,
"mc2_stderr": 0.015090386269121684
},
"harness|arc:challenge|25": {
"acc": 0.5,
"acc_stderr": 0.014611390804670088,
"acc_norm": 0.5307167235494881,
"acc_norm_stderr": 0.014583792546304037
},
"harness|hellaswag|10": {
"acc": 0.5656243776140211,
"acc_stderr": 0.004946617138983521,
"acc_norm": 0.7465644293965346,
"acc_norm_stderr": 0.004340891673320502
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.042446332383532286,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.042446332383532286
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4868421052631579,
"acc_stderr": 0.04067533136309174,
"acc_norm": 0.4868421052631579,
"acc_norm_stderr": 0.04067533136309174
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5849056603773585,
"acc_stderr": 0.03032594578928611,
"acc_norm": 0.5849056603773585,
"acc_norm_stderr": 0.03032594578928611
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.04174752578923185,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.04174752578923185
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.23,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.23,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5375722543352601,
"acc_stderr": 0.0380168510452446,
"acc_norm": 0.5375722543352601,
"acc_norm_stderr": 0.0380168510452446
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.04533838195929777,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.04533838195929777
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4553191489361702,
"acc_stderr": 0.03255525359340355,
"acc_norm": 0.4553191489361702,
"acc_norm_stderr": 0.03255525359340355
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.35964912280701755,
"acc_stderr": 0.04514496132873633,
"acc_norm": 0.35964912280701755,
"acc_norm_stderr": 0.04514496132873633
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482758,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482758
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.024677862841332783,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.024677862841332783
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5838709677419355,
"acc_stderr": 0.028040981380761547,
"acc_norm": 0.5838709677419355,
"acc_norm_stderr": 0.028040981380761547
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3448275862068966,
"acc_stderr": 0.033442837442804574,
"acc_norm": 0.3448275862068966,
"acc_norm_stderr": 0.033442837442804574
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.036639749943912434,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.036639749943912434
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6212121212121212,
"acc_stderr": 0.03456088731993747,
"acc_norm": 0.6212121212121212,
"acc_norm_stderr": 0.03456088731993747
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7202072538860104,
"acc_stderr": 0.03239637046735704,
"acc_norm": 0.7202072538860104,
"acc_norm_stderr": 0.03239637046735704
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4641025641025641,
"acc_stderr": 0.02528558599001784,
"acc_norm": 0.4641025641025641,
"acc_norm_stderr": 0.02528558599001784
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.027195934804085626,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.027195934804085626
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.47058823529411764,
"acc_stderr": 0.03242225027115007,
"acc_norm": 0.47058823529411764,
"acc_norm_stderr": 0.03242225027115007
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.710091743119266,
"acc_stderr": 0.0194530666092016,
"acc_norm": 0.710091743119266,
"acc_norm_stderr": 0.0194530666092016
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.37962962962962965,
"acc_stderr": 0.03309682581119035,
"acc_norm": 0.37962962962962965,
"acc_norm_stderr": 0.03309682581119035
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.03308611113236435,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.03308611113236435
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6751054852320675,
"acc_stderr": 0.030486039389105307,
"acc_norm": 0.6751054852320675,
"acc_norm_stderr": 0.030486039389105307
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6053811659192825,
"acc_stderr": 0.03280400504755291,
"acc_norm": 0.6053811659192825,
"acc_norm_stderr": 0.03280400504755291
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5877862595419847,
"acc_stderr": 0.04317171194870255,
"acc_norm": 0.5877862595419847,
"acc_norm_stderr": 0.04317171194870255
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6198347107438017,
"acc_stderr": 0.04431324501968432,
"acc_norm": 0.6198347107438017,
"acc_norm_stderr": 0.04431324501968432
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.04643454608906275,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.04643454608906275
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6134969325153374,
"acc_stderr": 0.03825825548848607,
"acc_norm": 0.6134969325153374,
"acc_norm_stderr": 0.03825825548848607
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.6796116504854369,
"acc_stderr": 0.04620284082280041,
"acc_norm": 0.6796116504854369,
"acc_norm_stderr": 0.04620284082280041
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.811965811965812,
"acc_stderr": 0.02559819368665226,
"acc_norm": 0.811965811965812,
"acc_norm_stderr": 0.02559819368665226
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7075351213282248,
"acc_stderr": 0.016267000684598642,
"acc_norm": 0.7075351213282248,
"acc_norm_stderr": 0.016267000684598642
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5404624277456648,
"acc_stderr": 0.026830805998952236,
"acc_norm": 0.5404624277456648,
"acc_norm_stderr": 0.026830805998952236
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2759776536312849,
"acc_stderr": 0.014950103002475349,
"acc_norm": 0.2759776536312849,
"acc_norm_stderr": 0.014950103002475349
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5620915032679739,
"acc_stderr": 0.028408302020332694,
"acc_norm": 0.5620915032679739,
"acc_norm_stderr": 0.028408302020332694
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5980707395498392,
"acc_stderr": 0.027846476005930473,
"acc_norm": 0.5980707395498392,
"acc_norm_stderr": 0.027846476005930473
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6172839506172839,
"acc_stderr": 0.027044538138402616,
"acc_norm": 0.6172839506172839,
"acc_norm_stderr": 0.027044538138402616
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3617021276595745,
"acc_stderr": 0.028663820147199492,
"acc_norm": 0.3617021276595745,
"acc_norm_stderr": 0.028663820147199492
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.38852672750977835,
"acc_stderr": 0.012448817838292355,
"acc_norm": 0.38852672750977835,
"acc_norm_stderr": 0.012448817838292355
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.47794117647058826,
"acc_stderr": 0.030343264224213542,
"acc_norm": 0.47794117647058826,
"acc_norm_stderr": 0.030343264224213542
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.511437908496732,
"acc_stderr": 0.020222541515610863,
"acc_norm": 0.511437908496732,
"acc_norm_stderr": 0.020222541515610863
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.04653429807913507,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.04653429807913507
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6204081632653061,
"acc_stderr": 0.03106721126287248,
"acc_norm": 0.6204081632653061,
"acc_norm_stderr": 0.03106721126287248
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7014925373134329,
"acc_stderr": 0.03235743789355044,
"acc_norm": 0.7014925373134329,
"acc_norm_stderr": 0.03235743789355044
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-virology|5": {
"acc": 0.43373493975903615,
"acc_stderr": 0.03858158940685517,
"acc_norm": 0.43373493975903615,
"acc_norm_stderr": 0.03858158940685517
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7017543859649122,
"acc_stderr": 0.03508771929824563,
"acc_norm": 0.7017543859649122,
"acc_norm_stderr": 0.03508771929824563
},
"harness|truthfulqa:mc|0": {
"mc1": 0.412484700122399,
"mc1_stderr": 0.017233299399571227,
"mc2": 0.5938354841447588,
"mc2_stderr": 0.015090386269121684
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/musashi_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of musashi/武蔵/武藏 (Azur Lane)
This is the dataset of musashi/武蔵/武藏 (Azur Lane), containing 402 images and their tags.
The core tags of this character are `animal_ears, long_hair, breasts, fox_ears, animal_ear_fluff, black_hair, facial_mark, large_breasts, yellow_eyes, bangs, fox_girl, very_long_hair, hair_ornament, ahoge, tail, fox_tail, huge_breasts, purple_hair`, which are pruned in this dataset.
Images were crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 402 | 849.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/musashi_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 402 | 381.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/musashi_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1030 | 827.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/musashi_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 402 | 702.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/musashi_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1030 | 1.30 GiB | [Download](https://huggingface.co/datasets/CyberHarem/musashi_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
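The IMG+TXT packages are flat archives of image files with same-stem `.txt` tag files (an assumed layout inferred from the package type; check the extracted archive). A minimal sketch that pairs images with their tag files after extraction:

```python
import os

def pair_img_txt(filenames):
    """Pair each image with its same-stem .txt tag file.

    Assumes the IMG+TXT layout: every image carries a tag file
    sharing its base name (e.g. 0001.png + 0001.txt).
    """
    by_stem = {}
    for name in filenames:
        stem, ext = os.path.splitext(name)
        by_stem.setdefault(stem, {})[ext.lower()] = name
    pairs = []
    for stem in sorted(by_stem):
        exts = by_stem[stem]
        images = [exts[e] for e in ('.png', '.jpg', '.jpeg', '.webp') if e in exts]
        if images and '.txt' in exts:
            pairs.append((images[0], exts['.txt']))
    return pairs

print(pair_img_txt(['0001.png', '0001.txt', '0002.jpg', '0002.txt']))
# [('0001.png', '0001.txt'), ('0002.jpg', '0002.txt')]
```

Images without a matching tag file (and vice versa) are skipped, so the function degrades gracefully on partial extractions.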
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/musashi_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 38 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, magatama_necklace, solo, looking_at_viewer, smile, upper_body, fur-trimmed_kimono, collarbone, simple_background, white_background, parted_lips, brown_eyes |
| 1 | 5 |  |  |  |  |  | 1girl, cleavage, holding_sword, looking_at_viewer, magatama_necklace, solo, bare_shoulders, electricity, lightning, brown_eyes, fur-trimmed_kimono, smile, artist_name, katana, parted_lips |
| 2 | 14 |  |  |  |  |  | 1girl, bare_shoulders, black_kimono, cleavage, hair_flower, looking_at_viewer, maid_headdress, official_alternate_costume, solo, frilled_hairband, purple_flower, black_choker, black_bowtie, brown_eyes, folding_fan, frilled_apron, holding_fan, off_shoulder, wa_maid, purple_nails, white_apron, white_gloves, wide_sleeves, maid_apron, parted_lips, smile, nail_polish, sash, simple_background, waist_apron, white_background, wrist_cuffs, blunt_bangs, blush, closed_mouth, colored_inner_hair, single_fingerless_glove, upper_body |
| 3 | 5 |  |  |  |  |  | 1girl, completely_nude, looking_at_viewer, nipples, solo, navel, simple_background, multiple_tails, pussy, whisker_markings, white_background, full_body, kitsune, open_mouth, tongue_out |
| 4 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, solo, smile, blush, collarbone, completely_nude, upper_body, inverted_nipples, mole_on_breast, multiple_tails, whisker_markings |
| 5 | 12 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, solo, bare_shoulders, blush, detached_collar, playboy_bunny, strapless_leotard, holding_tray, thighs, bow, cowboy_shot, fake_animal_ears, rabbit_ears, white_gloves, fishnet_pantyhose, highleg_leotard, parted_lips, ponytail, simple_background, drinking_glass, heart, magatama, official_alternate_costume, one_eye_closed, smile, white_background |
| 6 | 5 |  |  |  |  |  | 1girl, black_gloves, looking_at_viewer, see-through, solo, thighhighs, whisker_markings, cleavage_cutout, elbow_gloves, folding_fan, holding_fan, short_sleeves, sitting, braid, china_dress, gold_trim, parted_lips, pelvic_curtain, smile, thighs, bracelet, groin, highleg, knee_up, panties, purple_dress, purple_gloves, twintails |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | cleavage | magatama_necklace | solo | looking_at_viewer | smile | upper_body | fur-trimmed_kimono | collarbone | simple_background | white_background | parted_lips | brown_eyes | holding_sword | electricity | lightning | artist_name | katana | black_kimono | hair_flower | maid_headdress | official_alternate_costume | frilled_hairband | purple_flower | black_choker | black_bowtie | folding_fan | frilled_apron | holding_fan | off_shoulder | wa_maid | purple_nails | white_apron | white_gloves | wide_sleeves | maid_apron | nail_polish | sash | waist_apron | wrist_cuffs | blunt_bangs | blush | closed_mouth | colored_inner_hair | single_fingerless_glove | completely_nude | nipples | navel | multiple_tails | pussy | whisker_markings | full_body | kitsune | open_mouth | tongue_out | inverted_nipples | mole_on_breast | detached_collar | playboy_bunny | strapless_leotard | holding_tray | thighs | bow | cowboy_shot | fake_animal_ears | rabbit_ears | fishnet_pantyhose | highleg_leotard | ponytail | drinking_glass | heart | magatama | one_eye_closed | black_gloves | see-through | thighhighs | cleavage_cutout | elbow_gloves | short_sleeves | sitting | braid | china_dress | gold_trim | pelvic_curtain | bracelet | groin | highleg | knee_up | panties | purple_dress | purple_gloves | twintails |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-----------|:--------------------|:-------|:--------------------|:--------|:-------------|:---------------------|:-------------|:--------------------|:-------------------|:--------------|:-------------|:----------------|:--------------|:------------|:--------------|:---------|:---------------|:--------------|:-----------------|:-----------------------------|:-------------------|:----------------|:---------------|:---------------|:--------------|:----------------|:--------------|:---------------|:----------|:---------------|:--------------|:---------------|:---------------|:-------------|:--------------|:-------|:--------------|:--------------|:--------------|:--------|:---------------|:---------------------|:--------------------------|:------------------|:----------|:--------|:-----------------|:--------|:-------------------|:------------|:----------|:-------------|:-------------|:-------------------|:-----------------|:------------------|:----------------|:--------------------|:---------------|:---------|:------|:--------------|:-------------------|:--------------|:--------------------|:------------------|:-----------|:-----------------|:--------|:-----------|:-----------------|:---------------|:--------------|:-------------|:------------------|:---------------|:----------------|:----------|:--------|:--------------|:------------|:-----------------|:-----------|:--------|:----------|:----------|:----------|:---------------|:----------------|:------------|
| 0 | 38 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | X | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 14 |  |  |  |  |  | X | X | X | | X | X | X | X | | | X | X | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | | | X | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | | | X | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 12 |  |  |  |  |  | X | X | X | | X | X | X | | | | X | X | X | | | | | | | | | | X | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | X | X | X | | | | | | X | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
agestau/preproc-fashion-products | ---
dataset_info:
features:
- name: subCategory
dtype: string
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 464236487.875
num_examples: 36145
download_size: 223972645
dataset_size: 464236487.875
---
# Dataset Card for "preproc-fashion-products"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NimaBoscarino/test-glue | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: unacceptable
1: acceptable
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 5145
num_examples: 100
download_size: 4268
dataset_size: 5145
---
# Dataset Card for "test-glue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alvations/dslml24-jelly-submission-pt | ---
dataset_info:
- config_name: dev
features:
- name: text
dtype: string
- name: label
dtype: string
- name: prediction_oneshot
dtype: string
- name: response_oneshot
list:
- name: generated_text
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 1843139
num_examples: 991
download_size: 585205
dataset_size: 1843139
- config_name: test
features:
- name: text
dtype: string
- name: prediction_oneshot
dtype: string
- name: response_oneshot
list:
- name: generated_text
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 920259
num_examples: 495
download_size: 298742
dataset_size: 920259
- config_name: train
features:
- name: text
dtype: string
- name: label
dtype: string
- name: prediction_oneshot
dtype: string
- name: response_oneshot
list:
- name: generated_text
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 6439867
num_examples: 3467
download_size: 2040472
dataset_size: 6439867
configs:
- config_name: dev
data_files:
- split: train
path: dev/train-*
- config_name: test
data_files:
- split: train
path: test/train-*
- config_name: train
data_files:
- split: train
path: train/train-*
---
|
Lifan-Z/Chinese-poetries-txt | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- art
---
这个数据集是把《全唐诗》、《全宋诗》中所有的五绝、五律、七绝、七律都提取出来,做成四个文件。每行对应一首诗。
五绝(5x4): 17521 首
五律(5x8): 60896 首
七绝(7x4): 84485 首
七律(7x8): 71818 首
This dataset extracts four styles of poems from the "Complete Poems of the Tang Dynasty" and the "Complete Poems of the Song Dynasty."
Each line corresponds to one Chinese poem.
The 5x4 style: 17521 poems
The 5x8 style: 60896 poems
The 7x4 style: 84485 poems
The 7x8 style: 71818 poems
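Since each line of the four text files holds one poem, a poem can be split back into its verses by cutting on the Chinese punctuation that separates them (a minimal sketch, assuming the usual ,。?! delimiters):

```python
import re

def split_poem(line: str):
    """Split a one-line poem into its verses at Chinese punctuation."""
    return [v for v in re.split(r"[,。?!;]", line.strip()) if v]

poem = "春眠不觉晓,处处闻啼鸟。夜来风雨声,花落知多少。"
print(split_poem(poem))
# ['春眠不觉晓', '处处闻啼鸟', '夜来风雨声', '花落知多少']
```

A 5x4 poem should yield 4 verses of 5 characters, a 7x8 poem 8 verses of 7 characters, which makes this a quick sanity check on the files.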
The raw data comes from https://github.com/chinese-poetry/chinese-poetry/tree/master/%E5%85%A8%E5%94%90%E8%AF%97 |
jack008/SSRS | ---
license: apache-2.0
---
|
huggingartists/oasis | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/oasis"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.229778 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5b44be44ae2cc44dd08db4e8e07b18bb.800x800x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/oasis">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Oasis</div>
<a href="https://genius.com/artists/oasis">
<div style="text-align: center; font-size: 14px;">@oasis</div>
</a>
</div>
### Dataset Summary
This lyrics dataset was parsed from Genius and is designed for generating lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/oasis).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/oasis")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|192| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/oasis")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03  # remainder after the first two cut points
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)
datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)}),
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author = {Aleksey Korshuk},
  year = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
markbotterill/test_dataset | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2650754
num_examples: 29825
download_size: 1492606
dataset_size: 2650754
---
# Dataset Card for "test_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TPM-28/alpaca-cleaned-fr | ---
license: cc-by-4.0
language:
- fr
tags:
- instruction-finetuning
pretty_name: Alpaca-Cleaned-FR
task_categories:
- text-generation
---
|
autoevaluate/autoeval-eval-futin__feed-sen_en_-1de085-2240171543 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: futin/feed
dataset_config: sen_en_
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/feed
* Config: sen_en_
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
EMBO/biolang | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n>1M
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for BioLang
## Table of Contents
- [Dataset Card for BioLang](#dataset-card-for-biolang)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-roberta
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** thomas.lemberger@embo.org
- **Download Size:** 5,299,878,661 bytes (~4.9 GiB)
### Dataset Summary
BioLang is a dataset based on abstracts from the open access section of EuropePubMed Central, built to train language models in the domain of biology. The dataset can be used for random masked language modeling or for language modeling with masking restricted to specific parts of speech. More details on the generation and use of the dataset at https://github.com/source-data/soda-roberta .
### Supported Tasks and Leaderboards
- `MLM`: masked language modeling
- `DET`: part-of-speech masked language modeling, with determiners (`DET`) tagged
- `SMALL`: part-of-speech masked language modeling, with "small" words (`DET`, `CCONJ`, `SCONJ`, `ADP`, `PRON`) tagged
- `VERB`: part-of-speech masked language modeling, with verbs (`VERB`) tagged
### Languages
English
## Dataset Structure
### Data Instances
```json
{
"input_ids":[
0, 2444, 6997, 46162, 7744, 35, 20632, 20862, 3457, 36, 500, 23858, 29, 43, 32, 3919, 716, 15, 49, 4476, 4, 1398, 6, 52, 1118, 5, 20862, 819, 9, 430, 23305, 248, 23858, 29, 4, 256, 40086, 104, 35, 1927, 1069, 459, 1484, 58, 4776, 13, 23305, 634, 16706, 493, 2529, 8954, 14475, 73, 34263, 6, 4213, 718, 833, 12, 24291, 4473, 22500, 14475, 73, 510, 705, 73, 34263, 6, 5143, 4313, 2529, 8954, 14475, 73, 34263, 6, 8, 5143, 4313, 2529, 8954, 14475, 248, 23858, 29, 23, 4448, 225, 4722, 2392, 11, 9341, 261, 4, 49043, 35, 96, 746, 6, 5962, 9, 38415, 4776, 408, 36, 3897, 4, 398, 8871, 56, 23305, 4, 20, 15608, 21, 8061, 6164, 207, 13, 70, 248, 23858, 29, 6, 150, 5, 42561, 21, 8061, 5663, 207, 13, 80, 3457, 4, 509, 1296, 5129, 21567, 3457, 36, 398, 23528, 8748, 22065, 11654, 35, 7253, 15, 49, 4476, 6, 70, 3457, 4682, 65, 189, 28, 5131, 13, 23305, 9726, 4, 2
],
"label_ids": [
"X", "NOUN", "NOUN", "NOUN", "NOUN", "PUNCT", "ADJ", "ADJ", "NOUN", "PUNCT", "PROPN", "PROPN", "PROPN", "PUNCT", "AUX", "VERB", "VERB", "ADP", "DET", "NOUN", "PUNCT", "ADV", "PUNCT", "PRON", "VERB", "DET", "ADJ", "NOUN", "ADP", "ADJ", "NOUN", "NOUN", "NOUN", "NOUN", "PUNCT", "ADJ", "ADJ", "ADJ", "PUNCT", "NOUN", "NOUN", "NOUN", "NOUN", "AUX", "VERB", "ADP", "NOUN", "VERB", "PROPN", "PROPN", "PROPN", "PROPN", "PROPN", "SYM", "PROPN", "PUNCT", "PROPN", "PROPN", "PROPN", "PUNCT", "PROPN", "PROPN", "PROPN", "PROPN", "SYM", "PROPN", "PROPN", "SYM", "PROPN", "PUNCT", "PROPN", "PROPN", "PROPN", "PROPN", "PROPN", "SYM", "PROPN", "PUNCT", "CCONJ", "ADJ", "PROPN", "PROPN", "PROPN", "PROPN", "NOUN", "NOUN", "NOUN", "ADP", "PROPN", "PROPN", "PROPN", "PROPN", "ADP", "PROPN", "PROPN", "PUNCT", "PROPN", "PUNCT", "ADP", "NOUN", "PUNCT", "NUM", "ADP", "NUM", "VERB", "NOUN", "PUNCT", "NUM", "NUM", "NUM", "NOUN", "AUX", "NOUN", "PUNCT", "DET", "NOUN", "AUX", "X", "NUM", "NOUN", "ADP", "DET", "NOUN", "NOUN", "NOUN", "PUNCT", "SCONJ", "DET", "NOUN", "AUX", "X", "NUM", "NOUN", "ADP", "NUM", "NOUN", "PUNCT", "NUM", "NOUN", "VERB", "ADJ", "NOUN", "PUNCT", "NUM", "NOUN", "NOUN", "NOUN", "NOUN", "PUNCT", "VERB", "ADP", "DET", "NOUN", "PUNCT", "DET", "NOUN", "SCONJ", "PRON", "VERB", "AUX", "VERB", "ADP", "NOUN", "NOUN", "PUNCT", "X"
],
"special_tokens_mask": [
1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1
]
}
```
### Data Fields
`MLM`:
- `input_ids`: a `list` of `int32` features.
- `special_tokens_mask`: a `list` of `int8` features.
`DET`, `VERB`, `SMALL`:
- `input_ids`: a `list` of `int32` features.
- `tag_mask`: a `list` of `int8` features.
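For the `DET`, `VERB`, and `SMALL` configurations, `tag_mask` flags the token positions carrying the targeted part of speech. A minimal sketch of part-of-speech-guided masking built on it (the mask token id 50264 is roberta-base's `<mask>` and is an assumption, not stated by this card):

```python
def apply_tag_mask(input_ids, tag_mask, mask_token_id=50264):
    """Mask every position flagged in tag_mask (PoS-guided MLM).

    Unlike random masking, only tokens of the targeted part of
    speech (e.g. all verbs for the VERB config) get replaced.
    The default id 50264 is roberta-base's <mask> token.
    """
    return [mask_token_id if m else tok for tok, m in zip(input_ids, tag_mask)]

print(apply_tag_mask([10, 11, 12, 13], [0, 1, 0, 1]))
# [10, 50264, 12, 50264]
```

During training, the labels would typically keep the original ids at masked positions and ignore the rest (e.g. set them to -100), as in standard MLM collators.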
### Data Splits
- `train`:
- features: ['input_ids', 'special_tokens_mask'],
- num_rows: 12_005_390
- `test`:
- features: ['input_ids', 'special_tokens_mask'],
- num_rows: 37_112
- `validation`:
- features: ['input_ids', 'special_tokens_mask'],
- num_rows: 36_713
## Dataset Creation
### Curation Rationale
The dataset was assembled to train language models in the field of cell and molecular biology. To increase the dataset's size and include many examples of highly technical language, abstracts were complemented with figure legends (also called figure 'captions').
### Source Data
#### Initial Data Collection and Normalization
The XML content of papers was downloaded in January 2021 from the open access section of [EuropePMC](https://europepmc.org/downloads/openaccess). Figure legends and abstracts were extracted from the JATS XML, tokenized with the `roberta-base` tokenizer, and part-of-speech tagged with spaCy's `en_core_web_sm` model (https://spacy.io).
More details are available at https://github.com/source-data/soda-roberta.
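Since `roberta-base` produces subword tokens while spaCy tags whole words, word-level tags must be propagated to subword pieces. The exact alignment strategy is not documented here; the toy sketch below shows one common approach, with `split_into_subwords` standing in for the real tokenizer:

```python
# Toy illustration of aligning word-level POS tags to subword tokens.
# split_into_subwords is a stand-in for the roberta-base tokenizer, and
# the alignment strategy is an assumption, not the pipeline's actual code.

def split_into_subwords(word):
    # Stand-in for a real subword tokenizer: split every 4 characters.
    return [word[i:i + 4] for i in range(0, len(word), 4)] or [word]

def align_tags(words, pos_tags):
    """Propagate each word's POS tag to all of its subword pieces."""
    subwords, aligned = [], []
    for word, tag in zip(words, pos_tags):
        pieces = split_into_subwords(word)
        subwords.extend(pieces)
        aligned.extend([tag] * len(pieces))
    return subwords, aligned

subwords, tags = align_tags(["Phosphorylation", "of", "STAT3"],
                            ["NOUN", "ADP", "PROPN"])
# Each subword piece inherits the POS tag of the word it came from.
```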
#### Who are the source language producers?
Expert scientists.
### Annotations
#### Annotation process
Part-of-speech tags were assigned automatically.
#### Who are the annotators?
spaCy's `en_core_web_sm` model (https://spacy.io) was used for part-of-speech tagging.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Thomas Lemberger
### Licensing Information
CC-BY 4.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tlemberger](https://github.com/tlemberger) for adding this dataset.
|
Atul790/dress-lora6 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1784
num_examples: 19
download_size: 1558
dataset_size: 1784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_mncai__yi-34B-v3 | ---
pretty_name: Evaluation run of mncai/yi-34B-v3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mncai/yi-34B-v3](https://huggingface.co/mncai/yi-34B-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mncai__yi-34B-v3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-11T01:51:08.694143](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__yi-34B-v3/blob/main/results_2023-12-11T01-51-08.694143.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7536948044621744,\n\
\ \"acc_stderr\": 0.028378789321173548,\n \"acc_norm\": 0.7581198984934292,\n\
\ \"acc_norm_stderr\": 0.02891498378900509,\n \"mc1\": 0.4186046511627907,\n\
\ \"mc1_stderr\": 0.017270015284476855,\n \"mc2\": 0.5753679426280454,\n\
\ \"mc2_stderr\": 0.014962842073717312\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6390784982935154,\n \"acc_stderr\": 0.014034761386175452,\n\
\ \"acc_norm\": 0.6706484641638225,\n \"acc_norm_stderr\": 0.013734057652635476\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6487751443935471,\n\
\ \"acc_stderr\": 0.004763774981834676,\n \"acc_norm\": 0.8511252738498307,\n\
\ \"acc_norm_stderr\": 0.0035523745313052004\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6962962962962963,\n\
\ \"acc_stderr\": 0.03972552884785137,\n \"acc_norm\": 0.6962962962962963,\n\
\ \"acc_norm_stderr\": 0.03972552884785137\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.881578947368421,\n \"acc_stderr\": 0.026293995855474935,\n\
\ \"acc_norm\": 0.881578947368421,\n \"acc_norm_stderr\": 0.026293995855474935\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.79,\n\
\ \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \
\ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8188679245283019,\n \"acc_stderr\": 0.023702963526757798,\n\
\ \"acc_norm\": 0.8188679245283019,\n \"acc_norm_stderr\": 0.023702963526757798\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9097222222222222,\n\
\ \"acc_stderr\": 0.023964965777906935,\n \"acc_norm\": 0.9097222222222222,\n\
\ \"acc_norm_stderr\": 0.023964965777906935\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n\
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7167630057803468,\n\
\ \"acc_stderr\": 0.034355680560478746,\n \"acc_norm\": 0.7167630057803468,\n\
\ \"acc_norm_stderr\": 0.034355680560478746\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5392156862745098,\n \"acc_stderr\": 0.04959859966384181,\n\
\ \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.04959859966384181\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n\
\ \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7787234042553192,\n \"acc_stderr\": 0.027136349602424056,\n\
\ \"acc_norm\": 0.7787234042553192,\n \"acc_norm_stderr\": 0.027136349602424056\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5701754385964912,\n\
\ \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.5701754385964912,\n\
\ \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.7379310344827587,\n \"acc_stderr\": 0.036646663372252565,\n\
\ \"acc_norm\": 0.7379310344827587,\n \"acc_norm_stderr\": 0.036646663372252565\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.6957671957671958,\n \"acc_stderr\": 0.02369541500946309,\n \"\
acc_norm\": 0.6957671957671958,\n \"acc_norm_stderr\": 0.02369541500946309\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5396825396825397,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.5396825396825397,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.57,\n \"acc_stderr\": 0.04975698519562427,\n \
\ \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562427\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9,\n\
\ \"acc_stderr\": 0.017066403719657255,\n \"acc_norm\": 0.9,\n \
\ \"acc_norm_stderr\": 0.017066403719657255\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6699507389162561,\n \"acc_stderr\": 0.033085304262282574,\n\
\ \"acc_norm\": 0.6699507389162561,\n \"acc_norm_stderr\": 0.033085304262282574\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\"\
: 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066584,\n\
\ \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066584\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9343434343434344,\n \"acc_stderr\": 0.01764652667723333,\n \"\
acc_norm\": 0.9343434343434344,\n \"acc_norm_stderr\": 0.01764652667723333\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9740932642487047,\n \"acc_stderr\": 0.01146452335695318,\n\
\ \"acc_norm\": 0.9740932642487047,\n \"acc_norm_stderr\": 0.01146452335695318\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8205128205128205,\n \"acc_stderr\": 0.019457390787681803,\n\
\ \"acc_norm\": 0.8205128205128205,\n \"acc_norm_stderr\": 0.019457390787681803\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.4111111111111111,\n \"acc_stderr\": 0.02999992350870669,\n \
\ \"acc_norm\": 0.4111111111111111,\n \"acc_norm_stderr\": 0.02999992350870669\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.023005459446673964,\n\
\ \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.023005459446673964\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4966887417218543,\n \"acc_stderr\": 0.04082393379449654,\n \"\
acc_norm\": 0.4966887417218543,\n \"acc_norm_stderr\": 0.04082393379449654\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9284403669724771,\n \"acc_stderr\": 0.01105125524781546,\n \"\
acc_norm\": 0.9284403669724771,\n \"acc_norm_stderr\": 0.01105125524781546\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6620370370370371,\n \"acc_stderr\": 0.03225941352631295,\n \"\
acc_norm\": 0.6620370370370371,\n \"acc_norm_stderr\": 0.03225941352631295\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9068627450980392,\n \"acc_stderr\": 0.020397853969426994,\n \"\
acc_norm\": 0.9068627450980392,\n \"acc_norm_stderr\": 0.020397853969426994\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9156118143459916,\n \"acc_stderr\": 0.01809424711647332,\n \
\ \"acc_norm\": 0.9156118143459916,\n \"acc_norm_stderr\": 0.01809424711647332\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8026905829596412,\n\
\ \"acc_stderr\": 0.02670985334496796,\n \"acc_norm\": 0.8026905829596412,\n\
\ \"acc_norm_stderr\": 0.02670985334496796\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8702290076335878,\n \"acc_stderr\": 0.029473649496907065,\n\
\ \"acc_norm\": 0.8702290076335878,\n \"acc_norm_stderr\": 0.029473649496907065\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.9008264462809917,\n \"acc_stderr\": 0.02728524631275896,\n \"\
acc_norm\": 0.9008264462809917,\n \"acc_norm_stderr\": 0.02728524631275896\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8981481481481481,\n\
\ \"acc_stderr\": 0.029239272675632748,\n \"acc_norm\": 0.8981481481481481,\n\
\ \"acc_norm_stderr\": 0.029239272675632748\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8650306748466258,\n \"acc_stderr\": 0.026845765054553855,\n\
\ \"acc_norm\": 0.8650306748466258,\n \"acc_norm_stderr\": 0.026845765054553855\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5714285714285714,\n\
\ \"acc_stderr\": 0.04697113923010213,\n \"acc_norm\": 0.5714285714285714,\n\
\ \"acc_norm_stderr\": 0.04697113923010213\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.03586594738573974,\n\
\ \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.03586594738573974\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n\
\ \"acc_stderr\": 0.015537514263253862,\n \"acc_norm\": 0.9401709401709402,\n\
\ \"acc_norm_stderr\": 0.015537514263253862\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653694,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653694\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9029374201787995,\n\
\ \"acc_stderr\": 0.010586474712018283,\n \"acc_norm\": 0.9029374201787995,\n\
\ \"acc_norm_stderr\": 0.010586474712018283\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8121387283236994,\n \"acc_stderr\": 0.02102926975242323,\n\
\ \"acc_norm\": 0.8121387283236994,\n \"acc_norm_stderr\": 0.02102926975242323\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7039106145251397,\n\
\ \"acc_stderr\": 0.015268677317602274,\n \"acc_norm\": 0.7039106145251397,\n\
\ \"acc_norm_stderr\": 0.015268677317602274\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8398692810457516,\n \"acc_stderr\": 0.020998740930362303,\n\
\ \"acc_norm\": 0.8398692810457516,\n \"acc_norm_stderr\": 0.020998740930362303\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8167202572347267,\n\
\ \"acc_stderr\": 0.021974198848265812,\n \"acc_norm\": 0.8167202572347267,\n\
\ \"acc_norm_stderr\": 0.021974198848265812\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8641975308641975,\n \"acc_stderr\": 0.0190615881815054,\n\
\ \"acc_norm\": 0.8641975308641975,\n \"acc_norm_stderr\": 0.0190615881815054\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6276595744680851,\n \"acc_stderr\": 0.02883892147125145,\n \
\ \"acc_norm\": 0.6276595744680851,\n \"acc_norm_stderr\": 0.02883892147125145\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5971316818774446,\n\
\ \"acc_stderr\": 0.01252695557711801,\n \"acc_norm\": 0.5971316818774446,\n\
\ \"acc_norm_stderr\": 0.01252695557711801\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8345588235294118,\n \"acc_stderr\": 0.02257177102549474,\n\
\ \"acc_norm\": 0.8345588235294118,\n \"acc_norm_stderr\": 0.02257177102549474\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8169934640522876,\n \"acc_stderr\": 0.015643069911273344,\n \
\ \"acc_norm\": 0.8169934640522876,\n \"acc_norm_stderr\": 0.015643069911273344\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n\
\ \"acc_stderr\": 0.04309118709946458,\n \"acc_norm\": 0.7181818181818181,\n\
\ \"acc_norm_stderr\": 0.04309118709946458\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8489795918367347,\n \"acc_stderr\": 0.022923004094736844,\n\
\ \"acc_norm\": 0.8489795918367347,\n \"acc_norm_stderr\": 0.022923004094736844\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8855721393034826,\n\
\ \"acc_stderr\": 0.022509345325101706,\n \"acc_norm\": 0.8855721393034826,\n\
\ \"acc_norm_stderr\": 0.022509345325101706\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.92,\n \"acc_stderr\": 0.0272659924344291,\n \
\ \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.0272659924344291\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5903614457831325,\n\
\ \"acc_stderr\": 0.038284011150790206,\n \"acc_norm\": 0.5903614457831325,\n\
\ \"acc_norm_stderr\": 0.038284011150790206\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8830409356725146,\n \"acc_stderr\": 0.024648068961366152,\n\
\ \"acc_norm\": 0.8830409356725146,\n \"acc_norm_stderr\": 0.024648068961366152\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4186046511627907,\n\
\ \"mc1_stderr\": 0.017270015284476855,\n \"mc2\": 0.5753679426280454,\n\
\ \"mc2_stderr\": 0.014962842073717312\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.835043409629045,\n \"acc_stderr\": 0.010430917468237419\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.645185746777862,\n \
\ \"acc_stderr\": 0.013179083387979214\n }\n}\n```"
repo_url: https://huggingface.co/mncai/yi-34B-v3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|arc:challenge|25_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|gsm8k|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hellaswag|10_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-11T01-51-08.694143.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-11T01-51-08.694143.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- '**/details_harness|winogrande|5_2023-12-11T01-51-08.694143.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-11T01-51-08.694143.parquet'
- config_name: results
data_files:
- split: 2023_12_11T01_51_08.694143
path:
- results_2023-12-11T01-51-08.694143.parquet
- split: latest
path:
- results_2023-12-11T01-51-08.694143.parquet
---
# Dataset Card for Evaluation run of mncai/yi-34B-v3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mncai/yi-34B-v3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [mncai/yi-34B-v3](https://huggingface.co/mncai/yi-34B-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mncai__yi-34B-v3",
	"harness_winogrande_5",
	split="latest")
```
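Since each run is stored as a timestamped split alongside a `latest` alias, the most recent run can also be picked programmatically: the timestamp format sorts lexicographically in chronological order. A self-contained sketch (the earlier split name below is illustrative):

```python
# Split names follow the pattern YYYY_MM_DDTHH_MM_SS.ffffff, plus a "latest" alias.
splits = [
    "2023_12_10T22_04_01.123456",  # illustrative earlier run
    "2023_12_11T01_51_08.694143",
    "latest",
]

# The timestamp format sorts lexicographically in chronological order,
# so the newest run is simply the maximum of the non-alias names.
timestamped = [s for s in splits if s != "latest"]
newest = max(timestamped)
print(newest)  # 2023_12_11T01_51_08.694143
```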
## Latest results
These are the [latest results from run 2023-12-11T01:51:08.694143](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__yi-34B-v3/blob/main/results_2023-12-11T01-51-08.694143.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7536948044621744,
"acc_stderr": 0.028378789321173548,
"acc_norm": 0.7581198984934292,
"acc_norm_stderr": 0.02891498378900509,
"mc1": 0.4186046511627907,
"mc1_stderr": 0.017270015284476855,
"mc2": 0.5753679426280454,
"mc2_stderr": 0.014962842073717312
},
"harness|arc:challenge|25": {
"acc": 0.6390784982935154,
"acc_stderr": 0.014034761386175452,
"acc_norm": 0.6706484641638225,
"acc_norm_stderr": 0.013734057652635476
},
"harness|hellaswag|10": {
"acc": 0.6487751443935471,
"acc_stderr": 0.004763774981834676,
"acc_norm": 0.8511252738498307,
"acc_norm_stderr": 0.0035523745313052004
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6962962962962963,
"acc_stderr": 0.03972552884785137,
"acc_norm": 0.6962962962962963,
"acc_norm_stderr": 0.03972552884785137
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.881578947368421,
"acc_stderr": 0.026293995855474935,
"acc_norm": 0.881578947368421,
"acc_norm_stderr": 0.026293995855474935
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8188679245283019,
"acc_stderr": 0.023702963526757798,
"acc_norm": 0.8188679245283019,
"acc_norm_stderr": 0.023702963526757798
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9097222222222222,
"acc_stderr": 0.023964965777906935,
"acc_norm": 0.9097222222222222,
"acc_norm_stderr": 0.023964965777906935
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.034355680560478746,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.034355680560478746
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5392156862745098,
"acc_stderr": 0.04959859966384181,
"acc_norm": 0.5392156862745098,
"acc_norm_stderr": 0.04959859966384181
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7787234042553192,
"acc_stderr": 0.027136349602424056,
"acc_norm": 0.7787234042553192,
"acc_norm_stderr": 0.027136349602424056
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5701754385964912,
"acc_stderr": 0.04657047260594963,
"acc_norm": 0.5701754385964912,
"acc_norm_stderr": 0.04657047260594963
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7379310344827587,
"acc_stderr": 0.036646663372252565,
"acc_norm": 0.7379310344827587,
"acc_norm_stderr": 0.036646663372252565
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6957671957671958,
"acc_stderr": 0.02369541500946309,
"acc_norm": 0.6957671957671958,
"acc_norm_stderr": 0.02369541500946309
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5396825396825397,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.5396825396825397,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562427,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562427
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9,
"acc_stderr": 0.017066403719657255,
"acc_norm": 0.9,
"acc_norm_stderr": 0.017066403719657255
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6699507389162561,
"acc_stderr": 0.033085304262282574,
"acc_norm": 0.6699507389162561,
"acc_norm_stderr": 0.033085304262282574
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9343434343434344,
"acc_stderr": 0.01764652667723333,
"acc_norm": 0.9343434343434344,
"acc_norm_stderr": 0.01764652667723333
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9740932642487047,
"acc_stderr": 0.01146452335695318,
"acc_norm": 0.9740932642487047,
"acc_norm_stderr": 0.01146452335695318
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8205128205128205,
"acc_stderr": 0.019457390787681803,
"acc_norm": 0.8205128205128205,
"acc_norm_stderr": 0.019457390787681803
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.4111111111111111,
"acc_stderr": 0.02999992350870669,
"acc_norm": 0.4111111111111111,
"acc_norm_stderr": 0.02999992350870669
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.023005459446673964,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.023005459446673964
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4966887417218543,
"acc_stderr": 0.04082393379449654,
"acc_norm": 0.4966887417218543,
"acc_norm_stderr": 0.04082393379449654
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9284403669724771,
"acc_stderr": 0.01105125524781546,
"acc_norm": 0.9284403669724771,
"acc_norm_stderr": 0.01105125524781546
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6620370370370371,
"acc_stderr": 0.03225941352631295,
"acc_norm": 0.6620370370370371,
"acc_norm_stderr": 0.03225941352631295
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9068627450980392,
"acc_stderr": 0.020397853969426994,
"acc_norm": 0.9068627450980392,
"acc_norm_stderr": 0.020397853969426994
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9156118143459916,
"acc_stderr": 0.01809424711647332,
"acc_norm": 0.9156118143459916,
"acc_norm_stderr": 0.01809424711647332
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8026905829596412,
"acc_stderr": 0.02670985334496796,
"acc_norm": 0.8026905829596412,
"acc_norm_stderr": 0.02670985334496796
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8702290076335878,
"acc_stderr": 0.029473649496907065,
"acc_norm": 0.8702290076335878,
"acc_norm_stderr": 0.029473649496907065
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.9008264462809917,
"acc_stderr": 0.02728524631275896,
"acc_norm": 0.9008264462809917,
"acc_norm_stderr": 0.02728524631275896
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8981481481481481,
"acc_stderr": 0.029239272675632748,
"acc_norm": 0.8981481481481481,
"acc_norm_stderr": 0.029239272675632748
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8650306748466258,
"acc_stderr": 0.026845765054553855,
"acc_norm": 0.8650306748466258,
"acc_norm_stderr": 0.026845765054553855
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04697113923010213,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04697113923010213
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.03586594738573974,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.03586594738573974
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.015537514263253862,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.015537514263253862
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9029374201787995,
"acc_stderr": 0.010586474712018283,
"acc_norm": 0.9029374201787995,
"acc_norm_stderr": 0.010586474712018283
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8121387283236994,
"acc_stderr": 0.02102926975242323,
"acc_norm": 0.8121387283236994,
"acc_norm_stderr": 0.02102926975242323
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7039106145251397,
"acc_stderr": 0.015268677317602274,
"acc_norm": 0.7039106145251397,
"acc_norm_stderr": 0.015268677317602274
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8398692810457516,
"acc_stderr": 0.020998740930362303,
"acc_norm": 0.8398692810457516,
"acc_norm_stderr": 0.020998740930362303
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8167202572347267,
"acc_stderr": 0.021974198848265812,
"acc_norm": 0.8167202572347267,
"acc_norm_stderr": 0.021974198848265812
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8641975308641975,
"acc_stderr": 0.0190615881815054,
"acc_norm": 0.8641975308641975,
"acc_norm_stderr": 0.0190615881815054
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6276595744680851,
"acc_stderr": 0.02883892147125145,
"acc_norm": 0.6276595744680851,
"acc_norm_stderr": 0.02883892147125145
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5971316818774446,
"acc_stderr": 0.01252695557711801,
"acc_norm": 0.5971316818774446,
"acc_norm_stderr": 0.01252695557711801
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8345588235294118,
"acc_stderr": 0.02257177102549474,
"acc_norm": 0.8345588235294118,
"acc_norm_stderr": 0.02257177102549474
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8169934640522876,
"acc_stderr": 0.015643069911273344,
"acc_norm": 0.8169934640522876,
"acc_norm_stderr": 0.015643069911273344
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7181818181818181,
"acc_stderr": 0.04309118709946458,
"acc_norm": 0.7181818181818181,
"acc_norm_stderr": 0.04309118709946458
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8489795918367347,
"acc_stderr": 0.022923004094736844,
"acc_norm": 0.8489795918367347,
"acc_norm_stderr": 0.022923004094736844
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8855721393034826,
"acc_stderr": 0.022509345325101706,
"acc_norm": 0.8855721393034826,
"acc_norm_stderr": 0.022509345325101706
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.92,
"acc_stderr": 0.0272659924344291,
"acc_norm": 0.92,
"acc_norm_stderr": 0.0272659924344291
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5903614457831325,
"acc_stderr": 0.038284011150790206,
"acc_norm": 0.5903614457831325,
"acc_norm_stderr": 0.038284011150790206
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8830409356725146,
"acc_stderr": 0.024648068961366152,
"acc_norm": 0.8830409356725146,
"acc_norm_stderr": 0.024648068961366152
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4186046511627907,
"mc1_stderr": 0.017270015284476855,
"mc2": 0.5753679426280454,
"mc2_stderr": 0.014962842073717312
},
"harness|winogrande|5": {
"acc": 0.835043409629045,
"acc_stderr": 0.010430917468237419
},
"harness|gsm8k|5": {
"acc": 0.645185746777862,
"acc_stderr": 0.013179083387979214
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
tglcourse/latent_afhqv2_256px | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
0: cat
1: dog
2: wild
- name: latent
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 267449972
num_examples: 15803
download_size: 260672854
dataset_size: 267449972
---
# Dataset Card for "latent_afhqv2_256px"
Each image is cropped to a 256px square and encoded to a 4x32x32 latent representation using the same VAE as the one employed by Stable Diffusion.
## Decoding
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` |
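The `(1 / 0.18215)` factor above undoes the scaling conventionally applied to Stable Diffusion latents at encode time (the stored latents were multiplied by 0.18215). A quick self-contained check of that round trip on dummy data, no VAE download required:

```python
import numpy as np

SCALE = 0.18215  # Stable Diffusion latent scaling factor

# A dummy latent with the dataset's shape (illustrative values, no VAE involved)
raw_latent = np.random.randn(1, 4, 32, 32).astype(np.float32)

# Encode side: the dataset stores the scaled latent
stored = SCALE * raw_latent

# Decode side: the snippet above multiplies by (1 / 0.18215) to undo the scaling
recovered = (1 / SCALE) * stored

assert np.allclose(recovered, raw_latent, atol=1e-5)
```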
CyberHarem/hinatsu_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hinatsu/ヒナツ (Pokémon)
This is the dataset of hinatsu/ヒナツ (Pokémon), containing 500 images and their tags.
The core tags of this character are `short_hair, red_hair, bangs, red_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 629.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinatsu_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 325.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinatsu_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1251 | 695.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinatsu_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 541.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinatsu_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1251 | 1018.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hinatsu_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hinatsu_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
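The `800`, `1200`, and `stage3-*` packages listed above use a plain IMG+TXT layout (one `.txt` tag file next to each image), so they can also be browsed without waifuc. Here is a minimal sketch under that assumption; `pair_images_with_tags` is our own helper, not part of any package:

```python
import os

def pair_images_with_tags(dataset_dir):
    """Pair each image with its sibling .txt tag file (IMG+TXT layout)."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if os.path.isfile(txt_path):
            with open(txt_path, encoding='utf-8') as f:
                # tag files hold a comma-separated tag list
                tags = [t.strip() for t in f.read().split(',') if t.strip()]
            pairs.append((name, tags))
    return pairs
```

Point it at the directory extracted from one of the IMG+TXT zips to get `(filename, tags)` pairs.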
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, brown_bag, cowlick, hood, long_sleeves, solo, collarbone, gradient_legwear, looking_at_viewer, smile, blush, legwear_under_shorts, pantyhose_under_shorts, red_pantyhose, closed_mouth, hand_up, sitting, grey_jacket, simple_background |
| 1 | 5 |  |  |  |  |  | 1girl, blue_hoodie, blush, cowlick, gradient_legwear, legwear_under_shorts, long_sleeves, looking_at_viewer, pantyhose_under_shorts, smile, brown_bag, gradient_clothes, jacket, solo, ass, brown_hair, open_mouth, two-tone_legwear, closed_mouth, crossed_legs, red_pantyhose, sitting |
| 2 | 8 |  |  |  |  |  | 1girl, cowlick, gradient_legwear, long_sleeves, simple_background, white_background, black_footwear, boots, closed_mouth, gradient_clothes, smile, solo, brown_bag, full_body, hood, legwear_under_shorts, looking_at_viewer, sitting, hand_up, pantyhose_under_shorts, grey_jacket, red_pantyhose, black_shorts, blush, bracelet |
| 3 | 5 |  |  |  |  |  | 1girl, ass, blue_hoodie, cowlick, from_behind, long_sleeves, looking_at_viewer, looking_back, solo, blush, gradient_legwear, simple_background, brown_pantyhose, closed_mouth, hood_down, jacket, open_mouth, white_background |
| 4 | 7 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, red_pantyhose, smile, solo, blue_hoodie, from_behind, looking_back, cowlick, gradient_legwear, jacket, shiny_clothes, simple_background, thighs, ass_focus, blush, closed_mouth, white_background |
| 5 | 9 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nipples, penis, pussy, sex, spread_legs, vaginal, large_breasts, navel, missionary, on_back, collarbone, completely_nude, solo_focus, open_mouth, pov, looking_at_viewer, mosaic_censoring, sweat, trembling, uncensored |
| 6 | 11 |  |  |  |  |  | 1boy, 1girl, ass, hetero, blush, cowlick, pantyhose, uncensored, vaginal, anus, torn_clothes, gradient_legwear, open_mouth, cum_in_pussy, overflow, clothed_female_nude_male, jacket, long_sleeves, heart, looking_at_viewer, looking_back, sex_from_behind, solo_focus, testicles, veiny_penis |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | brown_bag | cowlick | hood | long_sleeves | solo | collarbone | gradient_legwear | looking_at_viewer | smile | blush | legwear_under_shorts | pantyhose_under_shorts | red_pantyhose | closed_mouth | hand_up | sitting | grey_jacket | simple_background | blue_hoodie | gradient_clothes | jacket | ass | brown_hair | open_mouth | two-tone_legwear | crossed_legs | white_background | black_footwear | boots | full_body | black_shorts | bracelet | from_behind | looking_back | brown_pantyhose | hood_down | shiny_clothes | thighs | ass_focus | 1boy | hetero | nipples | penis | pussy | sex | spread_legs | vaginal | large_breasts | navel | missionary | on_back | completely_nude | solo_focus | pov | mosaic_censoring | sweat | trembling | uncensored | pantyhose | anus | torn_clothes | cum_in_pussy | overflow | clothed_female_nude_male | heart | sex_from_behind | testicles | veiny_penis |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:----------|:-------|:---------------|:-------|:-------------|:-------------------|:--------------------|:--------|:--------|:-----------------------|:-------------------------|:----------------|:---------------|:----------|:----------|:--------------|:--------------------|:--------------|:-------------------|:---------|:------|:-------------|:-------------|:-------------------|:---------------|:-------------------|:-----------------|:--------|:------------|:---------------|:-----------|:--------------|:---------------|:------------------|:------------|:----------------|:---------|:------------|:-------|:---------|:----------|:--------|:--------|:------|:--------------|:----------|:----------------|:--------|:-------------|:----------|:------------------|:-------------|:------|:-------------------|:--------|:------------|:-------------|:------------|:-------|:---------------|:---------------|:-----------|:---------------------------|:--------|:------------------|:------------|:--------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | | X | X | | X | X | X | X | X | X | X | X | | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | | X | X | | X | X | | X | | | | X | | | | X | X | | X | X | | X | | | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | X | | X | X | | X | X | X | X | | | X | X | | | | X | X | | X | | | | | | X | | | | | | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | | | | | X | | X | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 6 | 11 |  |  |  |  |  | X | | X | | X | | | X | X | | X | | | | | | | | | | | X | X | | X | | | | | | | | | | X | | | | | | X | X | | | | | | X | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
zjunlp/KGEditor | ---
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for KGEditor
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [https://arxiv.org/abs/2301.10405](https://arxiv.org/abs/2301.10405)
- **Leaderboard:** [https://zjunlp.github.io/project/KGE_Editing/](https://zjunlp.github.io/project/KGE_Editing/)
- **Point of Contact:**
### Supported Tasks and Leaderboards
The purpose of the KGE Edit task is to modify erroneous knowledge in the KGE model and to inject new knowledge into it. In response to these objectives, we design two sub-tasks (EDIT & ADD). For the EDIT sub-task, we edit wrong factual knowledge stored in the KG embeddings; for the ADD sub-task, we add brand-new knowledge to the model without re-training the whole model.
### Dataset Summary
We build four datasets for the EDIT and ADD sub-tasks based on two benchmark datasets, FB15k-237 and WN18RR. First, we train KG embedding models with language models. For the EDIT sub-task, we sample hard triples as candidates. For the ADD sub-task, we leverage the original training sets of FB15k-237 and WN18RR to build the pre-train dataset (original pre-train data) and use the data from the standard inductive setting, since those triples are unseen during pre-training.
## Dataset Structure
### Data Instances
An example of E-FB15k237:
(Note that we have converted the IDs to text for easier understanding.)
```
{
"ori": ["Jennifer Connelly", "type of union", "Marriage"],
"cor": ["Stephen Sondheim", "type of union", "Marriage"],
"process": ["[MASK]", "type of union", "Marriage"],
"label": "Jennifer Connelly"
}
```
An example of A-FB15k237:
```
{
"triples": ["Darryl F. Zanuck", "place of death", "Palm Springs"],
"label": "Palm Springs",
"head": 0
}
```
### Data Fields
The data fields are the same among all splits.
For EDIT sub-task:
- ori: the fact in the pre-train dataset.
- cor: corrupted triple.
- process: the triple after replacing the wrong entity with the [MASK] token.
- label: a classification label, the scope is the entire set of entities.
For ADD sub-task:
- triples: the knowledge that needs to be injected into the model.
- label: a classification label, the scope is the entire set of entities.
- head: the head or tail entity that does not appear in pre-train.
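As a sanity check on the EDIT fields, the `process` triple and `label` can be reconstructed from an `ori`/`cor` pair by masking the corrupted position. The helper below is our own illustration, not the official preprocessing code:

```python
def build_edit_example(ori, cor):
    """Build the `process` triple and `label` from an original/corrupted pair.

    The corrupted triple differs from the original in exactly one entity
    (head or tail); that position is replaced with the [MASK] token, and
    the correct entity becomes the classification label.
    """
    diff = [i for i in range(3) if ori[i] != cor[i]]
    assert len(diff) == 1, "expected exactly one corrupted position"
    pos = diff[0]
    process = list(cor)
    process[pos] = "[MASK]"
    return {"ori": list(ori), "cor": list(cor),
            "process": process, "label": ori[pos]}

example = build_edit_example(
    ["Jennifer Connelly", "type of union", "Marriage"],
    ["Stephen Sondheim", "type of union", "Marriage"],
)
# example["process"] -> ["[MASK]", "type of union", "Marriage"]
# example["label"]   -> "Jennifer Connelly"
```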
### Data Splits
<table>
<tr>
<th></th>
<th>Pre-trained</th>
<th>Train</th>
<th>Test</th>
<th>L-Test</th>
</tr>
<tr>
<th>E-FB15k237</th>
<td>310,117</td>
<td>3,087</td>
<td>3,087</td>
<td>7,051</td>
</tr>
<tr>
<th>A-FB15k237</th>
<td>215,082</td>
<td>2,000</td>
<td>-</td>
<td>16,872</td>
</tr>
<tr>
<th>E-WN18RR</th>
<td>93,003</td>
<td>1,491</td>
<td>1,401</td>
<td>5,003</td>
</tr>
<tr>
<th>A-WN18RR</th>
<td>69,721</td>
<td>2,000</td>
<td>-</td>
<td>10,000</td>
</tr>
</table>
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
For the EDIT sub-task, our data (E-FB15k237 and E-WN18RR) is based on [FB15k-237](https://paperswithcode.com/dataset/fb15k-237) and [WN18RR](https://paperswithcode.com/dataset/wn18rr).
For the ADD sub-task, our data (A-FB15k237 and A-WN18RR) remains the same as the inductive settings in [this paper](https://arxiv.org/abs/2010.03496).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{DBLP:journals/corr/abs-2301-10405,
author = {Siyuan Cheng and
Ningyu Zhang and
Bozhong Tian and
Zelin Dai and
Feiyu Xiong and
Wei Guo and
Huajun Chen},
title = {Editing Language Model-based Knowledge Graph Embeddings},
journal = {CoRR},
volume = {abs/2301.10405},
year = {2023},
url = {https://doi.org/10.48550/arXiv.2301.10405},
doi = {10.48550/arXiv.2301.10405},
eprinttype = {arXiv},
eprint = {2301.10405},
timestamp = {Thu, 26 Jan 2023 17:49:16 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2301-10405.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
[More Information Needed] |
distilled-from-one-sec-cv12/chunk_193 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 991112368
num_examples: 193124
download_size: 1010200245
dataset_size: 991112368
---
# Dataset Card for "chunk_193"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/oasst_top1_standardized | ---
dataset_info:
features:
- name: message_type
dtype: string
- name: message
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 23156321
num_examples: 37288
download_size: 13298492
dataset_size: 23156321
---
# Dataset Card for "oasst_top1_standardized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/black_heart_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of black_heart/ブラックハート/圣黑之心 (Azur Lane)
This is the dataset of black_heart/ブラックハート/圣黑之心 (Azur Lane), containing 500 images and their tags.
The core tags of this character are `long_hair, white_hair, breasts, symbol-shaped_pupils, blue_eyes, very_long_hair, medium_breasts, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 577.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/black_heart_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 345.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/black_heart_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1180 | 717.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/black_heart_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 518.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/black_heart_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1180 | 979.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/black_heart_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/black_heart_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, solo, bare_shoulders, elbow_gloves, leotard, thighhighs, smile, blush, twintails, open_mouth, aqua_eyes |
| 1 | 8 |  |  |  |  |  | 1girl, bare_shoulders, elbow_gloves, leotard, looking_at_viewer, solo, cleavage_cutout, smile, blush, thighhighs, power_symbol, simple_background, white_background |
| 2 | 10 |  |  |  |  |  | aqua_eyes, bangs, bare_shoulders, black_gloves, elbow_gloves, halterneck, leotard, looking_at_viewer, magical_girl, power_symbol, turtleneck, 1girl, cleavage_cutout, closed_mouth, flipped_hair, light_smile, simple_background, solo, standing, white_background, from_side, twintails, cowboy_shot, grey_thighhighs, blush, full_body, grey_footwear, high_heel_boots, thigh_boots |
| 3 | 5 |  |  |  |  |  | 1girl, >:), aqua_eyes, bangs, bare_shoulders, black_gloves, blush, cleavage_cutout, closed_mouth, crossed_arms, elbow_gloves, floating_hair, halterneck, legs_apart, light_particles, light_smile, looking_at_viewer, magical_girl, power_symbol, solo, standing, turtleneck, twintails, glowing, black_thighhighs, cityscape, headgear, skyscraper, backlighting, blue_leotard, simple_background, white_background, wings |
| 4 | 27 |  |  |  |  |  | 1girl, angel_wings, hair_flower, looking_at_viewer, cleavage, feathered_wings, smile, solo, halo, bare_shoulders, navel, power_symbol, blush, elbow_gloves, thighhighs |
| 5 | 7 |  |  |  |  |  | 1girl, bikini, blush, looking_at_viewer, solo, cleavage, navel, smile, open_mouth |
| 6 | 5 |  |  |  |  |  | 2girls, blush, cleavage, solo_focus, bare_shoulders, looking_at_viewer, elbow_gloves, yuri, artist_name, large_breasts, leotard, open_mouth, smile |
| 7 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, nipples, solo, large_breasts, navel, power_symbol, blush, completely_nude, open_mouth, sitting, smile, white_background |
| 8 | 11 |  |  |  |  |  | 1girl, fake_animal_ears, looking_at_viewer, playboy_bunny, rabbit_ears, blush, cleavage, bare_shoulders, detached_collar, pantyhose, solo, wrist_cuffs, black_leotard, bowtie, power_symbol, covered_navel, smile, fishnets, large_breasts, white_bow |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | looking_at_viewer | solo | bare_shoulders | elbow_gloves | leotard | thighhighs | smile | blush | twintails | open_mouth | aqua_eyes | cleavage_cutout | power_symbol | simple_background | white_background | bangs | black_gloves | halterneck | magical_girl | turtleneck | closed_mouth | flipped_hair | light_smile | standing | from_side | cowboy_shot | grey_thighhighs | full_body | grey_footwear | high_heel_boots | thigh_boots | >:) | crossed_arms | floating_hair | legs_apart | light_particles | glowing | black_thighhighs | cityscape | headgear | skyscraper | backlighting | blue_leotard | wings | angel_wings | hair_flower | feathered_wings | halo | navel | bikini | 2girls | solo_focus | yuri | artist_name | large_breasts | nipples | completely_nude | sitting | fake_animal_ears | playboy_bunny | rabbit_ears | detached_collar | pantyhose | wrist_cuffs | black_leotard | bowtie | covered_navel | fishnets | white_bow |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------------------|:-------|:-----------------|:---------------|:----------|:-------------|:--------|:--------|:------------|:-------------|:------------|:------------------|:---------------|:--------------------|:-------------------|:--------|:---------------|:-------------|:---------------|:-------------|:---------------|:---------------|:--------------|:-----------|:------------|:--------------|:------------------|:------------|:----------------|:------------------|:--------------|:------|:---------------|:----------------|:-------------|:------------------|:----------|:-------------------|:------------|:-----------|:-------------|:---------------|:---------------|:--------|:--------------|:--------------|:------------------|:-------|:--------|:---------|:---------|:-------------|:-------|:--------------|:----------------|:----------|:------------------|:----------|:-------------------|:----------------|:--------------|:------------------|:------------|:--------------|:----------------|:---------|:----------------|:-----------|:------------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | | X | X | X | X | X | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | X | X | X | | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 27 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | X | X | X | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | | X | X | | X | X | X | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | X | X | | | | | X | X | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | | | | | | | | | | | |
| 8 | 11 |  |  |  |  |  | X | X | X | X | X | | | | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X |
|
Kamyar-zeinalipour/Turkish_CW_V3 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 56177167
num_examples: 182395
- name: test
num_bytes: 1537576
num_examples: 5000
download_size: 9563463
dataset_size: 57714743
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
AlekseyKorshuk/crowdsourced-rlhf | ---
license: openrail
---
|
M2UGen/MUVideo | ---
license: cc-by-nc-nd-4.0
arxiv: 2311.11255
extra_gated_prompt: >-
Please fill in the following fields, the full name/institution/group/contact
email/use case are MUST fields, and gender/github/personal homepage are
OPTIONAL fields (You can simply use a '-' symbol to fill in these optional
fields). An application form without required information will be declined.
extra_gated_fields:
Full Name: text
Gender: text
Institution: text
Group: text
Contact Email: text
Github: text
Personal Homepage: text
Use Case: text
I agree to use this dataset for non-commercial use ONLY: checkbox
tags:
- music
---
# MUVideo Dataset
This is the MUVideo dataset used to facilitate video-to-music generation, consisting of **13,203 music files** with a total playtime of **36.72 hours**, generated using the [MU-LLaMA](https://github.com/crypto-code/MU-LLaMA) and [VideoMAE captioning](https://huggingface.co/Neleac/timesformer-gpt2-video-captioning) models.
This dataset is used to train the [M<sup>2</sup>UGen](https://github.com/crypto-code/M2UGen) model.
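The instruction file described below can be parsed into (video, music, prompt, response) training tuples in a few lines. The helper below is our own sketch, not part of the official codebase:

```python
import json

def load_muvideo_pairs(json_path):
    """Yield (input_file, output_file, human_prompt, gpt_response) tuples
    from MUVideoInstructions.json."""
    with open(json_path, encoding="utf-8") as f:
        entries = json.load(f)
    for entry in entries:
        # each conversation holds one "human" turn and one "gpt" turn
        human = next(t["value"] for t in entry["conversation"] if t["from"] == "human")
        gpt = next(t["value"] for t in entry["conversation"] if t["from"] == "gpt")
        yield entry["input_file"], entry["output_file"], human, gpt
```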
The [MUVideoInstructions.json](./MUVideoInstructions.json) file contains a list in which each element has the following format:
```
{
"input_file": "1OhKgYcAujk.mp4",
"output_file": "1OhKgYcAujk.mp3",
"conversation": [
{
"from": "human",
"value": "Generate a music for the video that is upbeat and energetic to match the guitar playing in the living room.",
"input_modality": "video",
"caption": "A man is playing a song on a guitar while sitting in a living room with a couch."
},
{
"from": "gpt",
"value": "Here is a music that is a solo upright bass playing a blues melody.",
"caption": "The music is a solo upright bass playing a blues melody.",
"output_modality": "audio"
}
]
}
``` |
macabdul9/fleurs-hubert-discrete-tokens | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int32
- name: num_samples
dtype: int32
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: raw_transcription
dtype: string
- name: gender
dtype:
class_label:
names:
'0': male
'1': female
'2': other
- name: lang_id
dtype:
class_label:
names:
'0': af_za
'1': am_et
'2': ar_eg
'3': as_in
'4': ast_es
'5': az_az
'6': be_by
'7': bg_bg
'8': bn_in
'9': bs_ba
'10': ca_es
'11': ceb_ph
'12': ckb_iq
'13': cmn_hans_cn
'14': cs_cz
'15': cy_gb
'16': da_dk
'17': de_de
'18': el_gr
'19': en_us
'20': es_419
'21': et_ee
'22': fa_ir
'23': ff_sn
'24': fi_fi
'25': fil_ph
'26': fr_fr
'27': ga_ie
'28': gl_es
'29': gu_in
'30': ha_ng
'31': he_il
'32': hi_in
'33': hr_hr
'34': hu_hu
'35': hy_am
'36': id_id
'37': ig_ng
'38': is_is
'39': it_it
'40': ja_jp
'41': jv_id
'42': ka_ge
'43': kam_ke
'44': kea_cv
'45': kk_kz
'46': km_kh
'47': kn_in
'48': ko_kr
'49': ky_kg
'50': lb_lu
'51': lg_ug
'52': ln_cd
'53': lo_la
'54': lt_lt
'55': luo_ke
'56': lv_lv
'57': mi_nz
'58': mk_mk
'59': ml_in
'60': mn_mn
'61': mr_in
'62': ms_my
'63': mt_mt
'64': my_mm
'65': nb_no
'66': ne_np
'67': nl_nl
'68': nso_za
'69': ny_mw
'70': oc_fr
'71': om_et
'72': or_in
'73': pa_in
'74': pl_pl
'75': ps_af
'76': pt_br
'77': ro_ro
'78': ru_ru
'79': sd_in
'80': sk_sk
'81': sl_si
'82': sn_zw
'83': so_so
'84': sr_rs
'85': sv_se
'86': sw_ke
'87': ta_in
'88': te_in
'89': tg_tj
'90': th_th
'91': tr_tr
'92': uk_ua
'93': umb_ao
'94': ur_pk
'95': uz_uz
'96': vi_vn
'97': wo_sn
'98': xh_za
'99': yo_ng
'100': yue_hant_hk
'101': zu_za
'102': all
- name: language
dtype: string
- name: lang_group_id
dtype:
class_label:
names:
'0': western_european_we
'1': eastern_european_ee
'2': central_asia_middle_north_african_cmn
'3': sub_saharan_african_ssa
'4': south_asian_sa
'5': south_east_asian_sea
'6': chinese_japanase_korean_cjk
- name: hubert_discrete_tokens
sequence: int64
splits:
- name: train
num_bytes: 1737943974.832
num_examples: 2602
- name: validation
num_bytes: 242670188.0
num_examples: 394
- name: test
num_bytes: 411706107.0
num_examples: 647
download_size: 2362815325
dataset_size: 2392320269.832
---
# Dataset Card for "fleurs-hubert-discrete-tokens"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/emma_verde_loveliveschoolidolfestivalallstars | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of emma_verde/エマ/엠마베르데 (Love Live! School Idol Festival ALL STARS)
This is the dataset of emma_verde/エマ/엠마베르데 (Love Live! School Idol Festival ALL STARS), containing 500 images and their tags.
The core tags of this character are `bangs, freckles, brown_hair, breasts, long_hair, braid, twin_braids, blue_eyes, large_breasts, twintails, red_hair, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 763.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 372.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1277 | 855.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 650.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1277 | 1.29 GiB | [Download](https://huggingface.co/datasets/CyberHarem/emma_verde_loveliveschoolidolfestivalallstars/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/emma_verde_loveliveschoolidolfestivalallstars',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, blue_skirt, looking_at_viewer, solo, white_shirt, long_sleeves, blush, open_mouth, collared_shirt, :d, long_skirt, white_background, ribbon |
| 1 | 15 |  |  |  |  |  | 1girl, looking_at_viewer, nijigasaki_academy_school_uniform, plaid_skirt, short_sleeves, simple_background, solo, summer_uniform, white_background, collared_shirt, smile, white_shirt, blush, neck_ribbon, pleated_skirt, hair_between_eyes, blue_shirt, open_mouth, shirt_tucked_in |
| 2 | 6 |  |  |  |  |  | 1girl, collared_shirt, nijigasaki_academy_school_uniform, plaid_skirt, pleated_skirt, short_sleeves, smile, solo, summer_uniform, white_shirt, green_background, looking_at_viewer, neck_ribbon, blush, hair_between_eyes, low_twintails, shirt_tucked_in, closed_mouth, open_mouth |
| 3 | 31 |  |  |  |  |  | 1girl, nijigasaki_academy_school_uniform, solo, looking_at_viewer, black_jacket, white_shirt, long_sleeves, smile, winter_uniform, blush, neck_ribbon, blazer, collared_shirt, plaid_skirt, white_skirt, open_mouth, pleated_skirt, green_ribbon, white_background, simple_background |
| 4 | 15 |  |  |  |  |  | 1girl, looking_at_viewer, smile, solo, dirndl, hair_flower, collarbone, hairband, dress, open_mouth, blush, outdoors, sky |
| 5 | 6 |  |  |  |  |  | 1girl, dated, english_text, hair_flower, happy_birthday, looking_at_viewer, solo, blush, smile, twin_drills, character_name, green_dress, hat, low_twintails, sky, upper_body |
| 6 | 17 |  |  |  |  |  | 1girl, hair_flower, solo, smile, looking_at_viewer, bow, open_mouth, short_sleeves, green_dress, twin_drills, white_dress, blush, hat, low_twintails |
| 7 | 6 |  |  |  |  |  | 1girl, blue_sky, day, open_mouth, smile, solo, cleavage, cloud, looking_at_viewer, ocean, outdoors, blush, collarbone, green_bikini, navel, upper_body, beach, frilled_bikini, hair_between_eyes, hair_flower, jewelry |
| 8 | 6 |  |  |  |  |  | 1girl, blush, open_mouth, solo, white_apron, black_dress, enmaided, frills, looking_at_viewer, maid_apron, maid_headdress, :d, aqua_eyes, simple_background, white_background, low_twintails, puffy_short_sleeves, thighhighs, upper_teeth_only |
| 9 | 7 |  |  |  |  |  | 1girl, demon_horns, heart, looking_at_viewer, solo, earrings, sleeveless_dress, black_dress, black_gloves, blush, short_hair, smile, bare_shoulders, cleavage, frills, purple_dress, sitting, aqua_eyes, birthday, demon_tail, demon_wings, elbow_gloves, fake_horns, green_eyes, hairband, petals, see-through, tattoo |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_skirt | looking_at_viewer | solo | white_shirt | long_sleeves | blush | open_mouth | collared_shirt | :d | long_skirt | white_background | ribbon | nijigasaki_academy_school_uniform | plaid_skirt | short_sleeves | simple_background | summer_uniform | smile | neck_ribbon | pleated_skirt | hair_between_eyes | blue_shirt | shirt_tucked_in | green_background | low_twintails | closed_mouth | black_jacket | winter_uniform | blazer | white_skirt | green_ribbon | dirndl | hair_flower | collarbone | hairband | dress | outdoors | sky | dated | english_text | happy_birthday | twin_drills | character_name | green_dress | hat | upper_body | bow | white_dress | blue_sky | day | cleavage | cloud | ocean | green_bikini | navel | beach | frilled_bikini | jewelry | white_apron | black_dress | enmaided | frills | maid_apron | maid_headdress | aqua_eyes | puffy_short_sleeves | thighhighs | upper_teeth_only | demon_horns | heart | earrings | sleeveless_dress | black_gloves | short_hair | bare_shoulders | purple_dress | sitting | birthday | demon_tail | demon_wings | elbow_gloves | fake_horns | green_eyes | petals | see-through | tattoo |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:--------------------|:-------|:--------------|:---------------|:--------|:-------------|:-----------------|:-----|:-------------|:-------------------|:---------|:------------------------------------|:--------------|:----------------|:--------------------|:-----------------|:--------|:--------------|:----------------|:--------------------|:-------------|:------------------|:-------------------|:----------------|:---------------|:---------------|:-----------------|:---------|:--------------|:---------------|:---------|:--------------|:-------------|:-----------|:--------|:-----------|:------|:--------|:---------------|:-----------------|:--------------|:-----------------|:--------------|:------|:-------------|:------|:--------------|:-----------|:------|:-----------|:--------|:--------|:---------------|:--------|:--------|:-----------------|:----------|:--------------|:--------------|:-----------|:---------|:-------------|:-----------------|:------------|:----------------------|:-------------|:-------------------|:--------------|:--------|:-----------|:-------------------|:---------------|:-------------|:-----------------|:---------------|:----------|:-----------|:-------------|:--------------|:---------------|:-------------|:-------------|:---------|:--------------|:---------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | | X | X | X | | X | X | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | X | X | | X | X | X | | | | | X | X | X | | X | X | X | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 31 |  |  |  |  |  | X | | X | X | X | X | X | X | X | | | X | | X | X | | X | | X | X | X | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 15 |  |  |  |  |  | X | | X | X | | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | X | X | | | X | | | | | | | | | | | | X | | | | | | | X | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 17 |  |  |  |  |  | X | | X | X | | | X | X | | | | | | | | X | | | X | | | | | | | X | | | | | | | | X | | | | | | | | | X | | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | X | X | | | X | X | | | | | | | | | | | X | | | X | | | | | | | | | | | | X | X | | | X | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | | X | X | | | X | X | | X | | X | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 9 | 7 |  |  |  |  |  | X | | X | X | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | X | | X | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
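The clustering method behind the tables above is not documented in this card. As a rough illustration only, one plausible approach is to group images whose tag sets are similar; the sketch below does this greedily with Jaccard similarity on toy tag sets (the threshold and the tag sets are illustrative assumptions, not the actual pipeline):

```python
# Hypothetical sketch: group images by tag-set similarity (Jaccard).
# This is NOT the documented DeepGHS pipeline, just one plausible approach.

def jaccard(a, b):
    """Jaccard similarity between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_tags(tag_sets, threshold=0.5):
    """Greedily assign each image to the first cluster whose representative
    tag set is similar enough; otherwise start a new cluster."""
    clusters = []  # list of (representative tag set, member indices)
    for idx, tags in enumerate(tag_sets):
        for rep, members in clusters:
            if jaccard(rep, tags) >= threshold:
                members.append(idx)
                break
        else:
            clusters.append((set(tags), [idx]))
    return clusters

# Toy tag sets resembling the rows in the table above
images = [
    {'1girl', 'blue_skirt', 'white_shirt', 'solo'},
    {'1girl', 'blue_skirt', 'white_shirt', 'blush'},
    {'1girl', 'dirndl', 'hair_flower', 'smile'},
]
clusters = cluster_by_tags(images, threshold=0.4)
print(len(clusters))  # the two skirt images group together -> 2 clusters
```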
|
Shrenik/CodeLLamaBash | ---
license: mit
---
|
CronosGhost/code-reranking-CodeLangQueries-MachineGeneratedDocs | ---
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: train
num_bytes: 15923659
num_examples: 9900
download_size: 7047696
dataset_size: 15923659
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/ai_chan_honkai3 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ai_chan (Houkai 3rd)
This is the dataset of ai_chan (Houkai 3rd), containing 106 images and their tags.
The core tags of this character are `green_hair, bangs, hair_bun, double_bun, orange_eyes, long_hair, hair_ornament, breasts, twintails`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 106 | 158.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ai_chan_honkai3/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 106 | 81.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ai_chan_honkai3/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 251 | 177.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ai_chan_honkai3/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 106 | 135.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ai_chan_honkai3/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 251 | 258.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ai_chan_honkai3/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ai_chan_honkai3',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results are listed below; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, barcode_tattoo, bare_shoulders, black_dress, black_gloves, cleavage, fingerless_gloves, solo, smile, looking_at_viewer, open_mouth, headband, white_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | barcode_tattoo | bare_shoulders | black_dress | black_gloves | cleavage | fingerless_gloves | solo | smile | looking_at_viewer | open_mouth | headband | white_thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-----------------|:--------------|:---------------|:-----------|:--------------------|:-------|:--------|:--------------------|:-------------|:-----------|:-------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
projecte-aina/catalan_government_crawling | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ca
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Catalan Government Crawling
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- fill-mask
task_ids: []
---
# Dataset Card for Catalan Government Crawling
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5511667
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** langtech@bsc.es
### Dataset Summary
The Catalan Government Crawling Corpus is a 39-million-token web corpus of Catalan. It was obtained by crawling the .gencat domain and its subdomains, belonging to the Catalan Government, during September and October 2020. It consists of 39,117,909 tokens, 1,565,433 sentences and 71,043 documents. Documents are separated by single new lines. It is a subcorpus of the Catalan Textual Corpus.
This work is licensed under a [Creative Commons CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/) license.
### Supported Tasks and Leaderboards
This corpus is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
 'text': 'Títol: Estudi de tres marededéus del bisbat de Solsona\nResponsables del projecte: Pep Paret conservador–restaurador de l\'Àrea de Pintura i Escultura sobre fusta del CRBMC\nL\'objecte d\'aquest estudi és un millor coneixement de l\'estat de conservació del patrimoni moble català, en concret de tres escultures romàniques del bisbat de Solsona.\nEs du a terme un estudi científic de tres marededéus del bisbat de Solsona: la Mare de Déu de Queralt, la Mare de Déu de Coaner i la Mare de Déu de la Quar.\nLes imatges originals són romàniques, però totes elles han patit modificacions estructurals...'
}
```
### Data Fields
- `text` (str): Text.
### Data Splits
The dataset contains a single split: `train`.
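Since documents are separated by single new lines, a raw text dump in this format can be split into documents with a short sketch like the following (the sample text is illustrative, not taken from the corpus):

```python
# Minimal sketch: split a newline-delimited dump into documents and count
# whitespace tokens. The toy text below is illustrative only.

raw_dump = (
    "Primer document del corpus amb unes quantes paraules.\n"
    "Segon document, una mica més curt.\n"
    "Tercer i últim document."
)

documents = [line for line in raw_dump.split("\n") if line.strip()]
n_tokens = sum(len(doc.split()) for doc in documents)

print(len(documents))  # 3
print(n_tokens)        # 18
```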
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
The corpus has been obtained by crawling all the `.gencat.cat` domains during July 2020.
For preprocessing we used [Corpus-Cleaner](https://github.com/TeMU-BSC/corpus-cleaner-acl), a modular Python-based toolkit to clean raw text corpora through generator pipelines.
#### Who are the source language producers?
The data comes from the official Catalan Government websites.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since all data comes from public websites, no anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that since the data comes from public web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
[Creative Commons CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/).
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
eprint={2107.07903},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
phatle157/dsada | ---
license: mit
---
|
jonasantos5240/leon4 | ---
license: openrail
---
|
kunal18/ScienceQA-processed_VALIDATION | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: pixel_values
sequence:
sequence:
sequence: float32
- name: pixel_mask
sequence:
sequence: int64
- name: labels
sequence: float32
splits:
- name: validation
num_bytes: 9840363136
num_examples: 1328
download_size: 319687102
dataset_size: 9840363136
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
sngsfydy/Messidor2_except_0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
splits:
- name: train
num_bytes: 1381059381.0
num_examples: 727
download_size: 1375867454
dataset_size: 1381059381.0
---
# Dataset Card for "Messidor2_except_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vkaradeniz/moneypay_sss | ---
dataset_info:
features:
- name: input
dtype: int64
- name: instruction
dtype: string
- name: output
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24455
num_examples: 74
download_size: 16040
dataset_size: 24455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Bsbell21/generadai-sample | ---
dataset_info:
features:
- name: item
dtype: string
- name: description
dtype: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 3915
num_examples: 5
download_size: 7989
dataset_size: 3915
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "generadai-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fubel/synthehicle | ---
license: cc-by-nc-sa-4.0
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for Synthehicle
Synthehicle is a massive CARLA-based synthetic multi-vehicle multi-camera tracking dataset. It includes ground truth for 2D detection and tracking, 3D detection and tracking, depth estimation, and semantic, instance, and panoptic segmentation.
All details can be found in [our paper](https://openaccess.thecvf.com/content/WACV2023W/RWS/html/Herzog_Synthehicle_Multi-Vehicle_Multi-Camera_Tracking_in_Virtual_Cities_WACVW_2023_paper.html) and [git repository](https://github.com/fubel/synthehicle). |
freshpearYoon/vr_train_free_67 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 6163836244
num_examples: 10000
download_size: 953974656
dataset_size: 6163836244
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lerobot/aloha_sim_insertion_scripted | ---
dataset_info:
features:
- name: observation.state
sequence: float32
- name: action
sequence: float32
- name: episode_id
dtype: int64
- name: frame_id
dtype: int64
- name: timestamp
dtype: float32
- name: next.done
dtype: bool
- name: observation.images.top
sequence:
sequence:
sequence: uint8
- name: index
dtype: int64
- name: episode_data_id_from
dtype: int64
- name: episode_data_id_to
dtype: int64
splits:
- name: train
num_bytes: 18550802500
num_examples: 20000
download_size: 1291836262
dataset_size: 18550802500
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-34156b-59952145380 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: 0x70DA/pegasus-cnn_dailymail
metrics: ['rouge', 'accuracy', 'bleu', 'exact_match', 'f1', 'perplexity', 'recall', 'precision', 'roc_auc']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: 0x70DA/pegasus-cnn_dailymail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sini raj p](https://huggingface.co/sini raj p) for evaluating this model. |
tavink/Vozes | ---
license: openrail
---
|
CyberHarem/franka_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of franka/フランカ/芙兰卡 (Arknights)
This is the dataset of franka/フランカ/芙兰卡 (Arknights), containing 297 images and their tags.
The core tags of this character are `animal_ears, fox_ears, long_hair, brown_hair, fox_girl, animal_ear_fluff, tail, fox_tail, brown_eyes, breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 297 | 470.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/franka_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 297 | 396.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/franka_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 728 | 749.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/franka_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/franka_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results are listed below; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, single_leg_pantyhose, solo, asymmetrical_legwear, elbow_gloves, single_thighhigh, grey_shirt, holding_sword, looking_at_viewer, simple_background, black_gloves, white_background, black_skirt, black_footwear, black_thighhighs, smile, high_heels, collared_shirt, full_body, id_card, standing |
| 1 | 23 |  |  |  |  |  | 1girl, simple_background, upper_body, solo, collared_shirt, looking_at_viewer, smile, white_background, blush, grey_shirt, elbow_gloves, black_gloves, hair_between_eyes, open_mouth, short_sleeves, closed_mouth |
| 2 | 25 |  |  |  |  |  | sleeveless_shirt, bare_shoulders, 1girl, off_shoulder, open_jacket, black_shirt, solo, black_jacket, collared_shirt, looking_at_viewer, black_gloves, long_sleeves, ponytail, smile, black_pantyhose, black_shorts, brown_pantyhose, closed_mouth, grey_necktie, simple_background, thigh_strap, official_alternate_costume, yellow_eyes, cowboy_shot |
| 3 | 15 |  |  |  |  |  | 1girl, long_sleeves, solo, looking_at_viewer, crop_top, midriff, official_alternate_costume, very_long_hair, navel, pants, smile, holding, open_mouth, outdoors, pantyhose, simple_background, stomach, white_shirt, blush, choker, cropped_jacket, sitting, white_background, white_jacket |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | single_leg_pantyhose | solo | asymmetrical_legwear | elbow_gloves | single_thighhigh | grey_shirt | holding_sword | looking_at_viewer | simple_background | black_gloves | white_background | black_skirt | black_footwear | black_thighhighs | smile | high_heels | collared_shirt | full_body | id_card | standing | upper_body | blush | hair_between_eyes | open_mouth | short_sleeves | closed_mouth | sleeveless_shirt | bare_shoulders | off_shoulder | open_jacket | black_shirt | black_jacket | long_sleeves | ponytail | black_pantyhose | black_shorts | brown_pantyhose | grey_necktie | thigh_strap | official_alternate_costume | yellow_eyes | cowboy_shot | crop_top | midriff | very_long_hair | navel | pants | holding | outdoors | pantyhose | stomach | white_shirt | choker | cropped_jacket | sitting | white_jacket |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------------|:-------|:-----------------------|:---------------|:-------------------|:-------------|:----------------|:--------------------|:--------------------|:---------------|:-------------------|:--------------|:-----------------|:-------------------|:--------|:-------------|:-----------------|:------------|:----------|:-----------|:-------------|:--------|:--------------------|:-------------|:----------------|:---------------|:-------------------|:-----------------|:---------------|:--------------|:--------------|:---------------|:---------------|:-----------|:------------------|:---------------|:------------------|:---------------|:--------------|:-----------------------------|:--------------|:--------------|:-----------|:----------|:-----------------|:--------|:--------|:----------|:-----------|:------------|:----------|:--------------|:---------|:-----------------|:----------|:---------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 23 |  |  |  |  |  | X | | X | | X | | X | | X | X | X | X | | | | X | | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 25 |  |  |  |  |  | X | | X | | | | | | X | X | X | | | | | X | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 3 | 15 |  |  |  |  |  | X | | X | | | | | | X | X | | X | | | | X | | | | | | | X | | X | | | | | | | | | X | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Weyaxi/HelpSteer-filtered | ---
license: cc-by-4.0
---
# HelpSteer-filtered
This dataset is a highly filtered version of the [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) dataset.
# ❓ How this dataset was filtered:
1. I calculated the sum of the columns `["helpfulness", "correctness", "coherence", "complexity", "verbosity"]` and created a new column named `sum`.
2. I changed some column names and added an **empty column** to match the Alpaca format.
3. The dataset was then filtered to include only those entries with a sum greater than or equal to 16.
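The filtering steps above can be sketched on toy rows as follows (the field renaming to `instruction`/`input`/`output` is an assumption about how the Alpaca format was matched; the `sum >= 16` threshold follows the description above):

```python
# Sketch of the filtering described above, on toy rows instead of the real
# nvidia/HelpSteer data. The prompt->instruction mapping is an assumption.

score_cols = ["helpfulness", "correctness", "coherence", "complexity", "verbosity"]

rows = [
    {"prompt": "p1", "response": "r1",
     "helpfulness": 4, "correctness": 4, "coherence": 4, "complexity": 2, "verbosity": 2},
    {"prompt": "p2", "response": "r2",
     "helpfulness": 1, "correctness": 2, "coherence": 3, "complexity": 1, "verbosity": 1},
]

filtered = []
for row in rows:
    row["sum"] = sum(row[c] for c in score_cols)   # step 1: new `sum` column
    if row["sum"] >= 16:                           # step 3: keep sum >= 16
        # step 2: Alpaca-style fields with an empty `input` column
        filtered.append({
            "instruction": row["prompt"],
            "input": "",
            "output": row["response"],
            "sum": row["sum"],
        })

print(len(filtered))  # only the first toy row passes (sum = 16)
```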
# 🧐 More Information
You can find more information about the unfiltered dataset here:
- [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) |
nayohan/koquality_raw | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: len
dtype: int64
- name: group
dtype: string
splits:
- name: train
num_bytes: 334831140
num_examples: 375506
download_size: 177046961
dataset_size: 334831140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "koquality_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Corianas/EnglishGrader | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
---
This dataset is inspired by the classifier from *Textbooks Are All You Need*.
GPT-4 was asked to rank samples on a scale of 0-4 using the following prompt:
You are a harsh English teacher, please determine the educational value of the following text for a student whose goal is to learn simple English with a single number from 0-4.
The numbers mean:
0 - No value
1 - low quality English
2 - medium quality English
3 - High quality english
4 - Perfect English
(The word "harsh" was not included in the prompt for all of the samples taken; the ranking should be re-run with it.) |
stulcrad/CNEC1_1_Supertypes_flat | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-A
'2': I-A
'3': B-C
'4': I-C
'5': B-G
'6': I-G
'7': B-I
'8': I-I
'9': B-M
'10': I-M
'11': B-N
'12': I-N
'13': B-O
'14': I-O
'15': B-P
'16': I-P
'17': B-Q
'18': I-Q
'19': B-T
'20': I-T
- name: langs
sequence: string
- name: spans
sequence: string
splits:
- name: train
num_bytes: 3328683
num_examples: 4695
- name: validation
num_bytes: 415693
num_examples: 587
- name: test
num_bytes: 419691
num_examples: 586
download_size: 923898
dataset_size: 4164067
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
language:
- cs
--- |
ziq/depression_advice | ---
license: mit
---
|
dongyoung4091/shp_with_features_20k_flan_t5_large_external_rm1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: post_id
dtype: string
- name: domain
dtype: string
- name: upvote_ratio
dtype: float64
- name: history
dtype: string
- name: c_root_id_A
dtype: string
- name: c_root_id_B
dtype: string
- name: created_at_utc_A
dtype: int64
- name: created_at_utc_B
dtype: int64
- name: score_A
dtype: int64
- name: score_B
dtype: int64
- name: human_ref_A
dtype: string
- name: human_ref_B
dtype: string
- name: labels
dtype: int64
- name: seconds_difference
dtype: float64
- name: score_ratio
dtype: float64
- name: helpfulness_A
dtype: float64
- name: helpfulness_B
dtype: float64
- name: specificity_A
dtype: float64
- name: specificity_B
dtype: float64
- name: intent_A
dtype: float64
- name: intent_B
dtype: float64
- name: factuality_A
dtype: float64
- name: factuality_B
dtype: float64
- name: easy-to-understand_A
dtype: float64
- name: easy-to-understand_B
dtype: float64
- name: relevance_A
dtype: float64
- name: relevance_B
dtype: float64
- name: readability_A
dtype: float64
- name: readability_B
dtype: float64
- name: enough-detail_A
dtype: float64
- name: enough-detail_B
dtype: float64
- name: biased:_A
dtype: float64
- name: biased:_B
dtype: float64
- name: fail-to-consider-individual-preferences_A
dtype: float64
- name: fail-to-consider-individual-preferences_B
dtype: float64
- name: repetetive_A
dtype: float64
- name: repetetive_B
dtype: float64
- name: fail-to-consider-context_A
dtype: float64
- name: fail-to-consider-context_B
dtype: float64
- name: too-long_A
dtype: float64
- name: too-long_B
dtype: float64
- name: __index_level_0__
dtype: int64
- name: log_score_A
dtype: float64
- name: log_score_B
dtype: float64
- name: external_rm1_A
dtype: float64
- name: external_rm1_B
dtype: float64
splits:
- name: train
num_bytes: 20858406
num_examples: 9459
- name: test
num_bytes: 20811284
num_examples: 9459
download_size: 24209228
dataset_size: 41669690
---
# Dataset Card for "shp_with_features_20k_flan_t5_large_external_rm1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Anirith/2345111 | ---
license: apache-2.0
---
|
liuyanchen1015/MULTI_VALUE_mnli_perfect_slam | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 134358
num_examples: 548
- name: dev_mismatched
num_bytes: 165909
num_examples: 636
- name: test_matched
num_bytes: 157113
num_examples: 616
- name: test_mismatched
num_bytes: 157412
num_examples: 629
- name: train
num_bytes: 5569654
num_examples: 22574
download_size: 3771244
dataset_size: 6184446
---
# Dataset Card for "MULTI_VALUE_mnli_perfect_slam"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/wikiclir_en-simple | ---
pretty_name: '`wikiclir/en-simple`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wikiclir/en-simple`
The `wikiclir/en-simple` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/en-simple).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=127,089
- `queries` (i.e., topics); count=114,572
- `qrels` (relevance assessments); count=250,380
## Usage
```python
from datasets import load_dataset

docs = load_dataset('irds/wikiclir_en-simple', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'title': ..., 'text': ...}

queries = load_dataset('irds/wikiclir_en-simple', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/wikiclir_en-simple', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
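For evaluation, the qrels records iterated above can be collected into the nested `{query_id: {doc_id: relevance}}` mapping that common IR evaluation tools expect; a small stdlib-only sketch:

```python
from collections import defaultdict

def build_qrels(records):
    """Collect qrel records into {query_id: {doc_id: relevance}}."""
    qrels = defaultdict(dict)
    for rec in records:
        qrels[rec["query_id"]][rec["doc_id"]] = rec["relevance"]
    return dict(qrels)
```

Passing the `qrels` dataset loaded above yields one inner dict per topic.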
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
|
wobswobs/Vox | ---
license: bigscience-openrail-m
---
|
lusstta/stable_diffusion_instructional_dataset | ---
task_categories:
- text2text-generation
- question-answering
language:
- en
tags:
- stable diffusion
- llama
- llama2
- chatgpt
- prompt
- llm
- dataset
- finetune
- train
- qlora
- lora
pretty_name: Stable Diffusion Instruct Dataset - AiresAI
---
# Stable Diffusion Dataset
# Description:
This dataset is in JSONL format and is based on MadVoyager/stable_diffusion_instructional_dataset.
# Overview:
The Stable Diffusion Dataset comprises approximately 80,000 meticulously curated prompts sourced from "Lexica.art", an image search engine for Stable Diffusion generations. The dataset is intended to facilitate training and fine-tuning of various language models, including LLaMa2.
# Key Features:
◉ JSONL format for seamless integration with existing projects.
◉ High-quality prompts extracted from the Stable Diffusion image finder.
◉ Ideal for enhancing models like LLaMa2 through training and fine-tuning.
# Usage:
Researchers and developers can utilize this dataset to:
◉ Train and fine-tune language models like LLaMa2.
◉ Conduct experiments in natural language processing and generation.
◉ Enhance and expand AI capabilities in creative and interactive applications.
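Because the data ships as JSONL (one JSON object per line), it can be read with the standard library alone; the field names below are an assumption about the record shape, and the sample lines are invented stand-ins:

```python
import io
import json

# Toy stand-in for a few lines of the JSONL file; field names are assumed.
sample = io.StringIO(
    '{"instruction": "Describe a castle", "output": "majestic castle, matte painting"}\n'
    '{"instruction": "A forest scene", "output": "misty forest, volumetric light"}\n'
)
records = [json.loads(line) for line in sample if line.strip()]
```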
# Acknowledgments:
We acknowledge the creators and contributors of the MadVoyager/stable_diffusion_instructional_dataset for providing the foundation for this dataset. |
edbeeching/prj_gia_dataset_atari_2B_atari_enduro_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning environment for the atari_enduro environment, with samples drawn from the policy atari_2B_atari_enduro_1111.
This environment was created as part of the Generally Intelligent Agents (gia) project: https://github.com/huggingface/gia
|
open-llm-leaderboard/details_yam-peleg__Experiment26-7B | ---
pretty_name: Evaluation run of yam-peleg/Experiment26-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yam-peleg__Experiment26-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-01T20:37:14.349624](https://huggingface.co/datasets/open-llm-leaderboard/details_yam-peleg__Experiment26-7B/blob/main/results_2024-03-01T20-37-14.349624.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6498242327367915,\n\
\ \"acc_stderr\": 0.03203527519486544,\n \"acc_norm\": 0.6487304781510416,\n\
\ \"acc_norm_stderr\": 0.032710587128299856,\n \"mc1\": 0.6340269277845777,\n\
\ \"mc1_stderr\": 0.016862941684088386,\n \"mc2\": 0.7803918385000735,\n\
\ \"mc2_stderr\": 0.01369516952969401\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7150170648464164,\n \"acc_stderr\": 0.013191348179838795,\n\
\ \"acc_norm\": 0.7312286689419796,\n \"acc_norm_stderr\": 0.012955065963710695\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7171878111929895,\n\
\ \"acc_stderr\": 0.004494454911844619,\n \"acc_norm\": 0.8911571400119498,\n\
\ \"acc_norm_stderr\": 0.0031080545633521066\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.028049186315695255,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.028049186315695255\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\"\
: 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.04755129616062946,\n\
\ \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.04755129616062946\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055273,\n \"\
acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.025355741263055273\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\
\ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\
\ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7838709677419354,\n\
\ \"acc_stderr\": 0.02341529343356853,\n \"acc_norm\": 0.7838709677419354,\n\
\ \"acc_norm_stderr\": 0.02341529343356853\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n\
\ \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\
: 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.9119170984455959,\n \"acc_stderr\": 0.02045374660160103,\n\
\ \"acc_norm\": 0.9119170984455959,\n \"acc_norm_stderr\": 0.02045374660160103\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n\
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.31851851851851853,\n \"acc_stderr\": 0.02840653309060846,\n \
\ \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.02840653309060846\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"\
acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8440366972477065,\n \"acc_stderr\": 0.01555580271359017,\n \"\
acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.01555580271359017\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8431372549019608,\n \"acc_stderr\": 0.02552472232455335,\n \"\
acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.02552472232455335\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159465,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159465\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.822477650063857,\n\
\ \"acc_stderr\": 0.013664230995834841,\n \"acc_norm\": 0.822477650063857,\n\
\ \"acc_norm_stderr\": 0.013664230995834841\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4223463687150838,\n\
\ \"acc_stderr\": 0.016519594275297117,\n \"acc_norm\": 0.4223463687150838,\n\
\ \"acc_norm_stderr\": 0.016519594275297117\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818733,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818733\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n\
\ \"acc_stderr\": 0.02608270069539966,\n \"acc_norm\": 0.6977491961414791,\n\
\ \"acc_norm_stderr\": 0.02608270069539966\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135114,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135114\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \
\ \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4771838331160365,\n\
\ \"acc_stderr\": 0.012756933382823698,\n \"acc_norm\": 0.4771838331160365,\n\
\ \"acc_norm_stderr\": 0.012756933382823698\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n\
\ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093092,\n \
\ \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093092\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6340269277845777,\n\
\ \"mc1_stderr\": 0.016862941684088386,\n \"mc2\": 0.7803918385000735,\n\
\ \"mc2_stderr\": 0.01369516952969401\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8500394632991318,\n \"acc_stderr\": 0.010034394804580809\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7043214556482184,\n \
\ \"acc_stderr\": 0.012570068947898775\n }\n}\n```"
repo_url: https://huggingface.co/yam-peleg/Experiment26-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|arc:challenge|25_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|arc:challenge|25_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|gsm8k|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|gsm8k|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hellaswag|10_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hellaswag|10_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T20-05-18.899144.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-01T20-37-14.349624.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-01T20-37-14.349624.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- '**/details_harness|winogrande|5_2024-02-29T20-05-18.899144.parquet'
- split: 2024_03_01T20_37_14.349624
path:
- '**/details_harness|winogrande|5_2024-03-01T20-37-14.349624.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-01T20-37-14.349624.parquet'
- config_name: results
data_files:
- split: 2024_02_29T20_05_18.899144
path:
- results_2024-02-29T20-05-18.899144.parquet
- split: 2024_03_01T20_37_14.349624
path:
- results_2024-03-01T20-37-14.349624.parquet
- split: latest
path:
- results_2024-03-01T20-37-14.349624.parquet
---
# Dataset Card for Evaluation run of yam-peleg/Experiment26-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yam-peleg__Experiment26-7B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-01T20:37:14.349624](https://huggingface.co/datasets/open-llm-leaderboard/details_yam-peleg__Experiment26-7B/blob/main/results_2024-03-01T20-37-14.349624.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in its own configuration, under the "latest" split):
```python
{
"all": {
"acc": 0.6498242327367915,
"acc_stderr": 0.03203527519486544,
"acc_norm": 0.6487304781510416,
"acc_norm_stderr": 0.032710587128299856,
"mc1": 0.6340269277845777,
"mc1_stderr": 0.016862941684088386,
"mc2": 0.7803918385000735,
"mc2_stderr": 0.01369516952969401
},
"harness|arc:challenge|25": {
"acc": 0.7150170648464164,
"acc_stderr": 0.013191348179838795,
"acc_norm": 0.7312286689419796,
"acc_norm_stderr": 0.012955065963710695
},
"harness|hellaswag|10": {
"acc": 0.7171878111929895,
"acc_stderr": 0.004494454911844619,
"acc_norm": 0.8911571400119498,
"acc_norm_stderr": 0.0031080545633521066
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.028049186315695255,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.028049186315695255
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.04755129616062946,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.04755129616062946
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.025355741263055273,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.025355741263055273
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9119170984455959,
"acc_stderr": 0.02045374660160103,
"acc_norm": 0.9119170984455959,
"acc_norm_stderr": 0.02045374660160103
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563976,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563976
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.02840653309060846,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.02840653309060846
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.03958027231121569,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.03958027231121569
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.01555580271359017,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.01555580271359017
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.02552472232455335,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.02552472232455335
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159465,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159465
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.822477650063857,
"acc_stderr": 0.013664230995834841,
"acc_norm": 0.822477650063857,
"acc_norm_stderr": 0.013664230995834841
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4223463687150838,
"acc_stderr": 0.016519594275297117,
"acc_norm": 0.4223463687150838,
"acc_norm_stderr": 0.016519594275297117
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.02608270069539966,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.02608270069539966
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.024477222856135114,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.024477222856135114
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48226950354609927,
"acc_stderr": 0.02980873964223777,
"acc_norm": 0.48226950354609927,
"acc_norm_stderr": 0.02980873964223777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4771838331160365,
"acc_stderr": 0.012756933382823698,
"acc_norm": 0.4771838331160365,
"acc_norm_stderr": 0.012756933382823698
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.018901015322093092,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.018901015322093092
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6340269277845777,
"mc1_stderr": 0.016862941684088386,
"mc2": 0.7803918385000735,
"mc2_stderr": 0.01369516952969401
},
"harness|winogrande|5": {
"acc": 0.8500394632991318,
"acc_stderr": 0.010034394804580809
},
"harness|gsm8k|5": {
"acc": 0.7043214556482184,
"acc_stderr": 0.012570068947898775
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
BangumiBase/welcometothenhk | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Welcome To The N.h.k.
This is the image base of the bangumi Welcome to the N.H.K. We detected 17 characters and 2,205 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1316 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 37 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 323 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 71 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 8 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 13 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 47 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 26 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 107 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 9 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 5 | [Download](10/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 11 | 74 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 16 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 8 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 22 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 45 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 78 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
CyberHarem/serena_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of serena/セレナ (Pokémon)
This is the dataset of serena/セレナ (Pokémon), containing 500 images and their tags.
The core tags of this character are `long_hair, blue_eyes, hat, blonde_hair, breasts, sunglasses, eyelashes, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 605.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 363.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1181 | 744.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 542.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1181 | 1021.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/serena_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/serena_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 33 |  |  |  |  |  | 1girl, eyewear_on_headwear, pleated_skirt, red_skirt, sleeveless_shirt, solo, bracelet, black_thighhighs, pink_bag, pink_headwear, collared_shirt, looking_at_viewer, white-framed_eyewear, black_shirt, grey_eyes, high-waist_skirt, handbag, open_mouth, :d, blush, red_headwear, shoes |
| 1 | 7 |  |  |  |  |  | 1girl, collared_shirt, eyewear_on_headwear, pleated_skirt, red_skirt, sleeveless_shirt, white-framed_eyewear, high-waist_skirt, looking_at_viewer, pink_headwear, red_headwear, solo, black_shirt, black_thighhighs, parted_lips, floating_hair, sitting, white_background, zettai_ryouiki |
| 2 | 8 |  |  |  |  |  | 1girl, black_thighhighs, solo, eyewear_on_head, pleated_skirt, sleeveless, smile, bracelet, zettai_ryouiki |
| 3 | 10 |  |  |  |  |  | 1girl, blush, day, outdoors, solo, black_thighhighs, looking_at_viewer, open_mouth, sky, tree, cloud, no_panties, pleated_skirt, red_skirt, sleeveless_shirt, pink_headwear, :d, black_shirt, tongue, anus, bow, bush, flower, from_behind, looking_back, pussy_juice, bare_shoulders, grass, shiny, sweat, uncensored |
| 4 | 34 |  |  |  |  |  | 1girl, nipples, blush, open_mouth, navel, 1boy, hetero, pussy, penis, sex, vaginal, mosaic_censoring, spread_legs, day, collarbone, light_brown_hair, outdoors, shiny_skin, tongue, completely_nude, solo_focus, grass, looking_at_viewer, shiny_hair, smile, cum, sweat, tree |
| 5 | 5 |  |  |  |  |  | 1girl, cloud, looking_at_viewer, navel, outdoors, solo, blush, day, medium_breasts, ocean, water, wet, beach, blue_sky, closed_mouth, nipples, shiny, bangs, cleavage, collarbone, completely_nude, front-tie_top, pink_bikini, pussy, rock, side-tie_bikini_bottom, smile, standing, wading |
| 6 | 5 |  |  |  |  |  | 1girl, heart, looking_at_viewer, anus, blush, female_pubic_hair, solo, uncensored, ass, choker, grin, nude, on_back, presenting, spread_legs, artist_name, black_thighhighs, clitoris, closed_mouth, simple_background, spread_pussy, sweat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | eyewear_on_headwear | pleated_skirt | red_skirt | sleeveless_shirt | solo | bracelet | black_thighhighs | pink_bag | pink_headwear | collared_shirt | looking_at_viewer | white-framed_eyewear | black_shirt | grey_eyes | high-waist_skirt | handbag | open_mouth | :d | blush | red_headwear | shoes | parted_lips | floating_hair | sitting | white_background | zettai_ryouiki | eyewear_on_head | sleeveless | smile | day | outdoors | sky | tree | cloud | no_panties | tongue | anus | bow | bush | flower | from_behind | looking_back | pussy_juice | bare_shoulders | grass | shiny | sweat | uncensored | nipples | navel | 1boy | hetero | pussy | penis | sex | vaginal | mosaic_censoring | spread_legs | collarbone | light_brown_hair | shiny_skin | completely_nude | solo_focus | shiny_hair | cum | medium_breasts | ocean | water | wet | beach | blue_sky | closed_mouth | bangs | cleavage | front-tie_top | pink_bikini | rock | side-tie_bikini_bottom | standing | wading | heart | female_pubic_hair | ass | choker | grin | nude | on_back | presenting | artist_name | clitoris | simple_background | spread_pussy |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------|:----------------|:------------|:-------------------|:-------|:-----------|:-------------------|:-----------|:----------------|:-----------------|:--------------------|:-----------------------|:--------------|:------------|:-------------------|:----------|:-------------|:-----|:--------|:---------------|:--------|:--------------|:----------------|:----------|:-------------------|:-----------------|:------------------|:-------------|:--------|:------|:-----------|:------|:-------|:--------|:-------------|:---------|:-------|:------|:-------|:---------|:--------------|:---------------|:--------------|:-----------------|:--------|:--------|:--------|:-------------|:----------|:--------|:-------|:---------|:--------|:--------|:------|:----------|:-------------------|:--------------|:-------------|:-------------------|:-------------|:------------------|:-------------|:-------------|:------|:-----------------|:--------|:--------|:------|:--------|:-----------|:---------------|:--------|:-----------|:----------------|:--------------|:-------|:-------------------------|:-----------|:---------|:--------|:--------------------|:------|:---------|:-------|:-------|:----------|:-------------|:--------------|:-----------|:--------------------|:---------------|
| 0 | 33 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | X | | X | | X | X | X | X | X | | X | | | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | | X | X | X | X | | X | | X | | X | | X | | | | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 34 |  |  |  |  |  | X | | | | | | | | | | | X | | | | | | X | | X | | | | | | | | | | X | X | X | | X | | | X | | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | | X | | | | | | X | | | | | | | | X | | | | | | | | | | X | X | X | | | X | | | | | | | | | | | | X | | | X | X | | | X | | | | | | X | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | | X | | X | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | | | | | | | | | | X | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
bdsaglam/musique-jerx-sft-multi-turn-openai | ---
dataset_info:
features:
- name: chat
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 98554
num_examples: 58
download_size: 33868
dataset_size: 98554
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
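The YAML above declares each example as a `chat` list of `{content, role}` turns. A minimal sketch of walking that structure (the sample record below is hypothetical, not taken from the data):

```python
# Hypothetical example matching the schema declared above:
# a "chat" field holding a list of {"content", "role"} turns
example = {
    "chat": [
        {"role": "system", "content": "You are a multi-hop QA assistant."},
        {"role": "user", "content": "Which entities are mentioned here?"},
        {"role": "assistant", "content": "Entity A; Entity B."},
    ]
}

for turn in example["chat"]:
    print(f'{turn["role"]}: {turn["content"]}')
```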
|
prash1721/visainterviewquestions | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 93936
num_examples: 310
download_size: 12106
dataset_size: 93936
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chromadb/paul_graham_essay | ---
dataset_info:
features:
- name: id
dtype: string
- name: embedding
sequence: float64
- name: metadata
struct:
- name: author
dtype: string
- name: document
dtype: string
splits:
- name: data
num_bytes: 1359141
num_examples: 104
download_size: 1270436
dataset_size: 1359141
---
# Dataset Card for "paul_graham_essay"
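Each record pairs a precomputed float64 `embedding` with its source `document` (per the feature list above), so retrieval over this data reduces to nearest-neighbour search. A stdlib-only sketch, with hypothetical 4-dimensional vectors standing in for the dataset's embeddings:

```python
import math

def cosine_similarity(a, b):
    # plain-Python cosine similarity, enough for small float vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings standing in for the dataset's float64 vectors
query = [0.1, 0.3, 0.5, 0.1]
chunks = {
    "chunk-0": [0.1, 0.3, 0.5, 0.1],
    "chunk-1": [0.9, 0.0, 0.1, 0.0],
}
best = max(chunks, key=lambda cid: cosine_similarity(query, chunks[cid]))
print(best)
```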
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nexdata/71535_Images_English_OCR_Data_in_Natural_Scenes | ---
license: cc-by-nc-nd-4.0
---
## Description
71,535 Images English OCR Data in Natural Scenes. This dataset was collected in real scenes in Britain and the United States. Its diversity includes multiple scenes, multiple photographic angles and multiple light conditions. For annotation, line-level, word-level and character-level rectangular or quadrilateral bounding boxes were adopted, along with text transcription. The dataset can be used for English OCR tasks in natural scenes.
For more details, please refer to the link: https://www.nexdata.ai/dataset/162?source=Huggingface
## Data size
71,535 images, each image has 1-200 words
## Collecting environment
onsite collection in Britain and the United States, including shop plaques, posters, road signs, reminders, warnings, packing instructions, menus, building signs, etc.
## Data diversity
including multiple scenes, multiple photographic angles, multiple light conditions
## Device
cellphone, camera, tablet
## Photographic angle
looking up angle, looking down angle, eye-level angle
## Data format
the image data format is .jpg, the annotation file format is .json
## Annotation content
line-level & word-level & character-level rectangular bounding box or quadrilateral bounding box annotation; transcription for the texts
## Accuracy
the accuracy of bounding box annotation is not less than 95%; the text transcription accuracy is not less than 95%
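The card states that annotations ship as per-image `.json` files with bounding boxes and transcriptions, but does not publish the exact schema. The sketch below parses a hypothetical annotation record; the field names (`image`, `annotations`, `level`, `bbox`, `text`) are illustrative assumptions, not the vendor's actual format.

```python
import json

# Hypothetical annotation record -- the real field names in Nexdata's
# .json files may differ; treat this schema as an assumption.
sample = """
{
  "image": "street_sign_0001.jpg",
  "annotations": [
    {"level": "word", "bbox": [120, 48, 260, 92], "text": "EXIT"},
    {"level": "line", "bbox": [100, 40, 480, 100], "text": "EXIT ONLY"}
  ]
}
"""

record = json.loads(sample)
# Collect word-level transcriptions with their rectangular boxes
words = [(a["text"], a["bbox"]) for a in record["annotations"]
         if a["level"] == "word"]
print(words)
```

Adapting this to the actual files only requires matching the key names to the delivered annotation schema.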
# Licensing Information
Commercial License
|
Msun/sunrgbd | ---
license: wtfpl
---
|
Vision-Flan/vision-flan_191-task_1k | ---
task_categories:
- visual-question-answering
language:
- en
pretty_name: Vision-Flan
size_categories:
- 100K<n<1M
---
# 🚀 Vision-Flan Dataset
vision-flan_191-task-1k is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks and 1,000 examples for each task.
It is constructed for visual instruction tuning and for building large-scale vision-language models.
## Paper or blog for more information:
https://github.com/VT-NLP/MultiInstruct/
https://vision-flan.github.io/
*Paper coming soon* 😊
## Citation
*Paper coming soon* 😊. If you use Vision-Flan, please use the following citations:
```
@misc{visionFlan2023,
title = {Vision-Flan: Scaling Visual Instruction Tuning},
url = {https://vision-flan.github.io/},
author = {Zhiyang Xu and Trevor Ashby and Chao Feng and Rulin Shao and Ying Shen and Di Jin and Qifan Wang and Lifu Huang},
month = {Sep},
year = {2023}
}
```
```
@inproceedings{DBLP:conf/acl/XuSH23,
author = {Zhiyang Xu and Ying Shen and Lifu Huang},
editor = {Anna Rogers and Jordan L. Boyd{-}Graber and Naoaki Okazaki},
title = {MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning},
booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2023, Toronto, Canada, July 9-14, 2023},
pages = {11445--11465},
publisher = {Association for Computational Linguistics},
year = {2023},
url = {https://doi.org/10.18653/v1/2023.acl-long.641},
doi = {10.18653/v1/2023.acl-long.641},
timestamp = {Thu, 10 Aug 2023 12:35:59 +0200},
biburl = {https://dblp.org/rec/conf/acl/XuSH23.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## License:
Please carefully check the licenses for all the datasets on this [page](https://vision-flan.github.io/tasks.html) before use.
## Contact:
If you have any questions or concerns please contact us at zhiyangx@vt.edu . |
yukuai0011/elec5307-project-2-dataset-splited-public | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Apple
'1': Avocado
'2': Banana
'3': Blueberry
'4': Coconut
'5': Cucumber
'6': Dragon_fruit
'7': Grape
'8': Grapefruit
'9': Kiwifruit
'10': Lemon
'11': Lychee
'12': Mangoes
'13': Orange
'14': Papaya
'15': Passion fruit
'16': Peach
'17': Pear
'18': Pineapple
'19': Pomegranate
'20': Raspberry
'21': Rockmelon
'22': Strawberries
'23': Tomato
'24': Waterlemon
splits:
- name: train
num_bytes: 270703771.307
num_examples: 2421
- name: test
num_bytes: 63336528.0
num_examples: 605
download_size: 320028339
dataset_size: 334040299.307
---
# Dataset Card for "elec5307-project-2-dataset-splited-public"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
google/xtreme_s | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- cat
- ceb
- cmn
- ces
- cym
- dan
- deu
- ell
- eng
- spa
- est
- fas
- ful
- fin
- tgl
- fra
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ind
- ibo
- isl
- ita
- jpn
- jav
- kat
- kam
- kea
- kaz
- khm
- kan
- kor
- ckb
- kir
- ltz
- lug
- lin
- lao
- lit
- luo
- lav
- mri
- mkd
- mal
- mon
- mar
- msa
- mlt
- mya
- nob
- npi
- nld
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- pus
- por
- ron
- rus
- bul
- snd
- slk
- slv
- sna
- som
- srp
- swe
- swh
- tam
- tel
- tgk
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zul
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: librispeech-1
pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
(XTREME-S) benchmark is a benchmark designed to evaluate speech representations
across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.'
size_categories:
- 10K<n<100K
source_datasets:
- extended|multilingual_librispeech
- extended|covost2
task_categories:
- automatic-speech-recognition
- speech-processing
task_ids:
- speech-recognition
---
# XTREME-S
## Dataset Description
- **Fine-Tuning script:** [research-projects/xtreme-s](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s)
- **Paper:** [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752)
- **Leaderboard:** [TODO(PVP)]()
- **FLEURS amount of disk used:** 350 GB
- **Multilingual Librispeech amount of disk used:** 2700 GB
- **Voxpopuli amount of disk used:** 400 GB
- **Covost2 amount of disk used:** 70 GB
- **Minds14 amount of disk used:** 5 GB
- **Total amount of disk used:** ca. 3500 GB
The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.
***TLDR; XTREME-S is the first speech benchmark that is diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code.
An easy-to-use and flexible fine-tuning script is provided and actively maintained.***
XTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
## Design principles
### Diversity
XTREME-S aims for task, domain and language
diversity. Tasks should be diverse and cover several domains to
provide a reliable evaluation of model generalization and
robustness to noisy naturally-occurring speech in different
environments. Languages should be diverse to ensure that
models can adapt to a wide range of linguistic and phonological
phenomena.
### Accessibility
The sub-dataset for each task can be downloaded
with a **single line of code** as shown in [Supported Tasks](#supported-tasks).
Each task is available under a permissive license that allows the use and redistribution
of the data for research purposes. Tasks have been selected based on their usage by
pre-existing multilingual pre-trained models, for simplicity.
### Reproducibility
We produce fully **open-sourced, maintained and easy-to-use** fine-tuning scripts
for each task as shown under [Fine-tuning Example](#fine-tuning-and-evaluation-example).
XTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use.
In general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.
## Fine-tuning and Evaluation Example
We provide a fine-tuning script under [**research-projects/xtreme-s**](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s).
The fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any [Hugging Face model](https://huggingface.co/models) on XTREME-S.
The example script is actively maintained by [@anton-l](https://github.com/anton-l) and [@patrickvonplaten](https://github.com/patrickvonplaten). Feel free
to reach out via issues or pull requests on GitHub if you have any questions.
## Leaderboards
The leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().
## Supported Tasks
Note that the supported tasks focus particularly on the linguistic aspects of speech,
while nonlinguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are **not** evaluated.
<p align="center">
<img src="https://github.com/patrickvonplaten/scientific_images/raw/master/xtreme_s.png" alt="Datasets used in XTREME"/>
</p>
### 1. Speech Recognition (ASR)
We include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.
#### FLEURS-ASR
*FLEURS-ASR* is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
#### Multilingual LibriSpeech (MLS)
*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge the training data is limited to 10-hour splits.
```py
from datasets import load_dataset
mls = load_dataset("google/xtreme_s", "mls.pl") # for Polish
# to download all data for multi-lingual fine-tuning uncomment following line
# mls = load_dataset("google/xtreme_s", "mls.all")
# see structure
print(mls)
# load audio sample on the fly
audio_input = mls["train"][0]["audio"] # first decoded audio sample
transcription = mls["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### VoxPopuli
*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.
**Loading VoxPopuli requires downloading the whole 100 GB dataset, since the languages
are entangled with each other; it may not be worth testing here due to its size.**
```py
from datasets import load_dataset
voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.ro") # for Romanian
# to download all data for multi-lingual fine-tuning uncomment following line
# voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.all")
# see structure
print(voxpopuli)
# load audio sample on the fly
audio_input = voxpopuli["train"][0]["audio"] # first decoded audio sample
transcription = voxpopuli["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### (Optionally) BABEL
*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark that is less easily accessible, so you will need to sign in on LDC to get access to it. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as")
```
**The above command is expected to fail with a nice error message,
explaining how to download BABEL**
The following should work:
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as", data_dir="/path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip")
# see structure
print(babel)
# load audio sample on the fly
audio_input = babel["train"][0]["audio"] # first decoded audio sample
transcription = babel["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
### 2. Speech Translation (ST)
We include the CoVoST-2 dataset for automatic speech translation.
#### CoVoST-2
The *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the "X->En" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multilingual fine-tuning for simplicity. Results are split into high/med/low-resource language pairs as explained in the [paper (TODO(PVP))].
```py
from datasets import load_dataset
covost_2 = load_dataset("google/xtreme_s", "covost2.id.en") # for Indonesian to English
# to download all data for multi-lingual fine-tuning uncomment following line
# covost_2 = load_dataset("google/xtreme_s", "covost2.all")
# see structure
print(covost_2)
# load audio sample on the fly
audio_input = covost_2["train"][0]["audio"] # first decoded audio sample
transcription = covost_2["train"][0]["transcription"] # first transcription
translation = covost_2["train"][0]["translation"] # first translation
# use audio_input and translation to fine-tune your model for AST
```
### 3. Speech Classification
We include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.
#### Language Identification - FLEURS-LangID
LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test split for LangID by merging all languages.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/xtreme_s", "fleurs.all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
#### Intent classification - Minds-14
Minds-14 is an intent classification dataset made from e-banking speech data in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning setup to increase the size of the train and test sets and reduce the variance associated with the small size of the dataset per language.
```py
from datasets import load_dataset
minds_14 = load_dataset("google/xtreme_s", "minds14.fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("google/xtreme_s", "minds14.all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"]  # first intent class
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and intent_class to fine-tune your model for audio classification
```
### 4. (Optionally) Speech Retrieval
We optionally include one speech retrieval dataset: FLEURS-Retrieval as explained in the [FLEURS paper](https://arxiv.org/abs/2205.12446).
#### FLEURS-Retrieval
FLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
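As a minimal sketch of the ranking-loss idea mentioned above, the function below computes a margin-based hinge loss over fixed-size embeddings with cosine similarity. This is an illustrative toy on random vectors, not the official XTREME-S training recipe; in practice the embeddings would come from a learned speech/text encoder and negatives would be mined within a batch.

```python
import numpy as np

def margin_ranking_loss(query, positive, negatives, margin=0.2):
    """Hinge loss pushing the positive key to be at least `margin`
    more cosine-similar to the query than each negative key."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos_sim = cos(query, positive)
    losses = [max(0.0, margin - pos_sim + cos(query, n)) for n in negatives]
    return sum(losses) / len(losses)

rng = np.random.default_rng(0)
q = rng.normal(size=16)                    # "query" utterance embedding
pos = q + 0.05 * rng.normal(size=16)       # near-duplicate positive key
negs = [rng.normal(size=16) for _ in range(5)]  # random negative keys
loss = margin_ranking_loss(q, pos, negs)
print(loss)
```

Minimizing such a loss over the FLEURS-Retrieval training set encourages translations of the same sentence to map to nearby fixed-size vectors.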
## Dataset Structure
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-structure)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-structure)
Note that for MLS, XTREME-S uses `path` instead of `file` and `transcription` instead of `text`.
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-structure)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-structure)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-structure)
Note that for Covost2, XTREME-S uses `path` instead of `file` and `transcription` instead of `sentence`.
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-structure)
Please click on the link of the dataset cards to get more information about its dataset structure.
## Dataset Creation
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-creation)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-creation)
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-creation)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-creation)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-creation)
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-creation)
Please visit the corresponding dataset cards to get more information about the source data.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition or speech translation, meaning better dubbing and better access to content from the internet (like podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.
### Other Known Limitations
The benchmark has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
#### XTREME-S
```
@article{conneau2022xtreme,
title={XTREME-S: Evaluating Cross-lingual Speech Representations},
author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others},
journal={arXiv preprint arXiv:2203.10752},
year={2022}
}
```
#### MLS
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
#### VoxPopuli
```
@article{wang2021voxpopuli,
title={Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation},
author={Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel},
journal={arXiv preprint arXiv:2101.00390},
year={2021}
}
```
#### CoVoST 2
```
@article{DBLP:journals/corr/abs-2007-10310,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino},
title = {CoVoST 2: {A} Massively Multilingual Speech-to-Text Translation Corpus},
journal = {CoRR},
volume = {abs/2007.10310},
year = {2020},
url = {https://arxiv.org/abs/2007.10310},
eprinttype = {arXiv},
eprint = {2007.10310},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-10310.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Minds14
```
@article{gerz2021multilingual,
title={Multilingual and cross-lingual intent detection from spoken data},
author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Micha{\l} and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2104.08524},
year={2021}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@anton-l](https://github.com/anton-l), [@aconneau](https://github.com/aconneau) for adding this dataset
|
vikp/code_instructions_filtered_7k | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: quality_prob
dtype: float64
- name: learning_prob
dtype: float64
splits:
- name: train
num_bytes: 3935708.9048315734
num_examples: 7526
download_size: 2442024
dataset_size: 3935708.9048315734
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code_instructions_filtered_7k"
Filtered version of `sahil2801/code_instructions_120k` based on manual, quality, and learning value filters. |
Gbssreejith/Type2_dataset_235 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 58109075.0
num_examples: 211
- name: val
num_bytes: 6748431.0
num_examples: 24
download_size: 64646791
dataset_size: 64857506.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
---
|
irds/beir_fever_test | ---
pretty_name: '`beir/fever/test`'
viewer: false
source_datasets: ['irds/beir_fever']
task_categories:
- text-retrieval
---
# Dataset Card for `beir/fever/test`
The `beir/fever/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=6,666
- `qrels`: (relevance assessments); count=7,937
- For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_fever_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_fever_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Thorne2018Fever,
title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
author = "Thorne, James and
Vlachos, Andreas and
Christodoulopoulos, Christos and
Mittal, Arpit",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N18-1074",
doi = "10.18653/v1/N18-1074",
pages = "809--819"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
Thewillonline/l-gpt4 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 20624506405
num_examples: 2841711
download_size: 12706539991
dataset_size: 20624506405
---
# Dataset Card for "l-gpt4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bassie96code/Label_lijsten | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: toktekst-met-labels
pretty_name: Toktekst-met-labels
dataset_info:
features:
- name: id
dtype: string
- name: tok_wettekst
sequence: string
- name: label-lijsten
sequence:
class_label:
names:
'0': O
'1': B-subj
'2': I-subj
'3': Betr
config_name: label_lijsten
splits:
- name: train
num_bytes: 6931345
num_examples: 90
- name: validation
num_bytes: 1739223
num_examples: 5
- name: test
num_bytes: 1582054
num_examples: 5
download_size: 982975
dataset_size: 10252622
train-eval-index:
- config: toktekst-met-labels
task: token-classification
task_id: element-extraction
splits:
train_split: train
eval_split: test
col_mapping:
tok_wettekst: tokens
label-lijsten: tags
metrics:
- type: seqeval
name: seqeval
---
# Dataset Card for "conll2003"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
### Dataset Summary
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.
The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2
tagging scheme, whereas the original dataset uses IOB1.
For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
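The four-column, space-separated format described above can be parsed with a few lines of plain Python. This is a generic sketch of the CoNLL-2003 file layout, not the loader this dataset actually uses (which removes `-DOCSTART-` lines and maps tags to integer ids).

```python
def parse_conll(text):
    """Parse space-separated CoNLL-2003 lines into sentences of
    (token, pos, chunk, ner) tuples; a blank line ends a sentence."""
    sentences, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:                      # empty line closes a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, pos, chunk, ner = line.split(" ")
        current.append((token, pos, chunk, ner))
    if current:
        sentences.append(current)
    return sentences

sample = """EU NNP B-NP B-ORG
rejects VBZ B-VP O
German JJ B-NP B-MISC
call NN I-NP O
. . O O

Peter NNP B-NP B-PER
Blackburn NNP I-NP I-PER
"""
sents = parse_conll(sample)
print(len(sents), [len(s) for s in sents])
```

Each inner list is one sentence, mirroring the `tokens`/`pos_tags`/`chunk_tags`/`ner_tags` fields of the processed dataset.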
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### conll2003
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
An example of 'train' looks as follows.
```
{
"chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
"id": "0",
"ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
"tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```
The original data files contain `-DOCSTART-` lines that act as boundaries between documents; these lines are filtered out in this implementation.
### Data Fields
The data fields are the same among all splits.
#### conll2003
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
'WP': 44, 'WP$': 45, 'WRB': 46}
```
- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```
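As a quick illustration, the integer labels can be decoded back to tag strings by inverting the dictionary above (the `tokens`/`ner_tags` values below are a prefix of the train example shown earlier):

```python
# NER tagset copied from the dictionary listed in this card.
ner_index = {'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4,
             'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
# Invert it to map integer labels back to tag strings.
id2label = {v: k for k, v in ner_index.items()}

tokens = ["The", "European", "Commission", "said"]
ner_tags = [0, 3, 4, 0]
for tok, tag in zip(tokens, ner_tags):
    print(f"{tok}\t{id2label[tag]}")
```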
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
### Citation Information
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
```
### Contributions
Thanks to [@jplu](https://github.com/jplu). |
SivaResearch/Agri | ---
license: mit
---
|
FaalSa/cluster0_4 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 558404
num_examples: 7
- name: validation
num_bytes: 561764
num_examples: 7
- name: test
num_bytes: 565124
num_examples: 7
download_size: 8061608
dataset_size: 1685292
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
CyberHarem/himekaidou_hatate_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of himekaidou_hatate/姫海棠はたて/히메카이도하타테 (Touhou)
This is the dataset of himekaidou_hatate/姫海棠はたて/히메카이도하타테 (Touhou), containing 499 images and their tags.
The core tags of this character are `twintails, brown_hair, tokin_hat, hat, long_hair, ribbon, purple_eyes, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 499 | 580.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 499 | 381.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1107 | 735.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 499 | 540.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1107 | 960.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/himekaidou_hatate_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/himekaidou_hatate_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, cellphone, checkered_skirt, necktie, pointy_ears, solo, tengu-geta |
| 1 | 7 |  |  |  |  |  | 1girl, cellphone, checkered_skirt, necktie, solo |
| 2 | 6 |  |  |  |  |  | 1girl, cellphone, checkered_skirt, necktie, solo, blush, pointy_ears |
| 3 | 31 |  |  |  |  |  | 1girl, solo, obi, japanese_clothes, kourindou_tengu_costume, wide_sleeves, looking_at_viewer, pointy_ears, long_sleeves, black_wings, hair_ribbon, smile, alternate_costume, katana, detached_sleeves, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cellphone | checkered_skirt | necktie | pointy_ears | solo | tengu-geta | blush | obi | japanese_clothes | kourindou_tengu_costume | wide_sleeves | looking_at_viewer | long_sleeves | black_wings | hair_ribbon | smile | alternate_costume | katana | detached_sleeves | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:------------------|:----------|:--------------|:-------|:-------------|:--------|:------|:-------------------|:--------------------------|:---------------|:--------------------|:---------------|:--------------|:--------------|:--------|:--------------------|:---------|:-------------------|:-------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | | X | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | X | | X | | | | | | | | | | | | | |
| 3 | 31 |  |  |  |  |  | X | | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
sebastian-hofstaetter/tripclick-training | ---
annotations_creators:
- other
- clicks
language_creators:
- other
language:
- en-US
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: tripclick-training
size_categories:
- unknown
source_datasets: [tripclick]
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# TripClick Baselines with Improved Training Data
*Establishing Strong Baselines for TripClick Health Retrieval* by Sebastian Hofstätter, Sophia Althammer, Mete Sertkan and Allan Hanbury
https://arxiv.org/abs/2201.00365
**tl;dr** We create strong re-ranking and dense retrieval baselines (BERT<sub>CAT</sub>, BERT<sub>DOT</sub>, ColBERT, and TK) for TripClick (health ad-hoc retrieval). We improve the – originally too noisy – training data with a simple negative sampling policy. We achieve large gains over BM25 in the re-ranking and retrieval setting on TripClick, which were not achieved with the original baselines. We publish the improved training files for everyone to use.
If you have any questions, suggestions, or want to collaborate please don't hesitate to get in contact with us via [Twitter](https://twitter.com/s_hofstaetter) or mail to s.hofstaetter@tuwien.ac.at
**Please cite our work as:**
````
@misc{hofstaetter2022tripclick,
title={Establishing Strong Baselines for TripClick Health Retrieval},
author={Sebastian Hofst{\"a}tter and Sophia Althammer and Mete Sertkan and Allan Hanbury},
year={2022},
eprint={2201.00365},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
````
## Published Training Files
We publish the improved training files without the text content, using instead the ids from TripClick (with permission from the TripClick owners); for the text content, please get the full TripClick dataset from [the TripClick Github page](https://github.com/tripdatabase/tripclick).
Our training file **improved_tripclick_train_triple-ids.tsv** has the format `query_id pos_passage_id neg_passage_id` (tab-separated).
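A minimal sketch of reading this triple format (assuming plain tab-separated text with one triple per line, as described above; the in-memory sample ids are purely illustrative):

```python
import csv
import io

def read_triples(fileobj):
    """Yield (query_id, pos_passage_id, neg_passage_id) tuples
    from a tab-separated triples file."""
    for row in csv.reader(fileobj, delimiter="\t"):
        if len(row) == 3:  # skip malformed/empty lines
            yield tuple(row)

# In practice, open('improved_tripclick_train_triple-ids.tsv') instead.
sample = "q1\tp10\tp99\nq2\tp7\tp3\n"
triples = list(read_triples(io.StringIO(sample)))
print(triples)  # [('q1', 'p10', 'p99'), ('q2', 'p7', 'p3')]
```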
----
For more information on how to use the training files see: https://github.com/sebastian-hofstaetter/tripclick |
edbeeching/prj_gia_dataset_atari_2B_atari_defender_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning dataset for the atari_defender environment, sampled from the policy atari_2B_atari_defender_1111.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
homersimpson/opensubtitles_fr | ---
dataset_info:
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: ca
dtype: uint32
- name: fr
dtype: uint32
- name: sentenceIds
struct:
- name: ca
sequence: uint32
- name: fr
sequence: uint32
- name: translation
dtype:
translation:
languages:
- ca
- fr
splits:
- name: train
num_bytes: 29095202.4
num_examples: 240000
- name: validation
num_bytes: 3636900.3
num_examples: 30000
- name: test
num_bytes: 3636900.3
num_examples: 30000
download_size: 26004408
dataset_size: 36369003.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
mii-llm/poetica | ---
dataset_info:
features:
- name: title
dtype: string
- name: author
dtype: string
- name: author_info
dtype: string
- name: poem
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 3014606
num_examples: 2241
download_size: 1783194
dataset_size: 3014606
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "poetica"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Signal0ne/logs-for-evaluation | ---
license: mit
---
|
daniilak/russian_captcha_images | ---
license: cc
language:
- ru
tags:
- image
- captcha
---
# Note
Captcha images are stored as base64 strings.
All csv files have a "\t" separator.
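A minimal sketch of decoding one such row with the Python standard library (the exact column layout is an assumption here; a label column followed by the base64-encoded image is used purely for illustration):

```python
import base64
import csv
import io

# Hypothetical one-row example mimicking the described layout:
# tab-separated columns, image stored as a base64 string.
png_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for real image data
row_text = "1234\t" + base64.b64encode(png_bytes).decode("ascii")

# Parse the tab-separated row, then decode the image column.
label, b64_image = next(csv.reader(io.StringIO(row_text), delimiter="\t"))
image_bytes = base64.b64decode(b64_image)
print(label, len(image_bytes))  # 1234 8
```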
# Dataset consists of several files
## fssp_*.csv
I am publishing an updated version of the archive of 40,310 pictures, which I have divided into 4 categories by captcha length:
- 4 symbols per picture - 6,747 pcs.
- 5 symbols - 18,403 pcs.
- 6 symbols - 7,038 pcs.
- 7 symbols - 7,589 pcs.
Symbols used in these captchas:
'б','в','г','д','ж','к','л','м','н','п','р','с','т','2','4','5','6','7','8','9'
## fms.csv
About 15 thousand captcha images, each consisting of 6 digits.
## rosreestr.csv
About 10 thousand captchas, each consisting of 5 characters drawn from English letters and digits.
## vk.csv
About 19 thousand captchas, each 5 to 6 characters long, consisting of Russian letters and digits. The images come from the social network vk.com.
# Kaggle
This dataset is an updated version of the one I previously published on [Kaggle](https://www.kaggle.com/datasets/mrdaniilak/russian-captcha-images-base64).
### Citation
```
@misc{ russian_captcha_dataset,
title = { Russian Captcha Dataset },
type = { Open Source Dataset },
author = { Daniil Agniashvili },
url = { https://huggingface.co/datasets/daniilak/russian_captcha_images/ },
note = { visited on 2023-02-24 },
}
```
### License
Public Domain |
DavidVivancos/MindBigData2023_MNIST-2B | ---
license: odbl
---
## Dataset Summary
MindBigData 2023 MNIST-2B is a reduced subset of MindBigData 2023 MNIST-8B https://huggingface.co/datasets/DavidVivancos/MindBigData2023_MNIST-8B (June 1st 2023), an open dataset of brain signals created for machine learning. It is based on EEG signals from a single subject, captured using a custom 128-channel device, and replicates the full 70,000 digits of Yann LeCun et al.'s MNIST dataset. The brain signals were captured while the subject was watching the pixels of the original digits one by one on a screen and listening at the same time to the spoken number (0 to 9) from the real label.
Supporting dataset for paper https://arxiv.org/abs/2306.00455
The dataset contains 70,000 records from 128 EEG channels, each of 256 samples (a bit more than 1 second), recorded at 250 Hz.
(From the original 8-billion-datapoint dataset, all the non-digit records (labeled -1, 70,000 records) were removed, and the EEG signals were reduced from 500 samples to 256 samples (a bit more than 1 second).)
It consists of 2 main csv data files:
- “train.csv” 10.7 GB, header + 60,000 rows × 32,558 columns
- “test.csv” 1.79 GB, header + 10,000 rows × 32,558 columns
10 audio files in a folder named “audiolabels”: “0.wav”, “1.wav”, ..., “9.wav”
And 1 csv file with the 3D coordinates of the EEG electrodes: “3Dcoords.csv” 4.27 KB, header + 130 rows × 4 columns
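As an illustration only (the exact CSV column layout is documented in the paper, and channel-major ordering is an assumption here), a flat row of 128 × 256 EEG values could be reshaped per channel like this:

```python
CHANNELS, SAMPLES = 128, 256  # per the description above

def reshape_eeg(flat):
    """Reshape a flat row of EEG values into a channels x samples
    list of lists. Channel-major order is an assumption here;
    check the supporting paper for the exact CSV layout."""
    assert len(flat) == CHANNELS * SAMPLES
    return [flat[c * SAMPLES:(c + 1) * SAMPLES] for c in range(CHANNELS)]

row = list(range(CHANNELS * SAMPLES))  # dummy values standing in for a CSV row
eeg = reshape_eeg(row)
print(len(eeg), len(eeg[0]))  # 128 256
```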
## Dataset Structure
review supporting paper https://arxiv.org/abs/2306.00455
## Data Fields
review supporting paper https://arxiv.org/abs/2306.00455
## Citation
```
@article{MindBigData_2023_MNIST-8B,
title={MindBigData 2023 MNIST-8B The 8 billion datapoints Multimodal Dataset of Brain Signals},
author={David Vivancos},
journal={arXiv preprint arXiv:2306.00455},
year={2023}
}
``` |
Asap7772/skewlognormal | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: output
dtype: string
- name: text
dtype: string
- name: alpaca_text
dtype: string
- name: prompt
dtype: string
- name: alpaca_prompt
dtype: string
- name: y_ref
dtype: string
- name: y_1
dtype: string
- name: y_2
dtype: string
- name: y_w
dtype: string
- name: y_w_alpaca
dtype: string
- name: y_l
dtype: string
- name: y_l_alpaca
dtype: string
- name: y_w_score
dtype: float64
- name: y_l_score
dtype: float64
- name: score_diff
dtype: float64
splits:
- name: train
num_bytes: 77844991
num_examples: 19000
- name: test
num_bytes: 4082779
num_examples: 1000
download_size: 40268839
dataset_size: 81927770
---
# Dataset Card for "skewlognormal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sdadasfgdfgfdg/CaineBR_fandub | ---
license: openrail
---
|
open-llm-leaderboard/details_digitous__13B-Chimera | ---
pretty_name: Evaluation run of digitous/13B-Chimera
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [digitous/13B-Chimera](https://huggingface.co/digitous/13B-Chimera) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_digitous__13B-Chimera\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-21T22:03:30.588181](https://huggingface.co/datasets/open-llm-leaderboard/details_digitous__13B-Chimera/blob/main/results_2023-10-21T22-03-30.588181.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2860738255033557,\n\
\ \"em_stderr\": 0.004628128039725735,\n \"f1\": 0.35844274328859277,\n\
\ \"f1_stderr\": 0.004563129120809242,\n \"acc\": 0.4397952815178321,\n\
\ \"acc_stderr\": 0.010144797366305785\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.2860738255033557,\n \"em_stderr\": 0.004628128039725735,\n\
\ \"f1\": 0.35844274328859277,\n \"f1_stderr\": 0.004563129120809242\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1068991660348749,\n \
\ \"acc_stderr\": 0.008510982565520481\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7726913970007893,\n \"acc_stderr\": 0.011778612167091088\n\
\ }\n}\n```"
repo_url: https://huggingface.co/digitous/13B-Chimera
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|arc:challenge|25_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_21T22_03_30.588181
path:
- '**/details_harness|drop|3_2023-10-21T22-03-30.588181.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-21T22-03-30.588181.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_21T22_03_30.588181
path:
- '**/details_harness|gsm8k|5_2023-10-21T22-03-30.588181.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-21T22-03-30.588181.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hellaswag|10_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T15:36:44.224352.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T15:36:44.224352.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T15:36:44.224352.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_21T22_03_30.588181
path:
- '**/details_harness|winogrande|5_2023-10-21T22-03-30.588181.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-21T22-03-30.588181.parquet'
- config_name: results
data_files:
- split: 2023_08_17T15_36_44.224352
path:
- results_2023-08-17T15:36:44.224352.parquet
- split: 2023_10_21T22_03_30.588181
path:
- results_2023-10-21T22-03-30.588181.parquet
- split: latest
path:
- results_2023-10-21T22-03-30.588181.parquet
---
# Dataset Card for Evaluation run of digitous/13B-Chimera
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/digitous/13B-Chimera
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [digitous/13B-Chimera](https://huggingface.co/digitous/13B-Chimera) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_digitous__13B-Chimera",
"harness_winogrande_5",
	split="latest")
```
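Split names follow the run timestamp, with `-` and `:` replaced by `_` (compare the run `2023-10-21T22:03:30.588181` with the split `2023_10_21T22_03_30.588181` in the configs above). A minimal sketch of that naming convention, inferred from the config listing rather than taken from any official API:

```python
def timestamp_to_split_name(ts: str) -> str:
    """Convert a run timestamp such as '2023-10-21T22:03:30.588181'
    into the split-name form '2023_10_21T22_03_30.588181'
    (dashes and colons become underscores; the fractional part is kept)."""
    date_part, time_part = ts.split("T")
    return date_part.replace("-", "_") + "T" + time_part.replace(":", "_")

print(timestamp_to_split_name("2023-10-21T22:03:30.588181"))
# -> 2023_10_21T22_03_30.588181
```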
## Latest results
These are the [latest results from run 2023-10-21T22:03:30.588181](https://huggingface.co/datasets/open-llm-leaderboard/details_digitous__13B-Chimera/blob/main/results_2023-10-21T22-03-30.588181.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split of the corresponding configuration):
```python
{
"all": {
"em": 0.2860738255033557,
"em_stderr": 0.004628128039725735,
"f1": 0.35844274328859277,
"f1_stderr": 0.004563129120809242,
"acc": 0.4397952815178321,
"acc_stderr": 0.010144797366305785
},
"harness|drop|3": {
"em": 0.2860738255033557,
"em_stderr": 0.004628128039725735,
"f1": 0.35844274328859277,
"f1_stderr": 0.004563129120809242
},
"harness|gsm8k|5": {
"acc": 0.1068991660348749,
"acc_stderr": 0.008510982565520481
},
"harness|winogrande|5": {
"acc": 0.7726913970007893,
"acc_stderr": 0.011778612167091088
}
}
```
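Once parsed, these results are a plain nested dict keyed by task name. For instance, reading the Winogrande accuracy shown above (a small sketch using the literal values from this card, not a live download):

```python
# Literal values copied from the "latest" results above (not fetched live)
results = {
    "harness|winogrande|5": {
        "acc": 0.7726913970007893,
        "acc_stderr": 0.011778612167091088,
    },
}

acc = results["harness|winogrande|5"]["acc"]
print(f"Winogrande 5-shot accuracy: {acc:.1%}")
```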
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
crylake/facesyntheticsspigacaptioned_9percent | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: spiga_seg
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 2720811186.0
num_examples: 9000
download_size: 2716728106
dataset_size: 2720811186.0
---
# Dataset Card for "facesyntheticsspigacaptioned_9percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jyshen/Chat_Suzumiya_extended | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: train
struct:
- name: context
sequence: string
- name: target
sequence: string
splits:
- name: train
num_bytes: 109757726
num_examples: 28612
download_size: 38545400
dataset_size: 109757726
---
# Dataset Card for "Chat_Suzumiya_extended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
confit/ravdess | ---
task_categories:
- audio-classification
dataset_info:
- config_name: fold1
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: emotion
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': calm
'2': happy
'3': sad
'4': angry
'5': fearful
'6': disgust
'7': surprised
splits:
- name: train
num_bytes: 937751877.24
num_examples: 2280
- name: test
num_bytes: 247086499.0
num_examples: 600
download_size: 649949169
dataset_size: 1184838376.24
- config_name: fold2
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: emotion
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': calm
'2': happy
'3': sad
'4': angry
'5': fearful
'6': disgust
'7': surprised
splits:
- name: train
num_bytes: 941178598.68
num_examples: 2280
- name: test
num_bytes: 242416331.0
num_examples: 600
download_size: 649810021
dataset_size: 1183594929.6799998
- config_name: fold3
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: emotion
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': calm
'2': happy
'3': sad
'4': angry
'5': fearful
'6': disgust
'7': surprised
splits:
- name: train
num_bytes: 936307789.08
num_examples: 2280
- name: test
num_bytes: 246688971.0
num_examples: 600
download_size: 650824120
dataset_size: 1182996760.08
- config_name: fold4
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: emotion
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': calm
'2': happy
'3': sad
'4': angry
'5': fearful
'6': disgust
'7': surprised
splits:
- name: train
num_bytes: 934992735.24
num_examples: 2280
- name: test
num_bytes: 248861587.0
num_examples: 600
download_size: 649424384
dataset_size: 1183854322.24
- config_name: fold5
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: emotion
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': calm
'2': happy
'3': sad
'4': angry
'5': fearful
'6': disgust
'7': surprised
splits:
- name: train
num_bytes: 986251792.8
num_examples: 2400
- name: test
num_bytes: 196270016.0
num_examples: 480
download_size: 650150538
dataset_size: 1182521808.8
configs:
- config_name: fold1
data_files:
- split: train
path: fold1/train-*
- split: test
path: fold1/test-*
- config_name: fold2
data_files:
- split: train
path: fold2/train-*
- split: test
path: fold2/test-*
- config_name: fold3
data_files:
- split: train
path: fold3/train-*
- split: test
path: fold3/test-*
- config_name: fold4
data_files:
- split: train
path: fold4/train-*
- split: test
path: fold4/test-*
- config_name: fold5
data_files:
- split: train
path: fold5/train-*
- split: test
path: fold5/test-*
tags:
- audio
- paralinguistics
- multiclass
- emotion
---
|
freshpearYoon/vr_train_free_29 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 6223204981
num_examples: 10000
download_size: 1022515410
dataset_size: 6223204981
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/murasame_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of murasame (Kantai Collection)
This is the dataset of murasame (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `long_hair, light_brown_hair, brown_eyes, breasts, red_eyes, ribbon, large_breasts, twintails, two_side_up, hair_ribbon, heterochromia, hair_ornament, hairclip`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 672.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/murasame_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 373.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/murasame_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1294 | 857.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/murasame_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 596.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/murasame_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1294 | 1.21 GiB | [Download](https://huggingface.co/datasets/CyberHarem/murasame_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/murasame_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
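The non-raw packs in the table above are listed as IMG+TXT. Assuming the common convention of pairing each image with a same-named `.txt` tag file (this pairing rule is an assumption, not documented above), a minimal sketch for iterating an extracted pack looks like:

```python
import os

def iter_pairs(dataset_dir):
    """Yield (image_path, tag_text) pairs from an extracted IMG+TXT pack.

    Assumes each image file has a sibling .txt file with the same stem
    containing its comma-separated tags.
    """
    for name in sorted(os.listdir(dataset_dir)):
        if name.endswith('.txt'):
            continue  # tag files are picked up via their image
        stem, _ = os.path.splitext(name)
        txt = os.path.join(dataset_dir, stem + '.txt')
        if os.path.exists(txt):
            with open(txt, encoding='utf-8') as f:
                tags = f.read().strip()
            yield os.path.join(dataset_dir, name), tags
```

This avoids any dependency on waifuc when you only need the image/caption pairs, e.g. for training pipelines that expect kohya-style folders.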
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, black_serafuku, black_skirt, pleated_skirt, red_neckerchief, solo, looking_at_viewer, blush, smile |
| 1 | 5 |  |  |  |  |  | 1girl, asymmetrical_clothes, beret, black_headwear, black_serafuku, black_skirt, hair_flaps, pleated_skirt, red_neckerchief, smile, solo, white_gloves, looking_at_viewer, simple_background, white_background, belt, white_sailor_collar |
| 2 | 5 |  |  |  |  |  | 1girl, anchor, black_serafuku, machinery, pleated_skirt, solo, chain, black_skirt, socks, blonde_hair, brown_hair, neckerchief, open_mouth, torpedo, very_long_hair |
| 3 | 29 |  |  |  |  |  | 1girl, hair_flaps, solo, looking_at_viewer, competition_swimsuit, covered_navel, blue_one-piece_swimsuit, cleavage, two-tone_swimsuit, simple_background, smile, white_background, highleg_swimsuit, twitter_username, collarbone, cowboy_shot, dated |
| 4 | 7 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, navel, sailor_bikini, solo, adapted_costume, smile, black_bikini, brown_hair, white_background, blush, collarbone, open_mouth, simple_background |
| 5 | 5 |  |  |  |  |  | cleavage, day, looking_at_viewer, medium_breasts, navel, outdoors, bikini_skirt, black_bikini, ocean, sailor_bikini, smile, water, cloud, collarbone, open_mouth, solo_focus, 1girl, 2girls, blonde_hair, blue_sky, groin, hair_between_eyes, very_long_hair, wading |
| 6 | 12 |  |  |  |  |  | 1girl, solo, blush, looking_at_viewer, panties, bra, cleavage, collarbone, navel, smile, white_background, simple_background, underwear_only, hair_between_eyes, heart, medium_breasts, twitter_username, cowboy_shot, very_long_hair |
| 7 | 5 |  |  |  |  |  | 1boy, 1girl, blush, solo_focus, cum_on_breasts, open_mouth, black_bikini, cleavage, ejaculation, paizuri_under_clothes, smile, sweat, collarbone, facial, looking_at_viewer, nipples, penis, pov, sailor_bikini |
| 8 | 11 |  |  |  |  |  | 1girl, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, strapless_leotard, cleavage, hair_flaps, looking_at_viewer, black_leotard, black_pantyhose, blush, bowtie, simple_background, wrist_cuffs, smile, white_background, alternate_costume, cowboy_shot, fishnets |
| 9 | 6 |  |  |  |  |  | 1girl, nipples, 1boy, cum, hair_flaps, hetero, nude, penis, solo_focus, tongue_out, blush, white_gloves, hair_between_eyes, heart, looking_at_viewer, mosaic_censoring, navel, pussy, sex, testicles, thighhighs, vaginal |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_serafuku | black_skirt | pleated_skirt | red_neckerchief | solo | looking_at_viewer | blush | smile | asymmetrical_clothes | beret | black_headwear | hair_flaps | white_gloves | simple_background | white_background | belt | white_sailor_collar | anchor | machinery | chain | socks | blonde_hair | brown_hair | neckerchief | open_mouth | torpedo | very_long_hair | competition_swimsuit | covered_navel | blue_one-piece_swimsuit | cleavage | two-tone_swimsuit | highleg_swimsuit | twitter_username | collarbone | cowboy_shot | dated | navel | sailor_bikini | adapted_costume | black_bikini | day | medium_breasts | outdoors | bikini_skirt | ocean | water | cloud | solo_focus | 2girls | blue_sky | groin | hair_between_eyes | wading | panties | bra | underwear_only | heart | 1boy | cum_on_breasts | ejaculation | paizuri_under_clothes | sweat | facial | nipples | penis | pov | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | black_leotard | black_pantyhose | bowtie | wrist_cuffs | alternate_costume | fishnets | cum | hetero | nude | tongue_out | mosaic_censoring | pussy | sex | testicles | thighhighs | vaginal |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------|:----------------|:------------------|:-------|:--------------------|:--------|:--------|:-----------------------|:--------|:-----------------|:-------------|:---------------|:--------------------|:-------------------|:-------|:----------------------|:---------|:------------|:--------|:--------|:--------------|:-------------|:--------------|:-------------|:----------|:-----------------|:-----------------------|:----------------|:--------------------------|:-----------|:--------------------|:-------------------|:-------------------|:-------------|:--------------|:--------|:--------|:----------------|:------------------|:---------------|:------|:-----------------|:-----------|:---------------|:--------|:--------|:--------|:-------------|:---------|:-----------|:--------|:--------------------|:---------|:----------|:------|:-----------------|:--------|:-------|:-----------------|:--------------|:------------------------|:--------|:---------|:----------|:--------|:------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:----------------|:------------------|:---------|:--------------|:--------------------|:-----------|:------|:---------|:-------|:-------------|:-------------------|:--------|:------|:------------|:-------------|:----------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 29 |  |  |  |  |  | X | | | | | X | X | | X | | | | X | | X | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | | | | X | X | X | X | | | | | | X | X | | | | | | | | X | | X | | | | | | X | | | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | | | X | | X | | | | | | | | | | | | | | X | | | X | | X | | | | X | | | | X | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 12 |  |  |  |  |  | X | | | | | X | X | X | X | | | | | | X | X | | | | | | | | | | | | X | | | | X | | | X | X | X | | X | | | | | X | | | | | | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | | | | X | X | X | | | | | | | | | | | | | | | | | X | | | | | | X | | | | X | | | | X | | X | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 8 | 11 |  |  |  |  |  | X | | | | | X | X | X | X | | | | X | | X | X | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 9 | 6 |  |  |  |  |  | X | | | | | | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | X | | | | | X | X | | | | | | X | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
lyakaap/balanced-cc100-ja | ---
license: mit
---
|
lansinuote/nlp.2.predict_middle_word | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 5711991
num_examples: 44279
- name: validation
num_bytes: 111069
num_examples: 861
- name: test
num_bytes: 229104
num_examples: 1776
download_size: 0
dataset_size: 6052164
---
# Dataset Card for "nlp.2.predict_middle_word"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sam-bha/un-general-assembly-votes-2000-2023 | ---
license: cc-by-nc-4.0
task_categories:
- tabular-regression
- tabular-classification
language:
- en
tags:
- politics
pretty_name: UN General Assembly Votes from 2000 to 2023
---
# UN General Assembly Votes from 2000 to 2023
The following is a cleaned and compiled version of all of the UN General Assembly votes from [the UN Digital Library](https://digitallibrary.un.org/), covering ~1800 resolutions and the votes of the 196 voting members.
Fields include **Title**, **Resolution Number**, and the actual votes.
The votes are in a dict format keyed by country name. Some countries have changed names over the period (such as Turkey -> Türkiye, Swaziland -> Eswatini), so we use the latest name each country has used as of 2023. One voting member country (Serbia and Montenegro) split into two voting member countries during the time period in question and is not considered. South Sudan, Serbia, and Montenegro only came into existence in the middle of the time period in question, so we treat them as not voting / null votes before they became voting members.
Please follow the [UN Digital Library terms of service](https://digitallibrary.un.org/pages/?ln=en&page=tos) (e.g., non-commercial use).
© United Nations, 2023, https://digitallibrary.un.org, downloaded on 10/29/2023 |
yangwang825/tnews | ---
task_categories:
- text-classification
language:
- en
viewer: true
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': news_story
'1': news_culture
'2': news_entertainment
'3': news_sports
'4': news_finance
'5': news_house
'6': news_car
'7': news_edu
'8': news_tech
'9': news_military
'10': news_travel
'11': news_world
'12': news_stock
'13': news_agriculture
'14': news_game
--- |
one-sec-cv12/chunk_37 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 17963857440.25
num_examples: 187030
download_size: 15529369715
dataset_size: 17963857440.25
---
# Dataset Card for "chunk_37"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |