datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
NikkoIGuess/NikkoDoesRandom_Ai | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- chemistry
pretty_name: 'NikkoDoesRandom '
size_categories:
- n>1T
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
leslyarun/c4_200m_gec_train100k_test25k | ---
language:
- en
source_datasets:
- allenai/c4
task_categories:
- text-generation
pretty_name: C4 200M Grammatical Error Correction Dataset
tags:
- grammatical-error-correction
---
# C4 200M
# Dataset Summary
C4 200M Sample Dataset, adapted from https://huggingface.co/datasets/liweili/c4_200m
C4_200M is a collection of 185 million sentence pairs generated from the cleaned English portion of C4. This dataset can be used for grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
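To illustrate how the pairs are typically consumed, here is a minimal plain-Python sketch based on the sample record shown in the Description below. The `grammar:` task prefix is our own illustrative choice for seq2seq-style GEC training, not part of the dataset:

```python
import json

# One record: an ungrammatical "input" sentence paired with its
# corrected "output" (field names from this card).
record = json.loads("""{
  "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.",
  "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}""")

# A common GEC setup treats correction as text generation: the model reads
# the (optionally prefixed) broken sentence and generates the fixed one.
# The "grammar: " prefix is a hypothetical convention, not dataset-defined.
source = "grammar: " + record["input"]
target = record["output"]
```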
# Description
As noted above, the full C4_200M dataset contains 185 million sentence pairs (this repository hosts a sample). Each example has two attributes: `input` and `output`. Here is a sample record:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says.",
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
``` |
shireesh-uop/nhs_classification | ---
dataset_info:
features:
- name: label
dtype: string
- name: data
dtype: string
- name: idx
dtype: int64
splits:
- name: train
num_bytes: 3574265
num_examples: 27124
download_size: 1511963
dataset_size: 3574265
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TrainingDataPro/facial-emotion-recognition-dataset | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
- image-to-image
tags:
- code
dataset_info:
features:
- name: set_id
dtype: int32
- name: neutral
dtype: image
- name: anger
dtype: image
- name: contempt
dtype: image
- name: disgust
dtype: image
- name: fear
dtype: image
- name: happy
dtype: image
- name: sad
dtype: image
- name: surprised
dtype: image
- name: age
dtype: int8
- name: gender
dtype: string
- name: country
dtype: string
splits:
- name: train
num_bytes: 22981
num_examples: 19
download_size: 453786356
dataset_size: 22981
---
# Facial Emotion Recognition Dataset
The dataset consists of images capturing people displaying **7 distinct emotions** (*anger, contempt, disgust, fear, happiness, sadness and surprise*). Each image in the dataset represents one of these specific emotions, enabling researchers and machine learning practitioners to study and develop models for emotion recognition and analysis.
The images encompass a diverse range of individuals, including different *genders, ethnicities, and age groups*. The dataset aims to provide a comprehensive representation of human emotions, allowing for a wide range of use cases.
### The dataset's possible applications:
- automatic emotion detection
- mental health analysis
- artificial intelligence (AI) and computer vision
- entertainment industries
- advertising and market research
- security and surveillance

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=facial-emotion-recognition-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **images**: folders, one per person, each containing images of the 8 depicted states (the 7 emotions above plus a neutral expression); each file is named after the expressed emotion
- **.csv** file: contains information about people in the dataset
### Emotions in the dataset:
- anger
- contempt
- disgust
- fear
- happy
- sad
- surprised
### The .csv file
The file includes the following information for each set of media files:
- **set_id**: id of the set of images,
- **gender**: gender of the person,
- **age**: age of the person,
- **country**: country of the person
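The card does not specify the exact folder layout, so as a hypothetical sketch (the `images/<set_id>/<emotion>.jpg` convention below is our assumption, not documented), the per-set image paths implied by this card's 8 image fields could be enumerated like this:

```python
from pathlib import Path

# The 8 image fields listed in this card's metadata (7 emotions + neutral).
EMOTIONS = ["neutral", "anger", "contempt", "disgust",
            "fear", "happy", "sad", "surprised"]

def image_paths(root: str, set_id: int) -> dict:
    """Map each emotion to its expected image path for one set.

    The images/<set_id>/<emotion>.jpg layout is an assumption made for
    illustration; adjust it to the actual folder structure you receive.
    """
    base = Path(root) / "images" / str(set_id)
    return {emotion: base / f"{emotion}.jpg" for emotion in EMOTIONS}

paths = image_paths(".", 1)
```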
# Images for facial emotion recognition can be collected in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=facial-emotion-recognition-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
CLEAR-Global/Gamayun-kits | ---
task_categories:
- translation
language:
- ha
- kr
- en
- fr
- sw
- swc
- ln
- nnd
- rhg
- ti
size_categories:
- 10K<n<100K
pretty_name: Gamayun kits
---
# Gamayun Language Data Kits
There are more than 7,000 languages in the world, yet only a small proportion of them have publicly available language data. CLEAR Global's Gamayun kits are a starting point for developing audio and text corpora for languages without pre-existing data resources. We create parallel data for a language by translating a pre-compiled set of general-domain sentences in English. If audio data is needed, the translated sentences are recorded by native speakers.
To scale corpus production, we offer four dataset versions:
- Mini-kit of 5,000 sentences (`kit5k`)
- Small-kit of 10,000 sentences (`kit10k`)
- Medium-kit of 15,000 sentences (`kit15k`)
- Large-kit of 30,000 sentences (`kit30k`)
For audio corpora developed using these kits, refer to the official initiative website, the [Gamayun portal](https://gamayun.translatorswb.org/data/).
## Source sentences (`core`)
Sentences in the `core` directory are in English, French, and Spanish, and are sourced from the [Tatoeba repository](https://tatoeba.org). The sentence selection algorithm ensures representation of the most frequently used words in each language. For more information, please refer to the [corepus-gen repository](https://github.com/translatorswb/corepus-gen). The `etc` directories contain sentence IDs as used in the Tatoeba corpus.
## Parallel corpora (`parallel`)
Translations of the kits are performed by professionals and volunteers from TWB's translator community. The translated kits completed so far are:
| Language | Pair | # Segments | Source |
|------|--------|--------|--------|
| Hausa | English | 15,000 | Tatoeba |
| Kanuri | English | 5,000 | Tatoeba |
| Nande | French | 15,000 | Tatoeba |
| Rohingya | English | 5,000 | Tatoeba |
| Swahili (Coastal) | English | 5,000 | Tatoeba |
| Swahili (Congolese) | French | 25,302 | Tatoeba |
## Reference
More on [Gamayun, language equity initiative](https://translatorswithoutborders.org/gamayun/)
Gamayun kits are officially published in the [Gamayun portal](https://gamayun.translatorswb.org/data/). Conditions for use are described in `LICENSE.txt`.
If you need to cite Gamayun kits:
```
Alp Öktem, Muhannad Albayk Jaam, Eric DeLuca, Grace Tang
Gamayun – Language Technology for Humanitarian Response
In: 2020 IEEE Global Humanitarian Technology Conference (GHTC)
2020 October 29 - November 1; Virtual.
Link: https://ieeexplore.ieee.org/document/9342939
``` |
Mitsuki-Sakamoto/alpaca_farm-deberta-re-preference-64-nsample-16_filter_gold_thr_0.0_self_70m | ---
dataset_info:
config_name: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: index
dtype: int64
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: filtered_epoch
dtype: int64
- name: gen_reward
dtype: float64
- name: gen_response
dtype: string
splits:
- name: epoch_0
num_bytes: 43497434
num_examples: 18928
- name: epoch_1
num_bytes: 44355307
num_examples: 18928
- name: epoch_2
num_bytes: 44429044
num_examples: 18928
- name: epoch_3
num_bytes: 44454073
num_examples: 18928
- name: epoch_4
num_bytes: 44459094
num_examples: 18928
- name: epoch_5
num_bytes: 44477699
num_examples: 18928
- name: epoch_6
num_bytes: 44479423
num_examples: 18928
- name: epoch_7
num_bytes: 44487040
num_examples: 18928
- name: epoch_8
num_bytes: 44493050
num_examples: 18928
- name: epoch_9
num_bytes: 44497058
num_examples: 18928
download_size: 683566317
dataset_size: 443629222
configs:
- config_name: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: epoch_0
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_0-*
- split: epoch_1
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_1-*
- split: epoch_2
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_2-*
- split: epoch_3
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_3-*
- split: epoch_4
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_4-*
- split: epoch_5
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_5-*
- split: epoch_6
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_6-*
- split: epoch_7
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_7-*
- split: epoch_8
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_8-*
- split: epoch_9
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_9-*
---
|
open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16 | ---
pretty_name: Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T19:12:34.050776](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16/blob/main/results_2023-10-15T19-12-34.050776.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03544463087248322,\n\
\ \"em_stderr\": 0.0018935573437954016,\n \"f1\": 0.08440436241610706,\n\
\ \"f1_stderr\": 0.002470333585036359,\n \"acc\": 0.2841357537490134,\n\
\ \"acc_stderr\": 0.0069604360550053574\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.03544463087248322,\n \"em_stderr\": 0.0018935573437954016,\n\
\ \"f1\": 0.08440436241610706,\n \"f1_stderr\": 0.002470333585036359\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5682715074980268,\n\
\ \"acc_stderr\": 0.013920872110010715\n }\n}\n```"
repo_url: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T19_12_34.050776
path:
- '**/details_harness|drop|3_2023-10-15T19-12-34.050776.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T19-12-34.050776.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T19_12_34.050776
path:
- '**/details_harness|gsm8k|5_2023-10-15T19-12-34.050776.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T19-12-34.050776.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T19_12_34.050776
path:
- '**/details_harness|winogrande|5_2023-10-15T19-12-34.050776.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T19-12-34.050776.parquet'
- config_name: results
data_files:
- split: 2023_10_15T19_12_34.050776
path:
- results_2023-10-15T19-12-34.050776.parquet
- split: latest
path:
- results_2023-10-15T19-12-34.050776.parquet
---
# Dataset Card for Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T19:12:34.050776](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16/blob/main/results_2023-10-15T19-12-34.050776.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.03544463087248322,
"em_stderr": 0.0018935573437954016,
"f1": 0.08440436241610706,
"f1_stderr": 0.002470333585036359,
"acc": 0.2841357537490134,
"acc_stderr": 0.0069604360550053574
},
"harness|drop|3": {
"em": 0.03544463087248322,
"em_stderr": 0.0018935573437954016,
"f1": 0.08440436241610706,
"f1_stderr": 0.002470333585036359
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5682715074980268,
"acc_stderr": 0.013920872110010715
}
}
```
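As a sanity check, the aggregate `acc` above appears to be the unweighted mean of the per-task accuracies (only gsm8k and winogrande report `acc`; drop reports `em`/`f1` instead). A small sketch:

```python
# Per-task accuracies copied from the results above.
task_acc = {
    "harness|gsm8k|5": 0.0,
    "harness|winogrande|5": 0.5682715074980268,
}

# The unweighted mean over the acc-reporting tasks reproduces
# the "all" -> "acc" value of 0.2841357537490134 above.
mean_acc = sum(task_acc.values()) / len(task_acc)
```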
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
hafsteinn/ice_and_fire | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- is
---
# Ice and Fire Comment Dataset
## Description
The Ice and Fire Dataset is a collection of comments from the Icelandic blog platform blog.is that have been annotated for several tasks.
## Dataset Structure
### Data Fields
- `annotator_id`: An integer identifier for the annotator who labeled the comment.
- `label`: The label assigned to the comment.
- `task_type`: The type of task the comment was annotated for (see paper).
- `show_blog_post`: A boolean indicating whether the annotator viewed the blog post in the annotation process.
- `show_preceding_comments`: A boolean indicating whether the annotator viewed preceding comments in the annotation process.
- `blog_title`: The title of the blog post associated with the comment.
- `blog_text`: The text of the blog post associated with the comment.
- `comment_body`: The body of the comment.
- `previous_comments`: A string containing all previous comments concatenated together, separated by " || ".
### Data Splits
This dataset is provided as a single CSV file, `ice_and_fire_huggingface_dataset.csv`, without predefined training, validation, or test splits due to the size and label distribution. Users are encouraged to create their own splits as needed for their specific tasks or to use cross-validation for benchmarking.
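As a sketch of one way to create such splits, here is a seeded shuffle with an 80/20 cut. The field names follow this card, but the rows (and the label values) below are made up purely for illustration:

```python
import csv
import io
import random

# Toy in-memory rows mirroring a few of the documented fields; in practice,
# read ice_and_fire_huggingface_dataset.csv instead. Label values here are
# placeholders, not actual labels from the dataset.
csv_text = """annotator_id,label,task_type,comment_body
1,label_a,example_task,Frábær færsla!
2,label_b,example_task,Þetta er rangt.
3,label_a,example_task,Mjög gott.
4,label_b,example_task,Ósammála.
"""
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Seeded shuffle, then an 80/20 train/test cut.
random.seed(0)
random.shuffle(rows)
cut = int(0.8 * len(rows))
train, test = rows[:cut], rows[cut:]
```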
### Citation Information
If you use the Ice and Fire Dataset in your research, please cite it as follows:
TODO |
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-c51db7-51930145327 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: Alred/t5-small-finetuned-summarization-cnn
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Alred/t5-small-finetuned-summarization-cnn
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MaryYarova](https://huggingface.co/MaryYarova) for evaluating this model. |
ostapeno/tulu_v2_cot_subset | ---
dataset_info:
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: messages
list:
- name: role
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 57705790
num_examples: 50000
download_size: 25971959
dataset_size: 57705790
---
# Dataset Card for "tulu_v2_cot_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ctang/formatted_util_deontology_for_llama2_v2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 26907365
num_examples: 30471
download_size: 3740261
dataset_size: 26907365
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sbunlp/hmblogs-v3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 45957987986
num_examples: 16896817
download_size: 21312867175
dataset_size: 45957987986
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- fa
pretty_name: 'HmBlogs: A big general Persian corpus'
size_categories:
- 10M<n<100M
---
# HmBlogs: A big general Persian corpus
HmBlogs is a general Persian corpus collected from nearly 20 million blog posts over a period of 15 years, containing 6.8 billion tokens.
This version is the **preprocessed version** of the dataset, prepared by the original authors and converted to a format that integrates with 🤗 Datasets.
To access the raw versions, visit the official link at http://nlplab.sbu.ac.ir/hmBlogs-v3 .
**Paper:** https://arxiv.org/abs/2111.02362 <br>
**Authors:** Hamzeh Motahari Khansari, Mehrnoush Shamsfard <br>
**Original Link:** http://nlplab.sbu.ac.ir/hmBlogs-v3/<br>
## Usage
This dataset can be used for masked or causal language modeling. You can load it as shown below:
```python
from datasets import load_dataset
# Load the whole dataset
dataset = load_dataset("sbunlp/hmblogs-v3", split="train")
# Load a portion by %
dataset = load_dataset("sbunlp/hmblogs-v3", split="train[:50%]")
# Load a custom shard
dataset = load_dataset("sbunlp/hmblogs-v3", data_files=["data/train-00000-of-00046.parquet", "data/train-00001-of-00046.parquet"])
```
# Citation
```cite
@article{DBLP:journals/corr/abs-2111-02362,
author = {Hamzeh Motahari Khansari and
Mehrnoush Shamsfard},
title = {HmBlogs: {A} big general Persian corpus},
journal = {CoRR},
volume = {abs/2111.02362},
year = {2021},
url = {https://arxiv.org/abs/2111.02362},
eprinttype = {arXiv},
eprint = {2111.02362},
timestamp = {Fri, 05 Nov 2021 15:25:54 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-02362.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
CyberHarem/hougetsu_shimamura_adachitoshimamura | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Hougetsu Shimamura
This is the dataset of Hougetsu Shimamura, containing 550 images and their tags.
Images were crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 550 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 1263 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 1370 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 550 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 550 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 550 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 1263 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 1263 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 1087 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 1370 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 1370 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
steven1116/ninespecies_exclude_honeybee | ---
license: apache-2.0
---
|
tasksource/med | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: gold_label
dtype: string
- name: genre
dtype: string
splits:
- name: train
num_bytes: 532705
num_examples: 4068
download_size: 146614
dataset_size: 532705
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "med"
The crowdsourced (i.e., original) part of the MED dataset for Monotonicity Entailment.
https://github.com/verypluming/MED
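A minimal sketch of the record layout (`sentence1` / `sentence2` / `gold_label` / `genre`). The example pairs and field values below are illustrative, not drawn from the dataset; MED targets inferences driven by monotonicity, e.g. downward-entailing contexts like *every*:

```python
from collections import Counter

# Toy records mirroring the med schema; values are made up for illustration.
examples = [
    # "every" is downward monotone in its restrictor, so narrowing
    # "dog" to "small dog" preserves truth.
    {"sentence1": "Every dog barked.",
     "sentence2": "Every small dog barked.",
     "gold_label": "entailment", "genre": "crowd"},
    # "some" is upward monotone, so the narrowed version is not entailed.
    {"sentence1": "Some dogs barked.",
     "sentence2": "Some small dogs barked.",
     "gold_label": "neutral", "genre": "crowd"},
]
label_counts = Counter(ex["gold_label"] for ex in examples)
```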
```
@inproceedings{yanaka-etal-2019-neural,
title = "Can Neural Networks Understand Monotonicity Reasoning?",
author = "Yanaka, Hitomi and
Mineshima, Koji and
Bekki, Daisuke and
Inui, Kentaro and
Sekine, Satoshi and
Abzianidze, Lasha and
Bos, Johan",
booktitle = "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
year = "2019",
pages = "31--40",
}
``` |
qbaro/speech2text | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 1357744185
num_examples: 1057
- name: test
num_bytes: 589556544
num_examples: 464
download_size: 1949997840
dataset_size: 1947300729
---
# Dataset Card for "speech2text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sfsdfsafsddsfsdafsa/MovieLLM-raw-data | ---
license: mit
---
|
arthurmluz/cstnews_data-xlsum_gptextsum2_results | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: summary
dtype: string
- name: gen_summary
dtype: string
- name: rouge
struct:
- name: rouge1
dtype: float64
- name: rouge2
dtype: float64
- name: rougeL
dtype: float64
- name: rougeLsum
dtype: float64
- name: bert
struct:
- name: f1
sequence: float64
- name: hashcode
dtype: string
- name: precision
sequence: float64
- name: recall
sequence: float64
- name: moverScore
dtype: float64
splits:
- name: validation
num_bytes: 59919
num_examples: 16
download_size: 59830
dataset_size: 59919
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "cstnews_data-xlsum_gptextsum2_results"
Scores on the validation split:
- **ROUGE**: rouge1 = 0.5251493615673016, rouge2 = 0.2936121215948489, rougeL = 0.35087788149320814, rougeLsum = 0.35087788149320814
- **BERTScore**: precision = 0.7674689218401909, recall = 0.8024204447865486, f1 = 0.7838323190808296
- **MoverScore**: 0.6346333578747139 |
lazybear17/ShapeColor_33_500 | ---
size_categories:
- 1K<n<10K
--- |
liuyanchen1015/MULTI_VALUE_rte_say_complementizer | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 298321
num_examples: 627
- name: train
num_bytes: 286475
num_examples: 601
download_size: 381820
dataset_size: 584796
---
# Dataset Card for "MULTI_VALUE_rte_say_complementizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Seenka/direvtv-test | ---
dataset_info:
features:
- name: image
dtype: image
- name: timestamp
dtype: int64
- name: video_storage_path
dtype: string
splits:
- name: train
num_bytes: 14771526.0
num_examples: 50
download_size: 9696484
dataset_size: 14771526.0
---
# Dataset Card for "direvtv-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/cirno_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of cirno/ちるの/치르노 (Touhou)
This is the dataset of cirno/ちるの/치르노 (Touhou), containing 500 images and their tags.
The core tags of this character are `blue_hair, short_hair, bow, hair_bow, wings, blue_eyes, ice_wings, blue_bow, ribbon, bangs, hair_between_eyes, red_ribbon, neck_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 740.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 397.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1223 | 859.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 642.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1223 | 1.22 GiB | [Download](https://huggingface.co/datasets/CyberHarem/cirno_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/cirno_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, :d, blue_dress, blush, cowboy_shot, ice, looking_at_viewer, open_mouth, puffy_short_sleeves, simple_background, solo, white_background, white_shirt, breasts, collared_shirt, standing |
| 1 | 6 |  |  |  |  |  | 1girl, blue_dress, closed_mouth, collared_shirt, ice, looking_at_viewer, puffy_short_sleeves, simple_background, solo, white_background, white_shirt, blush, pinafore_dress, cowboy_shot, smile |
| 2 | 8 |  |  |  |  |  | 1girl, blue_dress, ice, looking_at_viewer, puffy_short_sleeves, solo, white_background, simple_background, shirt, upper_body, smile |
| 3 | 7 |  |  |  |  |  | 1girl, blue_dress, full_body, ice, looking_at_viewer, open_mouth, solo, white_socks, puffy_short_sleeves, white_shirt, :d, blush, black_footwear, mary_janes, pinafore_dress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | :d | blue_dress | blush | cowboy_shot | ice | looking_at_viewer | open_mouth | puffy_short_sleeves | simple_background | solo | white_background | white_shirt | breasts | collared_shirt | standing | closed_mouth | pinafore_dress | smile | shirt | upper_body | full_body | white_socks | black_footwear | mary_janes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----|:-------------|:--------|:--------------|:------|:--------------------|:-------------|:----------------------|:--------------------|:-------|:-------------------|:--------------|:----------|:-----------------|:-----------|:---------------|:-----------------|:--------|:--------|:-------------|:------------|:--------------|:-----------------|:-------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | X | X | X | X | X | | X | X | X | X | X | | X | | X | X | X | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | X | | | X | X | | X | X | X | X | | | | | | | X | X | X | | | | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | X | | X | | | | | X | | | | X | X | X | X |
|
HustonMatthew/LenghtPrediction | ---
license: cc
---
|
Hemanth-thunder/ocr-data-tnpsc | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 12574068
num_examples: 9217
download_size: 4400902
dataset_size: 12574068
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
pretty_name: Public Tamil Nadu old School Books and Tnpsc Content (English)
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- ta
tags:
- ocr
- tnpsc
- tamil
- chemistry
- biology
- finance
- medical
---
# Tamil Public Domain Books (Tamil)
The dataset comprises over 30 school textbooks and certain TNPSC (Tamil Nadu Public Service Commission) materials in Tamil medium, presumed to be in the public domain. |
ekolasky/RelevantTextForCustomLEDForQA650 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: start_positions
sequence: int64
- name: end_positions
sequence: int64
- name: global_attention_mask
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 36495790
num_examples: 586
- name: validation
num_bytes: 4341131
num_examples: 65
download_size: 4313316
dataset_size: 40836921
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
mohammedriza-rahman/conll2003 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: conll-2003
pretty_name: CoNLL-2003
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
- name: chunk_tags
sequence:
class_label:
names:
'0': O
'1': B-ADJP
'2': I-ADJP
'3': B-ADVP
'4': I-ADVP
'5': B-CONJP
'6': I-CONJP
'7': B-INTJ
'8': I-INTJ
'9': B-LST
'10': I-LST
'11': B-NP
'12': I-NP
'13': B-PP
'14': I-PP
'15': B-PRT
'16': I-PRT
'17': B-SBAR
'18': I-SBAR
'19': B-UCP
'20': I-UCP
'21': B-VP
'22': I-VP
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: conll2003
splits:
- name: train
num_bytes: 6931345
num_examples: 14041
- name: validation
num_bytes: 1739223
num_examples: 3250
- name: test
num_bytes: 1582054
num_examples: 3453
download_size: 982975
dataset_size: 10252622
train-eval-index:
- config: conll2003
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# Dataset Card for "conll2003"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
### Dataset Summary
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.
The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2
tagging scheme, whereas the original dataset uses IOB1.
For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
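As an illustrative sketch of the raw four-column file format described above (the rows below are made up for this example and use the original IOB1-style tags of the raw files), parsing one sentence could look like:

```python
# Illustrative rows in the CoNLL-2003 four-column layout:
# word, POS tag, syntactic chunk tag, named entity tag.
raw_sentence = """U.N. NNP I-NP I-ORG
official NN I-NP O
Ekeus NNP I-NP I-PER
heads VBZ I-VP O
BICS NNP I-NP I-ORG
. . O O"""

tokens, ner_tags = [], []
for line in raw_sentence.splitlines():
    word, pos, chunk, ner = line.split()  # one token per line, four columns
    tokens.append(word)
    ner_tags.append(ner)

print(tokens)    # ['U.N.', 'official', 'Ekeus', 'heads', 'BICS', '.']
print(ner_tags)  # ['I-ORG', 'O', 'I-PER', 'O', 'I-ORG', 'O']
```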
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### conll2003
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
An example of 'train' looks as follows.
```
{
"chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
"id": "0",
"ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
"tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```
The original data files contain `-DOCSTART-` lines that act as boundaries between two different documents; these lines are filtered out in this implementation.
### Data Fields
The data fields are the same among all splits.
#### conll2003
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
'WP': 44, 'WP$': 45, 'WRB': 46}
```
- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```
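As a small sketch (with a made-up example record), the integer `ner_tags` can be mapped back to their string labels using the tagset above:

```python
# NER tagset from the card, in index order.
ner_names = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']

# A made-up example in the same shape as a dataset record.
example = {
    "tokens": ["EU", "rejects", "German", "call"],
    "ner_tags": [3, 0, 7, 0],
}

# Pair each token with its decoded label.
labeled = [(tok, ner_names[tag]) for tok, tag in zip(example["tokens"], example["ner_tags"])]
print(labeled)  # [('EU', 'B-ORG'), ('rejects', 'O'), ('German', 'B-MISC'), ('call', 'O')]
```

When the dataset is loaded with the `datasets` library, the same mapping is also available via the `ClassLabel` feature (e.g. `ds["train"].features["ner_tags"].feature.int2str`).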
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
### Citation Information
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
```
### Contributions
Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
infgrad/retrieval_data_llm | ---
license: mit
language:
- zh
size_categories:
- 100K<n<1M
---
Retrieval training data with hard negatives; about 200k examples in total.
File format: jsonl. A single-line example:
```
{"Query": "大熊猫的饮食习性", "Positive Document": "大熊猫主要以竹子为食,但也会吃水果和小型动物。它们拥有强壮的颌部和牙齿,能够咬碎竹子坚硬的外壳。", "Hard Negative Document": "老虎是肉食性动物,主要捕食鹿、野猪等大型动物。它们的牙齿和爪子非常锋利,是捕猎的利器。"}
``` |
gguichard/wsd_fr_wngt_semcor_translated_aligned | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: wn_sens
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 120127351.96891159
num_examples: 167549
- name: test
num_bytes: 6322945.031088406
num_examples: 8819
download_size: 35442307
dataset_size: 126450297.0
---
# Dataset Card for "wsd_fr_wngt_semcor_translated_aligned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Venki-ds/test-my-alpaca-llama2-1k | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 668749
num_examples: 1000
download_size: 412751
dataset_size: 668749
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
unknown12367556/43590439 | ---
license: afl-3.0
---
|
autoevaluate/autoeval-staging-eval-project-xsum-6cd6bf3a-11245505 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: ARTeLab/it5-summarization-ilpost
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-ilpost
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@dishant16](https://huggingface.co/dishant16) for evaluating this model. |
ImperialIndians23/nlp_cw_data_unprocessed | ---
dataset_info:
features:
- name: par_id
dtype: string
- name: community
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2520387
num_examples: 8375
- name: valid
num_bytes: 616626
num_examples: 2094
download_size: 1979627
dataset_size: 3137013
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
heliosprime/twitter_dataset_1713164754 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 8822
num_examples: 21
download_size: 12027
dataset_size: 8822
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713164754"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davanstrien/haiku-kto-raw-argilla | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for haiku-kto-raw-argilla
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("davanstrien/haiku-kto-raw-argilla")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("davanstrien/haiku-kto-raw-argilla")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| prompt | Haiku prompt | text | True | True |
| completion | Haiku | text | True | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | Do you like this haiku? | label_selection | True | Classify the text by selecting the correct label from the given list of labels. | ['Yes', 'No'] |
The **suggestions** are human- or machine-generated recommendations for each question to assist the annotator during the annotation process. They are always linked to the existing questions and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the value(s) of the suggestion and its metadata, respectively. The possible values are therefore the same as in the table above, but the column name is appended with "-suggestion" and the metadata with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give annotators extra context: for example, a link to the original source of the record, or details such as the author, the date, or the source. The metadata is always optional, and can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"completion": "Iceberg, silent threat\nDeceptive beauty, hidden\nSinking ships, cold death",
"prompt": "Can you write a haiku that describes the danger of an iceberg?"
},
"metadata": {
"generation_model": "NousResearch/Nous-Hermes-2-Yi-34B",
"prompt": "Can you write a haiku that describes the danger of an iceberg?"
},
"responses": [],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"completion": "Iceberg, silent threat\nDeceptive beauty, hidden\nSinking ships, cold death",
"external_id": null,
"label": [],
"label-suggestion": null,
"label-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"metadata": "{\"prompt\": \"Can you write a haiku that describes the danger of an iceberg?\", \"generation_model\": \"NousResearch/Nous-Hermes-2-Yi-34B\"}",
"prompt": "Can you write a haiku that describes the danger of an iceberg?"
}
```
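Note that in the `datasets` version the `metadata` field arrives serialized as a JSON string; a minimal sketch of recovering it (using the record shown above) is:

```python
import json

# The `metadata` field is a JSON string in the HuggingFace `datasets` version.
record = {
    "metadata": '{"prompt": "Can you write a haiku that describes the danger of an iceberg?", "generation_model": "NousResearch/Nous-Hermes-2-Yi-34B"}',
}
meta = json.loads(record["metadata"])  # parse back into a dictionary
print(meta["generation_model"])  # NousResearch/Nous-Hermes-2-Yi-34B
```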
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **prompt** is of type `text`.
* **completion** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **label** is of type `label_selection` with the following allowed values ['Yes', 'No'], and description "Classify the text by selecting the correct label from the given list of labels.".
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **label-suggestion** is of type `label_selection` with the following allowed values ['Yes', 'No'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record, which can be useful to give annotators extra context: for example, a link to the original source of the record, or details such as the author, the date, or the source. It can potentially be linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Do you like this haiku?
Yes or no?
A vibes only assessment is fine!
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
GEM-submissions/lewtun__hugging-face-test-t5-base.outputs.json-36bf2a59__1645800191 | ---
benchmark: gem
type: prediction
submission_name: Hugging Face test T5-base.outputs.json 36bf2a59
---
|
crumb/Clean-Instruct-440k | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 650842125.0
num_examples: 443612
download_size: 357775511
dataset_size: 650842125.0
license: mit
task_categories:
- conversational
language:
- en
---
# Dataset Card for "Clean-Instruct"
[yahma/alpaca-cleaned](https://hf.co/datasets/yahma/alpaca-cleaned) + [crumb/gpt4all-clean](https://hf.co/datasets/crumb/gpt4all-clean) + GPTeacher-Instruct-Dedup
It isn't perfect, but it's 443k high-quality, semi-cleaned instructions without "As an AI language model".
```python
from datasets import load_dataset

dataset = load_dataset("crumb/clean-instruct", split="train")

def promptify(example):
    # Fold the three columns into a single tagged text field;
    # records with an empty `input` drop the <input> segment.
    if example['input'] != '':
        return {"text": f"<instruction> {example['instruction']} <input> {example['input']} <output> {example['output']}"}
    return {"text": f"<instruction> {example['instruction']} <output> {example['output']}"}

dataset = dataset.map(promptify, batched=False)
dataset = dataset.remove_columns(["instruction", "input", "output"])
``` |
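For reference, the `promptify` mapping above yields text of the following shape; the sample record below is hypothetical, so no download is needed to see the format:

```python
def promptify(example):
    # Same mapping as in the snippet above: fold the columns into one tagged string.
    if example["input"] != "":
        return {"text": f"<instruction> {example['instruction']} <input> {example['input']} <output> {example['output']}"}
    return {"text": f"<instruction> {example['instruction']} <output> {example['output']}"}

# Hypothetical record mirroring the dataset's schema
sample = {"instruction": "Name a noble gas.", "input": "", "output": "Helium."}
print(promptify(sample)["text"])  # <instruction> Name a noble gas. <output> Helium.
```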
open-llm-leaderboard/details_PY007__TinyLlama-1.1B-intermediate-step-480k-1T | ---
pretty_name: Evaluation run of PY007/TinyLlama-1.1B-intermediate-step-480k-1T
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [PY007/TinyLlama-1.1B-intermediate-step-480k-1T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PY007__TinyLlama-1.1B-intermediate-step-480k-1T\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T09:15:17.830156](https://huggingface.co/datasets/open-llm-leaderboard/details_PY007__TinyLlama-1.1B-intermediate-step-480k-1T/blob/main/results_2023-10-24T09-15-17.830156.json) (note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0012583892617449664,\n\
\ \"em_stderr\": 0.0003630560893119088,\n \"f1\": 0.0418026426174498,\n\
\ \"f1_stderr\": 0.0011748218433740387,\n \"acc\": 0.2891570770949507,\n\
\ \"acc_stderr\": 0.007951591896761558\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.0003630560893119088,\n\
\ \"f1\": 0.0418026426174498,\n \"f1_stderr\": 0.0011748218433740387\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.00530705079605762,\n \
\ \"acc_stderr\": 0.0020013057209480613\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5730071033938438,\n \"acc_stderr\": 0.013901878072575057\n\
\ }\n}\n```"
repo_url: https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|arc:challenge|25_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_24T09_15_17.830156
path:
- '**/details_harness|drop|3_2023-10-24T09-15-17.830156.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T09-15-17.830156.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_24T09_15_17.830156
path:
- '**/details_harness|gsm8k|5_2023-10-24T09-15-17.830156.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T09-15-17.830156.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hellaswag|10_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T06-32-33.540256.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T06-32-33.540256.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T06-32-33.540256.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_24T09_15_17.830156
path:
- '**/details_harness|winogrande|5_2023-10-24T09-15-17.830156.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T09-15-17.830156.parquet'
- config_name: results
data_files:
- split: 2023_10_04T06_32_33.540256
path:
- results_2023-10-04T06-32-33.540256.parquet
- split: 2023_10_24T09_15_17.830156
path:
- results_2023-10-24T09-15-17.830156.parquet
- split: latest
path:
- results_2023-10-24T09-15-17.830156.parquet
---
# Dataset Card for Evaluation run of PY007/TinyLlama-1.1B-intermediate-step-480k-1T
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [PY007/TinyLlama-1.1B-intermediate-step-480k-1T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PY007__TinyLlama-1.1B-intermediate-step-480k-1T",
"harness_winogrande_5",
split="train")
```
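Because each run is stored under a timestamped split, the most recent run can also be selected programmatically instead of relying on the `latest` alias. A minimal sketch using the split names that appear in this card (the `pick_latest` helper is illustrative, not part of the `datasets` API):

```python
# Timestamped split names as they appear in this dataset card.
splits = [
    "2023_10_04T06_32_33.540256",
    "2023_10_24T09_15_17.830156",
]

def pick_latest(split_names):
    """Return the most recent timestamped split name.

    The names are ISO-8601-like (year first, zero-padded fields),
    so lexicographic order matches chronological order.
    """
    return max(split_names)

print(pick_latest(splits))  # 2023_10_24T09_15_17.830156
```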
## Latest results
These are the [latest results from run 2023-10-24T09:15:17.830156](https://huggingface.co/datasets/open-llm-leaderboard/details_PY007__TinyLlama-1.1B-intermediate-step-480k-1T/blob/main/results_2023-10-24T09-15-17.830156.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0012583892617449664,
"em_stderr": 0.0003630560893119088,
"f1": 0.0418026426174498,
"f1_stderr": 0.0011748218433740387,
"acc": 0.2891570770949507,
"acc_stderr": 0.007951591896761558
},
"harness|drop|3": {
"em": 0.0012583892617449664,
"em_stderr": 0.0003630560893119088,
"f1": 0.0418026426174498,
"f1_stderr": 0.0011748218433740387
},
"harness|gsm8k|5": {
"acc": 0.00530705079605762,
"acc_stderr": 0.0020013057209480613
},
"harness|winogrande|5": {
"acc": 0.5730071033938438,
"acc_stderr": 0.013901878072575057
}
}
```
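Each per-task entry in the JSON above is a flat mapping of metric names to floats, so per-task scores can be pulled out with a small dict comprehension. A sketch over a subset of the results shown above:

```python
# Subset of the latest results shown above.
results = {
    "all": {"em": 0.0012583892617449664,
            "f1": 0.0418026426174498,
            "acc": 0.2891570770949507},
    "harness|gsm8k|5": {"acc": 0.00530705079605762,
                        "acc_stderr": 0.0020013057209480613},
    "harness|winogrande|5": {"acc": 0.5730071033938438,
                             "acc_stderr": 0.013901878072575057},
}

# Per-task accuracy, skipping the aggregated "all" entry.
task_acc = {task: metrics["acc"]
            for task, metrics in results.items()
            if task != "all" and "acc" in metrics}
print(task_acc)
```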
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
---
pretty_name: Evaluation run of MSL7/INEX4-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MSL7/INEX4-7b](https://huggingface.co/MSL7/INEX4-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MSL7__INEX4-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-02T09:01:52.507914](https://huggingface.co/datasets/open-llm-leaderboard/details_MSL7__INEX4-7b/blob/main/results_2024-03-02T09-01-52.507914.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6531311496231127,\n\
\ \"acc_stderr\": 0.03203119305036496,\n \"acc_norm\": 0.6524432251753999,\n\
\ \"acc_norm_stderr\": 0.03270048450151107,\n \"mc1\": 0.5973072215422277,\n\
\ \"mc1_stderr\": 0.01716883093518721,\n \"mc2\": 0.7441900610335439,\n\
\ \"mc2_stderr\": 0.014429111949951435\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7090443686006825,\n \"acc_stderr\": 0.013273077865907593,\n\
\ \"acc_norm\": 0.7295221843003413,\n \"acc_norm_stderr\": 0.012980954547659556\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7134037044413464,\n\
\ \"acc_stderr\": 0.004512471612415587,\n \"acc_norm\": 0.8878709420434177,\n\
\ \"acc_norm_stderr\": 0.003148803246964289\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n\
\ \"acc_stderr\": 0.041153246103369526,\n \"acc_norm\": 0.6518518518518519,\n\
\ \"acc_norm_stderr\": 0.041153246103369526\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n\
\ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"\
acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n\
\ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.03240038086792747,\n\
\ \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.03240038086792747\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n\
\ \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42592592592592593,\n \"acc_stderr\": 0.025467149045469546,\n \"\
acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.025467149045469546\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7935483870967742,\n\
\ \"acc_stderr\": 0.023025899617188716,\n \"acc_norm\": 0.7935483870967742,\n\
\ \"acc_norm_stderr\": 0.023025899617188716\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n\
\ \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9119170984455959,\n \"acc_stderr\": 0.02045374660160103,\n\
\ \"acc_norm\": 0.9119170984455959,\n \"acc_norm_stderr\": 0.02045374660160103\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657266,\n\
\ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657266\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524572,\n \
\ \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524572\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886797,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886797\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"\
acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8440366972477065,\n \"acc_stderr\": 0.015555802713590167,\n \"\
acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.015555802713590167\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5138888888888888,\n \"acc_stderr\": 0.03408655867977749,\n \"\
acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.03408655867977749\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"\
acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601446,\n \
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601446\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159465,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159465\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.822477650063857,\n\
\ \"acc_stderr\": 0.013664230995834841,\n \"acc_norm\": 0.822477650063857,\n\
\ \"acc_norm_stderr\": 0.013664230995834841\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069363,\n\
\ \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069363\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4402234636871508,\n\
\ \"acc_stderr\": 0.01660256461504994,\n \"acc_norm\": 0.4402234636871508,\n\
\ \"acc_norm_stderr\": 0.01660256461504994\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.025261691219729484,\n\
\ \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.025261691219729484\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\
\ \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n\
\ \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7469135802469136,\n \"acc_stderr\": 0.024191808600712995,\n\
\ \"acc_norm\": 0.7469135802469136,\n \"acc_norm_stderr\": 0.024191808600712995\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5,\n \"acc_stderr\": 0.029827499313594685,\n \"acc_norm\"\
: 0.5,\n \"acc_norm_stderr\": 0.029827499313594685\n },\n \"harness|hendrycksTest-professional_law|5\"\
: {\n \"acc\": 0.4706649282920469,\n \"acc_stderr\": 0.012748238397365549,\n\
\ \"acc_norm\": 0.4706649282920469,\n \"acc_norm_stderr\": 0.012748238397365549\n\
\ },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\"\
: 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462923,\n \"\
acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462923\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.673202614379085,\n \"acc_stderr\": 0.018975427920507205,\n \
\ \"acc_norm\": 0.673202614379085,\n \"acc_norm_stderr\": 0.018975427920507205\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.025538433368578337,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.025538433368578337\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5973072215422277,\n\
\ \"mc1_stderr\": 0.01716883093518721,\n \"mc2\": 0.7441900610335439,\n\
\ \"mc2_stderr\": 0.014429111949951435\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8389897395422258,\n \"acc_stderr\": 0.010329712832785722\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7028051554207733,\n \
\ \"acc_stderr\": 0.012588685966624186\n }\n}\n```"
repo_url: https://huggingface.co/MSL7/INEX4-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|arc:challenge|25_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|gsm8k|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hellaswag|10_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-02T09-01-52.507914.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-02T09-01-52.507914.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- '**/details_harness|winogrande|5_2024-03-02T09-01-52.507914.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-02T09-01-52.507914.parquet'
- config_name: results
data_files:
- split: 2024_03_02T09_01_52.507914
path:
- results_2024-03-02T09-01-52.507914.parquet
- split: latest
path:
- results_2024-03-02T09-01-52.507914.parquet
---
# Dataset Card for Evaluation run of MSL7/INEX4-7b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MSL7/INEX4-7b](https://huggingface.co/MSL7/INEX4-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MSL7__INEX4-7b",
"harness_winogrande_5",
split="train")
```
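Each of the 63 configurations listed above follows the same naming pattern: the harness task string (e.g. `harness|hendrycksTest-abstract_algebra|5`) with every non-alphanumeric character replaced by an underscore. As a convenience, that mapping can be sketched in a few lines (this helper is illustrative, not an official API of the `datasets` library or the leaderboard):

```python
import re

def config_name(task: str) -> str:
    """Derive the dataset config name from a harness task string,
    e.g. 'harness|truthfulqa:mc|0' -> 'harness_truthfulqa_mc_0'.
    Inferred from the config list in this card, not an official API."""
    # Collapse every run of non-alphanumeric characters into one underscore.
    return re.sub(r"[^0-9a-zA-Z]+", "_", task)

print(config_name("harness|hendrycksTest-abstract_algebra|5"))
# -> harness_hendrycksTest_abstract_algebra_5
```

The resulting string can then be passed as the second argument to `load_dataset`, as in the example above.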
## Latest results
These are the [latest results from run 2024-03-02T09:01:52.507914](https://huggingface.co/datasets/open-llm-leaderboard/details_MSL7__INEX4-7b/blob/main/results_2024-03-02T09-01-52.507914.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each task's results in its timestamped splits and in the "latest" split of the corresponding configuration):
```python
{
"all": {
"acc": 0.6531311496231127,
"acc_stderr": 0.03203119305036496,
"acc_norm": 0.6524432251753999,
"acc_norm_stderr": 0.03270048450151107,
"mc1": 0.5973072215422277,
"mc1_stderr": 0.01716883093518721,
"mc2": 0.7441900610335439,
"mc2_stderr": 0.014429111949951435
},
"harness|arc:challenge|25": {
"acc": 0.7090443686006825,
"acc_stderr": 0.013273077865907593,
"acc_norm": 0.7295221843003413,
"acc_norm_stderr": 0.012980954547659556
},
"harness|hellaswag|10": {
"acc": 0.7134037044413464,
"acc_stderr": 0.004512471612415587,
"acc_norm": 0.8878709420434177,
"acc_norm_stderr": 0.003148803246964289
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.025467149045469546,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.025467149045469546
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.023025899617188716,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.023025899617188716
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9119170984455959,
"acc_stderr": 0.02045374660160103,
"acc_norm": 0.9119170984455959,
"acc_norm_stderr": 0.02045374660160103
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657266,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657266
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524572,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524572
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886797,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886797
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.03958027231121569,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.03958027231121569
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590167,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590167
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931792,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931792
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601446,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159465,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159465
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.822477650063857,
"acc_stderr": 0.013664230995834841,
"acc_norm": 0.822477650063857,
"acc_norm_stderr": 0.013664230995834841
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069363,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069363
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4402234636871508,
"acc_stderr": 0.01660256461504994,
"acc_norm": 0.4402234636871508,
"acc_norm_stderr": 0.01660256461504994
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.025261691219729484,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.025261691219729484
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7469135802469136,
"acc_stderr": 0.024191808600712995,
"acc_norm": 0.7469135802469136,
"acc_norm_stderr": 0.024191808600712995
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5,
"acc_stderr": 0.029827499313594685,
"acc_norm": 0.5,
"acc_norm_stderr": 0.029827499313594685
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4706649282920469,
"acc_stderr": 0.012748238397365549,
"acc_norm": 0.4706649282920469,
"acc_norm_stderr": 0.012748238397365549
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462923,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462923
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.673202614379085,
"acc_stderr": 0.018975427920507205,
"acc_norm": 0.673202614379085,
"acc_norm_stderr": 0.018975427920507205
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578337,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578337
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5973072215422277,
"mc1_stderr": 0.01716883093518721,
"mc2": 0.7441900610335439,
"mc2_stderr": 0.014429111949951435
},
"harness|winogrande|5": {
"acc": 0.8389897395422258,
"acc_stderr": 0.010329712832785722
},
"harness|gsm8k|5": {
"acc": 0.7028051554207733,
"acc_stderr": 0.012588685966624186
}
}
```
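The `"all"` entry at the top of the JSON aggregates the per-task scores; a macro-average of this kind is simply the unweighted mean of `"acc"` across tasks. The sketch below illustrates the computation on a small subset of the scores copied from the JSON above (it is not part of the evaluation harness itself):

```python
# Per-task accuracies, copied from a subset of the results JSON above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.33},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6518518518518519},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.7039473684210527},
}

def macro_avg_acc(per_task: dict) -> float:
    """Unweighted mean of 'acc' over all tasks in the mapping."""
    scores = [v["acc"] for v in per_task.values()]
    return sum(scores) / len(scores)

print(round(macro_avg_acc(results), 4))
# -> 0.5619
```

Running the same computation over all tasks in the JSON reproduces an aggregate in the spirit of the `"all"` block.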
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
786Vaibhav786/email_dataset_vb_1 | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 19568
num_examples: 10
download_size: 25225
dataset_size: 19568
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "email_dataset_vb_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Flyfer/CBTest3 | ---
license: apache-2.0
---
|
C-MTEB/CMNLI | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
dataset_info:
features:
- name: sent1
sequence: string
- name: sent2
sequence: string
- name: labels
sequence: int64
splits:
- name: validation
num_bytes: 1349125
num_examples: 1
download_size: 663026
dataset_size: 1349125
---
# Dataset Card for "CMNLI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ovior/twitter_dataset_1713178315 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2475527
num_examples: 7229
download_size: 1416086
dataset_size: 2475527
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lince | ---
paperswithcode_id: lince
pretty_name: Linguistic Code-switching Evaluation Dataset
dataset_info:
- config_name: lid_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 4745003
num_examples: 21030
- name: validation
num_bytes: 739950
num_examples: 3332
- name: test
num_bytes: 1337727
num_examples: 8289
download_size: 1188861
dataset_size: 6822680
- config_name: lid_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 1662284
num_examples: 4823
- name: validation
num_bytes: 268930
num_examples: 744
- name: test
num_bytes: 456850
num_examples: 1854
download_size: 432854
dataset_size: 2388064
- config_name: lid_msaea
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 3804156
num_examples: 8464
- name: validation
num_bytes: 490566
num_examples: 1116
- name: test
num_bytes: 590488
num_examples: 1663
download_size: 803806
dataset_size: 4885210
- config_name: lid_nepeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
splits:
- name: train
num_bytes: 2239014
num_examples: 8451
- name: validation
num_bytes: 351649
num_examples: 1332
- name: test
num_bytes: 620512
num_examples: 3228
download_size: 545342
dataset_size: 3211175
- config_name: pos_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: pos
sequence: string
splits:
- name: train
num_bytes: 5467832
num_examples: 27893
- name: validation
num_bytes: 840593
num_examples: 4298
- name: test
num_bytes: 1758626
num_examples: 10720
download_size: 819657
dataset_size: 8067051
- config_name: pos_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: pos
sequence: string
splits:
- name: train
num_bytes: 537541
num_examples: 1030
- name: validation
num_bytes: 80886
num_examples: 160
- name: test
num_bytes: 131192
num_examples: 299
download_size: 113872
dataset_size: 749619
- config_name: ner_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 9836312
num_examples: 33611
- name: validation
num_bytes: 2980990
num_examples: 10085
- name: test
num_bytes: 6530956
num_examples: 23527
download_size: 3075520
dataset_size: 19348258
- config_name: ner_msaea
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 3887684
num_examples: 10103
- name: validation
num_bytes: 431414
num_examples: 1122
- name: test
num_bytes: 367310
num_examples: 1110
download_size: 938671
dataset_size: 4686408
- config_name: ner_hineng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 474639
num_examples: 1243
- name: validation
num_bytes: 121403
num_examples: 314
- name: test
num_bytes: 185220
num_examples: 522
download_size: 141285
dataset_size: 781262
- config_name: sa_spaeng
features:
- name: idx
dtype: int32
- name: words
sequence: string
- name: lid
sequence: string
- name: sa
dtype: string
splits:
- name: train
num_bytes: 3587783
num_examples: 12194
- name: validation
num_bytes: 546692
num_examples: 1859
- name: test
num_bytes: 1349407
num_examples: 4736
download_size: 1031412
dataset_size: 5483882
---
# Dataset Card for "lince"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://ritual.uh.edu/lince](http://ritual.uh.edu/lince)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 9.09 MB
- **Size of the generated dataset:** 56.42 MB
- **Total amount of disk used:** 65.52 MB
### Dataset Summary
LinCE is a centralized Linguistic Code-switching Evaluation benchmark
(https://ritual.uh.edu/lince/) that contains data for training and evaluating
NLP systems on code-switching tasks.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### lid_hineng
- **Size of downloaded dataset files:** 0.43 MB
- **Size of the generated dataset:** 2.39 MB
- **Total amount of disk used:** 2.82 MB
An example of 'validation' looks as follows.
```
{
"idx": 0,
"lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed", "lang1", "lang1", "other"],
"words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I", "could", "have", "offered", "you", "some", "ironic", "chai-tea", "for", "it", ";)"]
}
```
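Each instance is token-aligned: the `lid` list carries one language tag per entry in `words`. A minimal sketch of pairing the two (reusing the validation example above — no download required):

```python
# Token/tag lists in LinCE LID configs are index-aligned: words[i] <-> lid[i].
example = {
    "idx": 0,
    "lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1",
            "lang1", "lang1", "lang1", "lang1", "lang1", "lang1", "mixed",
            "lang1", "lang1", "other"],
    "words": ["@ZahirJ", "@BinyavangaW", "Loved", "the", "ending", "!", "I",
              "could", "have", "offered", "you", "some", "ironic", "chai-tea",
              "for", "it", ";)"],
}

# Sanity check the alignment, then zip tokens with their tags.
assert len(example["words"]) == len(example["lid"])
pairs = list(zip(example["words"], example["lid"]))
```

The same pattern applies to the `ner` and `pos` configs, which add one extra aligned sequence per token.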
#### lid_msaea
- **Size of downloaded dataset files:** 0.81 MB
- **Size of the generated dataset:** 4.89 MB
- **Total amount of disk used:** 5.69 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"idx": 0,
"lid": ["ne", "lang2", "other", "lang2", "lang2", "other", "other", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "lang2", "other", "lang2", "lang2", "lang2", "ne", "lang2", "lang2"],
"words": "[\"علاء\", \"بخير\", \"،\", \"معنوياته\", \"كويسة\", \".\", \"..\", \"اسخف\", \"حاجة\", \"بس\", \"ان\", \"كل\", \"واحد\", \"منهم\", \"بييقى\", \"مقفول\", \"عليه\"..."
}
```
#### lid_nepeng
- **Size of downloaded dataset files:** 0.55 MB
- **Size of the generated dataset:** 3.21 MB
- **Total amount of disk used:** 3.75 MB
An example of 'validation' looks as follows.
```
{
"idx": 1,
"lid": ["other", "lang2", "lang2", "lang2", "lang2", "lang1", "lang1", "lang1", "lang1", "lang1", "lang2", "lang2", "other", "mixed", "lang2", "lang2", "other", "other", "other", "other"],
"words": ["@nirvikdada", "la", "hamlai", "bhetna", "paayeko", "will", "be", "your", "greatest", "gift", "ni", "dada", ";P", "#TreatChaiyo", "j", "hos", ";)", "@zappylily", "@AsthaGhm", "@ayacs_asis"]
}
```
#### lid_spaeng
- **Size of downloaded dataset files:** 1.18 MB
- **Size of the generated dataset:** 6.83 MB
- **Total amount of disk used:** 8.01 MB
An example of 'train' looks as follows.
```
{
"idx": 0,
"lid": ["other", "other", "lang1", "lang1", "lang1", "other", "lang1", "lang1"],
"words": ["11:11", ".....", "make", "a", "wish", ".......", "night", "night"]
}
```
#### ner_hineng
- **Size of downloaded dataset files:** 0.14 MB
- **Size of the generated dataset:** 0.79 MB
- **Total amount of disk used:** 0.92 MB
An example of 'train' looks as follows.
```
{
"idx": 1,
"lid": ["en", "en", "en", "en", "en", "en", "hi", "hi", "hi", "hi", "hi", "hi", "hi", "en", "en", "en", "en", "rest"],
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "O", "O", "O", "B-PERSON", "I-PERSON"],
"words": ["I", "liked", "a", "@YouTube", "video", "https://t.co/DmVqhZbdaI", "Kabhi", "Palkon", "Pe", "Aasoon", "Hai-", "Kishore", "Kumar", "-Vocal", "Cover", "By", "Stephen", "Qadir"]
}
```
### Data Fields
The data fields are the same among all splits.
#### lid_hineng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_msaea
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_nepeng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### lid_spaeng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
#### ner_hineng
- `idx`: a `int32` feature.
- `words`: a `list` of `string` features.
- `lid`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
### Data Splits
| name |train|validation|test|
|----------|----:|---------:|---:|
|lid_hineng| 4823| 744|1854|
|lid_msaea | 8464| 1116|1663|
|lid_nepeng| 8451| 1332|3228|
|lid_spaeng|21030| 3332|8289|
|ner_hineng| 1243| 314| 522|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{aguilar-etal-2020-lince,
title = "{L}in{CE}: A Centralized Benchmark for Linguistic Code-switching Evaluation",
author = "Aguilar, Gustavo and
Kar, Sudipta and
Solorio, Thamar",
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.223",
pages = "1803--1813",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
Note that each LinCE dataset also has its own citation. Please see [here](https://ritual.uh.edu/lince/datasets)
for the correct citation for each dataset.
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@gaguilar](https://github.com/gaguilar) for adding this dataset. |
marimeireles/scifi-corpus | ---
license: gpl-3.0
---
# scifi-corpus
This is a GPLv3-licensed sci-fi corpus for training LLMs.
See the original repo with issues, etc, [here](https://github.com/marimeireles/scifi-corpus)!
Essentially, this means you can use this corpus to do anything you want, as long as you make your code freely available for anyone else to do whatever they want with it.
Your code must also be licensed under GPLv3. Read more about free software [here](https://www.gnu.org/licenses/rms-why-gplv3.en.html).
⚠️ Several instructions are currently missing from the dataset; if you want to help, head over [here](https://github.com/marimeireles/scifi-corpus/issues/1).
I've decided to release the dataset anyway because it may already be useful to some people.
## general information
### where is the dataset?
Currently kindly hosted on 🤗 [Hugging Face](https://huggingface.co/datasets/elektra/scifi-corpus).
### what does it consist of?
The dataset consists of a JSON file formatted in the following manner:
```
{
"instruction": "Immersed in the serene depths of the monastery, Nakamura crossed paths with the
enigmatic Master Xin, a sage of unparalleled wisdom.",
"input": "",
"output": "Within the monastery, Nakamura encountered a sage, an enigmatic figure known as Master
Xin. Wise and all-knowing, Master Xin possessed an understanding of the intricate workings
of time itself. Through their conversations, Nakamura delved deeper into the altered timeline
mysteries and the dangers that lay ahead."
},
```
The instruction was generated by a language model (sometimes GPT by OpenAI, sometimes Falcon, sometimes Llama) based on the output. The output comes from several
different sources described in the [sources](https://github.com/marimeireles/scifi-corpus/new/master?readme#sources) section and is capped at 500 characters. The current dataset contains about 3 GB of data.
You will notice that this is the exact format current (2023) LLMs use for fine-tuning, which is the main purpose of this dataset. However, you're free to modify
the data as you wish and change its formatting.
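Since the records follow the common instruction/input/output convention, they can be rendered into a single training string in the usual Alpaca-style way. This is a minimal sketch; the template wording below is a common convention and an assumption on my part, not something the dataset itself prescribes:

```python
def to_prompt(record: dict) -> str:
    """Render an instruction/input/output record into one training string.

    The section headers below follow the widespread Alpaca-style template;
    the dataset does not mandate any particular template.
    """
    if record.get("input"):
        return (
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            f"### Response:\n{record['output']}"
        )
    # Records with an empty "input" field (as in the example above)
    # skip the Input section entirely.
    return (
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['output']}"
    )

record = {
    "instruction": "Immersed in the serene depths of the monastery, "
                   "Nakamura crossed paths with the enigmatic Master Xin.",
    "input": "",
    "output": "Within the monastery, Nakamura encountered a sage.",
}
prompt = to_prompt(record)
```

Any other chat or prompt format works just as well; only the three field names matter.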
Contributions are very much appreciated, you can check the [projects page](https://github.com/users/marimeireles/projects/1) to learn how to get involved.
## sources
- reddit:
- r/cyberpunk_stories ✅
- r/shortscifistories - Script ready
- omdb ✅
- gutenberg ✅
- aooo - Script ready
- specific wikis:
- KOTOR - Needs script
- SW - Needs script
- Star Trek - Needs script
- isfdb - Needs script
- [SciFi Stories Text Corpus](https://www.kaggle.com/datasets/jannesklaas/scifi-stories-text-corpus) - Needs work
- [SF Corpus](https://huggingface.co/SF-Corpus) - Needs work
## how to cite
Meireles, M. (2023). Sci-Fi Corpus. ORCID: 0000-0001-9227-9798. Available at: https://huggingface.co/datasets/elektra/scifi-corpus
|
sehyun66/News-sentiments | ---
dataset_info:
- config_name: bertplus
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 130253804
num_examples: 316086
download_size: 73025646
dataset_size: 130253804
- config_name: debert
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 130884482
num_examples: 316086
download_size: 73648726
dataset_size: 130884482
- config_name: distill
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 131086592
num_examples: 316086
download_size: 71723929
dataset_size: 131086592
- config_name: finbert
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 131074564
num_examples: 316086
download_size: 73670360
dataset_size: 131074564
configs:
- config_name: bertplus
data_files:
- split: default
path: bertplus/default-*
- config_name: debert
data_files:
- split: default
path: debert/default-*
- config_name: distill
data_files:
- split: default
path: distill/default-*
- config_name: finbert
data_files:
- split: default
path: finbert/default-*
---
# Dataset Card for "News-sentiments"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Erynan/4_ethics_all | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 60738644
num_examples: 68145
download_size: 11300119
dataset_size: 60738644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
maimi2009/Heisei | ---
license: unknown
---
|
male-2/training_v0.0.5-public_convert | ---
dataset_info:
features:
- name: id
dtype: string
- name: type
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: emotion
struct:
- name: joyful
dtype: bool
- name: sad
dtype: bool
- name: angry
dtype: bool
- name: example
dtype: string
splits:
- name: train
num_bytes: 1018
num_examples: 1
download_size: 9065
dataset_size: 1018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_ab24g21__LaterLlamaV2 | ---
pretty_name: Evaluation run of ab24g21/LaterLlamaV2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ab24g21/LaterLlamaV2](https://huggingface.co/ab24g21/LaterLlamaV2) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ab24g21__LaterLlamaV2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-29T19:09:56.465728](https://huggingface.co/datasets/open-llm-leaderboard/details_ab24g21__LaterLlamaV2/blob/main/results_2024-03-29T19-09-56.465728.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5461869266998542,\n\
\ \"acc_stderr\": 0.033788120399471086,\n \"acc_norm\": 0.5507018751608478,\n\
\ \"acc_norm_stderr\": 0.034496936259557756,\n \"mc1\": 0.2839657282741738,\n\
\ \"mc1_stderr\": 0.01578537085839672,\n \"mc2\": 0.4414865313489548,\n\
\ \"mc2_stderr\": 0.015331891416062246\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5511945392491467,\n \"acc_stderr\": 0.014534599585097667,\n\
\ \"acc_norm\": 0.590443686006826,\n \"acc_norm_stderr\": 0.014370358632472435\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6230830511850229,\n\
\ \"acc_stderr\": 0.004836234143655406,\n \"acc_norm\": 0.8181637124078869,\n\
\ \"acc_norm_stderr\": 0.0038492126228151734\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526066,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526066\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5037037037037037,\n\
\ \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.5037037037037037,\n\
\ \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5460526315789473,\n \"acc_stderr\": 0.04051646342874142,\n\
\ \"acc_norm\": 0.5460526315789473,\n \"acc_norm_stderr\": 0.04051646342874142\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.49,\n\
\ \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.49,\n \
\ \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.569811320754717,\n \"acc_stderr\": 0.030471445867183238,\n\
\ \"acc_norm\": 0.569811320754717,\n \"acc_norm_stderr\": 0.030471445867183238\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5972222222222222,\n\
\ \"acc_stderr\": 0.04101405519842426,\n \"acc_norm\": 0.5972222222222222,\n\
\ \"acc_norm_stderr\": 0.04101405519842426\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\"\
: 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.48554913294797686,\n\
\ \"acc_stderr\": 0.03810871630454764,\n \"acc_norm\": 0.48554913294797686,\n\
\ \"acc_norm_stderr\": 0.03810871630454764\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.04576665403207763,\n\
\ \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.04576665403207763\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n\
\ \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.37872340425531914,\n \"acc_stderr\": 0.03170995606040655,\n\
\ \"acc_norm\": 0.37872340425531914,\n \"acc_norm_stderr\": 0.03170995606040655\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n\
\ \"acc_stderr\": 0.04339138322579861,\n \"acc_norm\": 0.30701754385964913,\n\
\ \"acc_norm_stderr\": 0.04339138322579861\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.34656084656084657,\n \"acc_stderr\": 0.024508777521028424,\n \"\
acc_norm\": 0.34656084656084657,\n \"acc_norm_stderr\": 0.024508777521028424\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.30952380952380953,\n\
\ \"acc_stderr\": 0.04134913018303316,\n \"acc_norm\": 0.30952380952380953,\n\
\ \"acc_norm_stderr\": 0.04134913018303316\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6516129032258065,\n\
\ \"acc_stderr\": 0.027104826328100944,\n \"acc_norm\": 0.6516129032258065,\n\
\ \"acc_norm_stderr\": 0.027104826328100944\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.45320197044334976,\n \"acc_stderr\": 0.03502544650845872,\n\
\ \"acc_norm\": 0.45320197044334976,\n \"acc_norm_stderr\": 0.03502544650845872\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.55,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\"\
: 0.55,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6787878787878788,\n \"acc_stderr\": 0.036462049632538115,\n\
\ \"acc_norm\": 0.6787878787878788,\n \"acc_norm_stderr\": 0.036462049632538115\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6919191919191919,\n \"acc_stderr\": 0.032894773300986155,\n \"\
acc_norm\": 0.6919191919191919,\n \"acc_norm_stderr\": 0.032894773300986155\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7823834196891192,\n \"acc_stderr\": 0.029778663037752954,\n\
\ \"acc_norm\": 0.7823834196891192,\n \"acc_norm_stderr\": 0.029778663037752954\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.49230769230769234,\n \"acc_stderr\": 0.025348006031534785,\n\
\ \"acc_norm\": 0.49230769230769234,\n \"acc_norm_stderr\": 0.025348006031534785\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2851851851851852,\n \"acc_stderr\": 0.027528599210340496,\n \
\ \"acc_norm\": 0.2851851851851852,\n \"acc_norm_stderr\": 0.027528599210340496\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.542016806722689,\n \"acc_stderr\": 0.03236361111951941,\n \
\ \"acc_norm\": 0.542016806722689,\n \"acc_norm_stderr\": 0.03236361111951941\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"\
acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7486238532110092,\n \"acc_stderr\": 0.018599206360287415,\n \"\
acc_norm\": 0.7486238532110092,\n \"acc_norm_stderr\": 0.018599206360287415\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608044,\n \"\
acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608044\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7450980392156863,\n \"acc_stderr\": 0.03058759135160425,\n \"\
acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.03058759135160425\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7426160337552743,\n \"acc_stderr\": 0.028458820991460305,\n \
\ \"acc_norm\": 0.7426160337552743,\n \"acc_norm_stderr\": 0.028458820991460305\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6457399103139013,\n\
\ \"acc_stderr\": 0.032100621541349864,\n \"acc_norm\": 0.6457399103139013,\n\
\ \"acc_norm_stderr\": 0.032100621541349864\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6106870229007634,\n \"acc_stderr\": 0.04276486542814591,\n\
\ \"acc_norm\": 0.6106870229007634,\n \"acc_norm_stderr\": 0.04276486542814591\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n\
\ \"acc_stderr\": 0.043733130409147614,\n \"acc_norm\": 0.7129629629629629,\n\
\ \"acc_norm_stderr\": 0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6503067484662577,\n \"acc_stderr\": 0.037466683254700206,\n\
\ \"acc_norm\": 0.6503067484662577,\n \"acc_norm_stderr\": 0.037466683254700206\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n\
\ \"acc_stderr\": 0.045218299028335865,\n \"acc_norm\": 0.3482142857142857,\n\
\ \"acc_norm_stderr\": 0.045218299028335865\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.04354631077260595,\n\
\ \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.04354631077260595\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7991452991452992,\n\
\ \"acc_stderr\": 0.02624677294689048,\n \"acc_norm\": 0.7991452991452992,\n\
\ \"acc_norm_stderr\": 0.02624677294689048\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7471264367816092,\n\
\ \"acc_stderr\": 0.015543377313719681,\n \"acc_norm\": 0.7471264367816092,\n\
\ \"acc_norm_stderr\": 0.015543377313719681\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5838150289017341,\n \"acc_stderr\": 0.026538189104705474,\n\
\ \"acc_norm\": 0.5838150289017341,\n \"acc_norm_stderr\": 0.026538189104705474\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.29497206703910617,\n\
\ \"acc_stderr\": 0.015251931579208167,\n \"acc_norm\": 0.29497206703910617,\n\
\ \"acc_norm_stderr\": 0.015251931579208167\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.027914055510468008,\n\
\ \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.027914055510468008\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.617363344051447,\n\
\ \"acc_stderr\": 0.02760468902858199,\n \"acc_norm\": 0.617363344051447,\n\
\ \"acc_norm_stderr\": 0.02760468902858199\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.027339546640662737,\n\
\ \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.027339546640662737\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.3829787234042553,\n \"acc_stderr\": 0.02899908090480618,\n \
\ \"acc_norm\": 0.3829787234042553,\n \"acc_norm_stderr\": 0.02899908090480618\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.38461538461538464,\n\
\ \"acc_stderr\": 0.012425548416302943,\n \"acc_norm\": 0.38461538461538464,\n\
\ \"acc_norm_stderr\": 0.012425548416302943\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5073529411764706,\n \"acc_stderr\": 0.030369552523902173,\n\
\ \"acc_norm\": 0.5073529411764706,\n \"acc_norm_stderr\": 0.030369552523902173\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5326797385620915,\n \"acc_stderr\": 0.0201845833591022,\n \
\ \"acc_norm\": 0.5326797385620915,\n \"acc_norm_stderr\": 0.0201845833591022\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6448979591836734,\n \"acc_stderr\": 0.030635655150387638,\n\
\ \"acc_norm\": 0.6448979591836734,\n \"acc_norm_stderr\": 0.030635655150387638\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.746268656716418,\n\
\ \"acc_stderr\": 0.03076944496729602,\n \"acc_norm\": 0.746268656716418,\n\
\ \"acc_norm_stderr\": 0.03076944496729602\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4819277108433735,\n\
\ \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.4819277108433735,\n\
\ \"acc_norm_stderr\": 0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7309941520467836,\n \"acc_stderr\": 0.03401052620104089,\n\
\ \"acc_norm\": 0.7309941520467836,\n \"acc_norm_stderr\": 0.03401052620104089\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2839657282741738,\n\
\ \"mc1_stderr\": 0.01578537085839672,\n \"mc2\": 0.4414865313489548,\n\
\ \"mc2_stderr\": 0.015331891416062246\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.760852407261247,\n \"acc_stderr\": 0.011988541844843907\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.29946929492039426,\n \
\ \"acc_stderr\": 0.012616300735519661\n }\n}\n```"
repo_url: https://huggingface.co/ab24g21/LaterLlamaV2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|arc:challenge|25_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|gsm8k|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hellaswag|10_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T19-09-56.465728.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T19-09-56.465728.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- '**/details_harness|winogrande|5_2024-03-29T19-09-56.465728.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-29T19-09-56.465728.parquet'
- config_name: results
data_files:
- split: 2024_03_29T19_09_56.465728
path:
- results_2024-03-29T19-09-56.465728.parquet
- split: latest
path:
- results_2024-03-29T19-09-56.465728.parquet
---
# Dataset Card for Evaluation run of ab24g21/LaterLlamaV2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ab24g21/LaterLlamaV2](https://huggingface.co/ab24g21/LaterLlamaV2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ab24g21__LaterLlamaV2",
"harness_winogrande_5",
	split="latest")
```
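The config names listed in the YAML above are derived mechanically from the harness task keys that appear in the results JSON below: the characters `|`, `:`, and `-` are each replaced by `_`. As a small illustration (the helper name is ours, not part of the `datasets` library), a task key can be mapped to its config name offline:

```python
def to_config_name(harness_task: str) -> str:
    """Map a harness task key (as it appears in the results JSON)
    to the corresponding dataset config name.

    The punctuation in the task key ('|', ':', '-') becomes '_',
    e.g. "harness|truthfulqa:mc|0" -> "harness_truthfulqa_mc_0".
    """
    return harness_task.replace("|", "_").replace(":", "_").replace("-", "_")
```

This is only a convenience for scripting over the per-task configs; the mapping follows the naming scheme visible in this card's own metadata.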
## Latest results
These are the [latest results from run 2024-03-29T19:09:56.465728](https://huggingface.co/datasets/open-llm-leaderboard/details_ab24g21__LaterLlamaV2/blob/main/results_2024-03-29T19-09-56.465728.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each task in its own configuration, each with a "latest" split):
```python
{
"all": {
"acc": 0.5461869266998542,
"acc_stderr": 0.033788120399471086,
"acc_norm": 0.5507018751608478,
"acc_norm_stderr": 0.034496936259557756,
"mc1": 0.2839657282741738,
"mc1_stderr": 0.01578537085839672,
"mc2": 0.4414865313489548,
"mc2_stderr": 0.015331891416062246
},
"harness|arc:challenge|25": {
"acc": 0.5511945392491467,
"acc_stderr": 0.014534599585097667,
"acc_norm": 0.590443686006826,
"acc_norm_stderr": 0.014370358632472435
},
"harness|hellaswag|10": {
"acc": 0.6230830511850229,
"acc_stderr": 0.004836234143655406,
"acc_norm": 0.8181637124078869,
"acc_norm_stderr": 0.0038492126228151734
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526066,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526066
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5037037037037037,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.5037037037037037,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5460526315789473,
"acc_stderr": 0.04051646342874142,
"acc_norm": 0.5460526315789473,
"acc_norm_stderr": 0.04051646342874142
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.569811320754717,
"acc_stderr": 0.030471445867183238,
"acc_norm": 0.569811320754717,
"acc_norm_stderr": 0.030471445867183238
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5972222222222222,
"acc_stderr": 0.04101405519842426,
"acc_norm": 0.5972222222222222,
"acc_norm_stderr": 0.04101405519842426
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.48554913294797686,
"acc_stderr": 0.03810871630454764,
"acc_norm": 0.48554913294797686,
"acc_norm_stderr": 0.03810871630454764
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.04576665403207763,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.04576665403207763
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.37872340425531914,
"acc_stderr": 0.03170995606040655,
"acc_norm": 0.37872340425531914,
"acc_norm_stderr": 0.03170995606040655
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.30701754385964913,
"acc_stderr": 0.04339138322579861,
"acc_norm": 0.30701754385964913,
"acc_norm_stderr": 0.04339138322579861
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.34656084656084657,
"acc_stderr": 0.024508777521028424,
"acc_norm": 0.34656084656084657,
"acc_norm_stderr": 0.024508777521028424
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.30952380952380953,
"acc_stderr": 0.04134913018303316,
"acc_norm": 0.30952380952380953,
"acc_norm_stderr": 0.04134913018303316
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6516129032258065,
"acc_stderr": 0.027104826328100944,
"acc_norm": 0.6516129032258065,
"acc_norm_stderr": 0.027104826328100944
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.45320197044334976,
"acc_stderr": 0.03502544650845872,
"acc_norm": 0.45320197044334976,
"acc_norm_stderr": 0.03502544650845872
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.55,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6787878787878788,
"acc_stderr": 0.036462049632538115,
"acc_norm": 0.6787878787878788,
"acc_norm_stderr": 0.036462049632538115
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6919191919191919,
"acc_stderr": 0.032894773300986155,
"acc_norm": 0.6919191919191919,
"acc_norm_stderr": 0.032894773300986155
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7823834196891192,
"acc_stderr": 0.029778663037752954,
"acc_norm": 0.7823834196891192,
"acc_norm_stderr": 0.029778663037752954
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.49230769230769234,
"acc_stderr": 0.025348006031534785,
"acc_norm": 0.49230769230769234,
"acc_norm_stderr": 0.025348006031534785
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2851851851851852,
"acc_stderr": 0.027528599210340496,
"acc_norm": 0.2851851851851852,
"acc_norm_stderr": 0.027528599210340496
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.542016806722689,
"acc_stderr": 0.03236361111951941,
"acc_norm": 0.542016806722689,
"acc_norm_stderr": 0.03236361111951941
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7486238532110092,
"acc_stderr": 0.018599206360287415,
"acc_norm": 0.7486238532110092,
"acc_norm_stderr": 0.018599206360287415
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.03362277436608044,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.03362277436608044
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.03058759135160425,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.03058759135160425
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7426160337552743,
"acc_stderr": 0.028458820991460305,
"acc_norm": 0.7426160337552743,
"acc_norm_stderr": 0.028458820991460305
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6457399103139013,
"acc_stderr": 0.032100621541349864,
"acc_norm": 0.6457399103139013,
"acc_norm_stderr": 0.032100621541349864
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6106870229007634,
"acc_stderr": 0.04276486542814591,
"acc_norm": 0.6106870229007634,
"acc_norm_stderr": 0.04276486542814591
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7129629629629629,
"acc_stderr": 0.043733130409147614,
"acc_norm": 0.7129629629629629,
"acc_norm_stderr": 0.043733130409147614
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6503067484662577,
"acc_stderr": 0.037466683254700206,
"acc_norm": 0.6503067484662577,
"acc_norm_stderr": 0.037466683254700206
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.045218299028335865,
"acc_norm": 0.3482142857142857,
"acc_norm_stderr": 0.045218299028335865
},
"harness|hendrycksTest-management|5": {
"acc": 0.7378640776699029,
"acc_stderr": 0.04354631077260595,
"acc_norm": 0.7378640776699029,
"acc_norm_stderr": 0.04354631077260595
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7991452991452992,
"acc_stderr": 0.02624677294689048,
"acc_norm": 0.7991452991452992,
"acc_norm_stderr": 0.02624677294689048
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7471264367816092,
"acc_stderr": 0.015543377313719681,
"acc_norm": 0.7471264367816092,
"acc_norm_stderr": 0.015543377313719681
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5838150289017341,
"acc_stderr": 0.026538189104705474,
"acc_norm": 0.5838150289017341,
"acc_norm_stderr": 0.026538189104705474
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.29497206703910617,
"acc_stderr": 0.015251931579208167,
"acc_norm": 0.29497206703910617,
"acc_norm_stderr": 0.015251931579208167
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.027914055510468008,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.027914055510468008
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.617363344051447,
"acc_stderr": 0.02760468902858199,
"acc_norm": 0.617363344051447,
"acc_norm_stderr": 0.02760468902858199
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.027339546640662737,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.027339546640662737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3829787234042553,
"acc_stderr": 0.02899908090480618,
"acc_norm": 0.3829787234042553,
"acc_norm_stderr": 0.02899908090480618
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.38461538461538464,
"acc_stderr": 0.012425548416302943,
"acc_norm": 0.38461538461538464,
"acc_norm_stderr": 0.012425548416302943
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5073529411764706,
"acc_stderr": 0.030369552523902173,
"acc_norm": 0.5073529411764706,
"acc_norm_stderr": 0.030369552523902173
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5326797385620915,
"acc_stderr": 0.0201845833591022,
"acc_norm": 0.5326797385620915,
"acc_norm_stderr": 0.0201845833591022
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6448979591836734,
"acc_stderr": 0.030635655150387638,
"acc_norm": 0.6448979591836734,
"acc_norm_stderr": 0.030635655150387638
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.746268656716418,
"acc_stderr": 0.03076944496729602,
"acc_norm": 0.746268656716418,
"acc_norm_stderr": 0.03076944496729602
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4819277108433735,
"acc_stderr": 0.038899512528272166,
"acc_norm": 0.4819277108433735,
"acc_norm_stderr": 0.038899512528272166
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7309941520467836,
"acc_stderr": 0.03401052620104089,
"acc_norm": 0.7309941520467836,
"acc_norm_stderr": 0.03401052620104089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2839657282741738,
"mc1_stderr": 0.01578537085839672,
"mc2": 0.4414865313489548,
"mc2_stderr": 0.015331891416062246
},
"harness|winogrande|5": {
"acc": 0.760852407261247,
"acc_stderr": 0.011988541844843907
},
"harness|gsm8k|5": {
"acc": 0.29946929492039426,
"acc_stderr": 0.012616300735519661
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Poloman/Colab | ---
license: openrail
---
|
lhallee/BP_reg | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seqs
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 160080367
num_examples: 26225
- name: valid
num_bytes: 17713055
num_examples: 2904
- name: test
num_bytes: 20667631
num_examples: 3350
download_size: 15126192
dataset_size: 198461053
---
# Dataset Card for "BP_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-13000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1071752
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
breno30/wanda | ---
license: openrail
---
|
MrezaPRZ/sql_judge_dataset | ---
license: apache-2.0
---
|
likhith45688/lm_dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 606341604
num_examples: 361779
- name: valid
num_bytes: 144454440
num_examples: 86190
download_size: 137305987
dataset_size: 750796044
---
# Dataset Card for "lm_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
on1onmangoes/First11VoiceHarmony071523 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: file
dtype: string
splits:
- name: train
num_bytes: 3127
num_examples: 11
download_size: 5968
dataset_size: 3127
---
# Dataset Card for "First11VoiceHarmony071523"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_cola_regularized_past_tense | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 12963
num_examples: 189
- name: test
num_bytes: 11801
num_examples: 176
- name: train
num_bytes: 114868
num_examples: 1654
download_size: 67917
dataset_size: 139632
---
# Dataset Card for "MULTI_VALUE_cola_regularized_past_tense"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Thanmay/boolq-translated | ---
dataset_info:
- config_name: en
features:
- name: question
dtype: string
- name: answer
dtype: bool
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5829584
num_examples: 9427
- name: validation
num_bytes: 1998182
num_examples: 3270
download_size: 4942776
dataset_size: 7827766
- config_name: gu
features:
- name: answer
dtype: bool
- name: question
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 13882863
num_examples: 9427
- name: validation
num_bytes: 4657077
num_examples: 3270
download_size: 7248225
dataset_size: 18539940
- config_name: hi
features:
- name: answer
dtype: bool
- name: question
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 14131229
num_examples: 9427
- name: validation
num_bytes: 4805980
num_examples: 3270
download_size: 7204191
dataset_size: 18937209
- config_name: ml
features:
- name: answer
dtype: bool
- name: question
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 15712315
num_examples: 9427
- name: validation
num_bytes: 5371267
num_examples: 3270
download_size: 7872021
dataset_size: 21083582
- config_name: mr
features:
- name: answer
dtype: bool
- name: question
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 14464334
num_examples: 9427
- name: validation
num_bytes: 4918348
num_examples: 3270
download_size: 7506868
dataset_size: 19382682
- config_name: ta
features:
- name: answer
dtype: bool
- name: question
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 16744191
num_examples: 9427
- name: validation
num_bytes: 5709610
num_examples: 3270
download_size: 7926082
dataset_size: 22453801
configs:
- config_name: en
data_files:
- split: train
path: en/train-*
- split: validation
path: en/validation-*
- config_name: gu
data_files:
- split: train
path: gu/train-*
- split: validation
path: gu/validation-*
- config_name: hi
data_files:
- split: train
path: hi/train-*
- split: validation
path: hi/validation-*
- config_name: ml
data_files:
- split: train
path: ml/train-*
- split: validation
path: ml/validation-*
- config_name: mr
data_files:
- split: train
path: mr/train-*
- split: validation
path: mr/validation-*
- config_name: ta
data_files:
- split: train
path: ta/train-*
- split: validation
path: ta/validation-*
---
|
mnoukhov/compare_results | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 182356
num_examples: 100
download_size: 123656
dataset_size: 182356
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/med_alpaca_standardized_cluster_48_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4621649
num_examples: 5549
download_size: 1854755
dataset_size: 4621649
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_48_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sujitthakur/mini-platypus | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245924
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
freshpearYoon/train_free_4 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 9604866360
num_examples: 10000
download_size: 1439226350
dataset_size: 9604866360
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ovior/twitter_dataset_1713059085 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2261350
num_examples: 7122
download_size: 1265918
dataset_size: 2261350
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fia24/banel_including_pos_training_dataset_90 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: translation
struct:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 1386207
num_examples: 18105
- name: test
num_bytes: 155599
num_examples: 2012
download_size: 621202
dataset_size: 1541806
---
# Dataset Card for "banel_including_pos_training_dataset_90"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alexshengzhili/blip_eval | ---
dataset_info:
features:
- name: image_file
dtype: string
- name: id
dtype: string
- name: caption
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: first_mention
dtype: string
- name: response
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: q_a_pairs
sequence:
sequence: string
- name: response_BLIP2
dtype: string
splits:
- name: 1_percent_as_validation
num_bytes: 17146966
num_examples: 3002
download_size: 7934946
dataset_size: 17146966
---
# Dataset Card for "blip_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amishshah/imbalanced_8 | ---
dataset_info:
features:
- name: title
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 45166669.74
num_examples: 27000
- name: test
num_bytes: 5018518.86
num_examples: 3000
download_size: 0
dataset_size: 50185188.6
---
# Dataset Card for "imbalanced_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jmichaelov/inverse_scaling_prize-neqa | ---
license: cc-by-4.0
---
|
juliojfdghdg/murilo | ---
license: openrail
---
|
BangumiBase/flipflappers | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Flip Flappers
This is the image base of bangumi Flip Flappers. We detected 26 characters and 1442 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 423 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 62 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 31 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 37 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 8 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 64 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 41 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 23 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 269 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 8 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 21 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 21 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 56 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 35 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 6 | [Download](14/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 15 | 32 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 15 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 9 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 17 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 25 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 18 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 40 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 16 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 6 | [Download](23/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 24 | 7 | [Download](24/dataset.zip) |  |  |  |  |  |  |  | N/A |
| noise | 152 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
yuan-sf63/word_label_0.8_96_D | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
- name: '64'
dtype: int64
- name: '65'
dtype: int64
- name: '66'
dtype: int64
- name: '67'
dtype: int64
- name: '68'
dtype: int64
- name: '69'
dtype: int64
- name: '70'
dtype: int64
- name: '71'
dtype: int64
- name: '72'
dtype: int64
- name: '73'
dtype: int64
- name: '74'
dtype: int64
- name: '75'
dtype: int64
- name: '76'
dtype: int64
- name: '77'
dtype: int64
- name: '78'
dtype: int64
- name: '79'
dtype: int64
- name: '80'
dtype: int64
- name: '81'
dtype: int64
- name: '82'
dtype: int64
- name: '83'
dtype: int64
- name: '84'
dtype: int64
- name: '85'
dtype: int64
- name: '86'
dtype: int64
- name: '87'
dtype: int64
- name: '88'
dtype: int64
- name: '89'
dtype: int64
- name: '90'
dtype: int64
- name: '91'
dtype: int64
- name: '92'
dtype: int64
- name: '93'
dtype: int64
- name: '94'
dtype: int64
- name: '95'
dtype: int64
splits:
- name: train
num_bytes: 63663082.71246921
num_examples: 71982
- name: validation
num_bytes: 7074560.287530788
num_examples: 7999
download_size: 10026144
dataset_size: 70737643.0
---
# Dataset Card for "word_label_0.8_96_D"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
biglam/loc_beyond_words | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: bw_id
dtype: string
- name: category_id
dtype:
class_label:
names:
'0': Photograph
'1': Illustration
'2': Map
'3': Comics/Cartoon
'4': Editorial Cartoon
'5': Headline
'6': Advertisement
- name: image_id
dtype: string
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: iscrowd
dtype: bool
splits:
- name: train
num_bytes: 2854507
num_examples: 2846
- name: validation
num_bytes: 731782
num_examples: 712
download_size: 1200053819
dataset_size: 3586289
license: cc0-1.0
task_categories:
- object-detection
tags:
- lam
- newspapers
- document-layout
pretty_name: Beyond Words
size_categories:
- 1K<n<10K
---
# Dataset Card for Beyond Words
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://labs.loc.gov/
- **Repository:** https://github.com/LibraryOfCongress/newspaper-navigator
- **Paper:** https://arxiv.org/abs/2005.01583
- **Leaderboard:**
- **Point of Contact:** LC-Labs@loc.gov
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
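While the instance documentation is pending, the `objects` feature schema in the YAML header above already pins down the shape of an annotation. A minimal local sketch of reading one object — note the example record is hypothetical and the COCO-style `[x, y, width, height]` bbox interpretation is an assumption, not something this card confirms:

```python
# Category names copied from the `objects.category_id` ClassLabel in the YAML above.
categories = ['Photograph', 'Illustration', 'Map', 'Comics/Cartoon',
              'Editorial Cartoon', 'Headline', 'Advertisement']

# Hypothetical record shaped like one entry of the `objects` sequence;
# the bbox format (x, y, width, height) is assumed, not documented here.
obj = {'bw_id': 'example', 'category_id': 5,
       'bbox': [10.0, 20.0, 300.0, 40.0], 'area': 12000, 'iscrowd': False}

print(categories[obj['category_id']])  # Headline
x, y, w, h = obj['bbox']
print((x, y, x + w, y + h))            # corner form: (10.0, 20.0, 310.0, 60.0)
```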
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@inproceedings{10.1145/3340531.3412767,
author = {Lee, Benjamin Charles Germain and Mears, Jaime and Jakeway, Eileen and Ferriter, Meghan and Adams, Chris and Yarasavage, Nathan and Thomas, Deborah and Zwaard, Kate and Weld, Daniel S.},
title = {The Newspaper Navigator Dataset: Extracting Headlines and Visual Content from 16 Million Historic Newspaper Pages in Chronicling America},
year = {2020},
isbn = {9781450368599},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3340531.3412767},
doi = {10.1145/3340531.3412767},
abstract = {Chronicling America is a product of the National Digital Newspaper Program, a partnership between the Library of Congress and the National Endowment for the Humanities to digitize historic American newspapers. Over 16 million pages have been digitized to date, complete with high-resolution images and machine-readable METS/ALTO OCR. Of considerable interest to Chronicling America users is a semantified corpus, complete with extracted visual content and headlines. To accomplish this, we introduce a visual content recognition model trained on bounding box annotations collected as part of the Library of Congress's Beyond Words crowdsourcing initiative and augmented with additional annotations including those of headlines and advertisements. We describe our pipeline that utilizes this deep learning model to extract 7 classes of visual content: headlines, photographs, illustrations, maps, comics, editorial cartoons, and advertisements, complete with textual content such as captions derived from the METS/ALTO OCR, as well as image embeddings. We report the results of running the pipeline on 16.3 million pages from the Chronicling America corpus and describe the resulting Newspaper Navigator dataset, the largest dataset of extracted visual content from historic newspapers ever produced. The Newspaper Navigator dataset, finetuned visual content recognition model, and all source code are placed in the public domain for unrestricted re-use.},
booktitle = {Proceedings of the 29th ACM International Conference on Information & Knowledge Management},
pages = {3055–3062},
numpages = {8},
keywords = {digital humanities, dataset, chronicling america, newspaper navigator, document analysis, information retrieval, digital libraries and archives, public domain, historic newspapers},
location = {Virtual Event, Ireland},
series = {CIKM '20}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset. |
uatafaque/movemind2 | ---
license: openrail
---
|
keremberke/indoor-scene-classification | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Retail
- Pest Control
- Benchmark
---
<div align="center">
<img width="640" alt="keremberke/indoor-scene-classification" src="https://huggingface.co/datasets/keremberke/indoor-scene-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['meeting_room', 'cloister', 'stairscase', 'restaurant', 'hairsalon', 'children_room', 'dining_room', 'lobby', 'museum', 'laundromat', 'computerroom', 'grocerystore', 'hospitalroom', 'buffet', 'office', 'warehouse', 'garage', 'bookstore', 'florist', 'locker_room', 'inside_bus', 'subway', 'fastfood_restaurant', 'auditorium', 'studiomusic', 'airport_inside', 'pantry', 'restaurant_kitchen', 'casino', 'movietheater', 'kitchen', 'waitingroom', 'artstudio', 'toystore', 'kindergarden', 'trainstation', 'bedroom', 'mall', 'corridor', 'bar', 'classroom', 'shoeshop', 'dentaloffice', 'videostore', 'laboratorywet', 'tv_studio', 'church_inside', 'operating_room', 'jewelleryshop', 'bathroom', 'clothingstore', 'closet', 'winecellar', 'livingroom', 'nursery', 'gameroom', 'inside_subway', 'deli', 'bakery', 'library', 'prisoncell', 'gym', 'concert_hall', 'greenhouse', 'elevator', 'poolinside', 'bowling']
```
### Number of Images
```json
{'train': 10885, 'test': 1558, 'valid': 3128}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/indoor-scene-classification", name="full")
example = ds['train'][0]
```
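Folder-format classification exports typically encode the class as an integer index into the label list above. A small offline sketch of converting between indices and names — it uses only the first few of the 66 labels and assumes indices follow the order shown in "Dataset Labels":

```python
# First few entries of the 66-entry list in "Dataset Labels" above.
labels = ['meeting_room', 'cloister', 'stairscase', 'restaurant', 'hairsalon']

# ClassLabel-style lookups in both directions.
int2str = dict(enumerate(labels))
str2int = {name: idx for idx, name in enumerate(labels)}

print(int2str[3])           # restaurant
print(str2int['cloister'])  # 1
```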
### Roboflow Dataset Page
[https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5](https://universe.roboflow.com/popular-benchmarks/mit-indoor-scene-recognition/dataset/5?ref=roboflow2huggingface)
### Citation
```
```
### License
MIT
### Dataset Summary
This dataset was exported via roboflow.com on October 24, 2022 at 4:09 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 15571 images.
Indoor-scenes are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
|
gordicaleksa/slovenian-llm-eval-v0 | ---
license: apache-2.0
language: sl
---
# Slovenian LLM eval 🇸🇮
This dataset should be used for Slovenian LLM evaluation.
Here is the [GitHub project](https://github.com/gordicaleksa/slovenian-llm-eval) used to build this dataset.
For a technical report of the project, see this in-depth [Weights & Biases report](https://wandb.ai/gordicaleksa/serbian_llm_eval/reports/First-Serbian-LLM-eval---Vmlldzo2MjgwMDA5). ❤️ Even though it was written for the Serbian LLM eval, the same process was used to build the Slovenian LLM eval.
I'll give a TL;DR here:
## What is covered?
Common sense reasoning:
* Hellaswag, Winogrande, PIQA, OpenbookQA, ARC-Easy, ARC-Challenge
World knowledge:
* NaturalQuestions, TriviaQA
Reading comprehension:
* BoolQ
## How was the eval created?
3 steps (for this version, v0, we've only done the translation and are looking for donations to push through the whole pipeline):
1. Machine Translation from English -> Slovenian using Google Translate
2. Refinement via GPT-4
3. Minor manual work by me (Aleksa Gordić) + we'll likely have a new version of Winogrande that was annotated by a human annotator
Please see [the report](https://wandb.ai/gordicaleksa/serbian_llm_eval/reports/First-Serbian-LLM-eval---Vmlldzo2MjgwMDA5) for more detail. Note that even though the report is for Serbian, the same process was used for Slovenian.
## Example of how to use
1. Create a python environment and install HuggingFace datasets (`pip install datasets`).
2. Run:
```python
import datasets

tasks = ["arc_challenge", "arc_easy", "boolq", "hellaswag", "nq_open", "openbookqa", "piqa", "triviaqa", "winogrande"]

for task in tasks:
    dataset = datasets.load_dataset("gordicaleksa/slovenian-llm-eval-v1", task)
    for split in dataset.keys():
        # Keep the DatasetDict intact; don't overwrite `dataset` while iterating it.
        split_dataset = dataset[split]
        print(f"Task: {task}, Split: {split}")
        for example in split_dataset:
            print(example)
```
# Project Sponsors
Your name will be here if you support the project, we are still looking for GPT-4 credits! :)
## Credits
Thank you to the following individuals from my [Discord server](https://discord.gg/peBrCpheKE) who helped with donating Google Translate credits & running the machine translation part of the pipeline:
[Raphael Vienne](https://www.linkedin.com/in/raphael-vienne/), [Brian Pulfer](https://www.brianpulfer.ch/), [Timotej Petrič](https://si.linkedin.com/in/timopetric), [Aljaž Potočnik](https://www.linkedin.com/in/aljaž-potočnik-70325365/), [Damjan Kodre](https://www.linkedin.com/in/damjan-kodre-34063430)
## Citation
```
@article{slovenian-llm-eval,
author = "Gordić Aleksa",
title = "Slovenian LLM Eval",
  year = "2024",
howpublished = {\url{https://huggingface.co/datasets/gordicaleksa/slovenian-llm-eval-v1}},
}
```
## License
Apache 2.0. |
georgeyw/dsir-pile-13m | ---
license: mit
---
|
ittailup/ecu_juri_rawfacts | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 114733486
num_examples: 3816
download_size: 52736931
dataset_size: 114733486
---
# Dataset Card for "ecu_juri_rawfacts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qgiaohc/twitter_dataset_1713181355 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 27494
num_examples: 62
download_size: 13980
dataset_size: 27494
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wili_2018 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ace
- af
- als
- am
- an
- ang
- ar
- arz
- as
- ast
- av
- ay
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bho
- bjn
- bn
- bo
- bpy
- br
- bs
- bxr
- ca
- cbk
- cdo
- ce
- ceb
- chr
- ckb
- co
- crh
- cs
- csb
- cv
- cy
- da
- de
- diq
- dsb
- dty
- dv
- egl
- el
- en
- eo
- es
- et
- eu
- ext
- fa
- fi
- fo
- fr
- frp
- fur
- fy
- ga
- gag
- gd
- gl
- glk
- gn
- gu
- gv
- ha
- hak
- he
- hi
- hif
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ig
- ilo
- io
- is
- it
- ja
- jam
- jbo
- jv
- ka
- kaa
- kab
- kbd
- kk
- km
- kn
- ko
- koi
- kok
- krc
- ksh
- ku
- kv
- kw
- ky
- la
- lad
- lb
- lez
- lg
- li
- lij
- lmo
- ln
- lo
- lrc
- lt
- ltg
- lv
- lzh
- mai
- map
- mdf
- mg
- mhr
- mi
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mwl
- my
- myv
- mzn
- nan
- nap
- nb
- nci
- nds
- ne
- new
- nl
- nn
- nrm
- nso
- nv
- oc
- olo
- om
- or
- os
- pa
- pag
- pam
- pap
- pcd
- pdc
- pfl
- pl
- pnb
- ps
- pt
- qu
- rm
- ro
- roa
- ru
- rue
- rup
- rw
- sa
- sah
- sc
- scn
- sco
- sd
- sgs
- sh
- si
- sk
- sl
- sme
- sn
- so
- sq
- sr
- srn
- stq
- su
- sv
- sw
- szl
- ta
- tcy
- te
- tet
- tg
- th
- tk
- tl
- tn
- to
- tr
- tt
- tyv
- udm
- ug
- uk
- ur
- uz
- vec
- vep
- vi
- vls
- vo
- vro
- wa
- war
- wo
- wuu
- xh
- xmf
- yi
- yo
- zea
- zh
license:
- odbl
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: wili-2018
pretty_name: Wili2018
language_bcp47:
- be-tarask
- map-bms
- nds-nl
- roa-tara
- zh-yue
tags:
- language-identification
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': cdo
'1': glk
'2': jam
'3': lug
'4': san
'5': rue
'6': wol
'7': new
'8': mwl
'9': bre
'10': ara
'11': hye
'12': xmf
'13': ext
'14': cor
'15': yor
'16': div
'17': asm
'18': lat
'19': cym
'20': hif
'21': ace
'22': kbd
'23': tgk
'24': rus
'25': nso
'26': mya
'27': msa
'28': ava
'29': cbk
'30': urd
'31': deu
'32': swa
'33': pus
'34': bxr
'35': udm
'36': csb
'37': yid
'38': vro
'39': por
'40': pdc
'41': eng
'42': tha
'43': hat
'44': lmo
'45': pag
'46': jav
'47': chv
'48': nan
'49': sco
'50': kat
'51': bho
'52': bos
'53': kok
'54': oss
'55': mri
'56': fry
'57': cat
'58': azb
'59': kin
'60': hin
'61': sna
'62': dan
'63': egl
'64': mkd
'65': ron
'66': bul
'67': hrv
'68': som
'69': pam
'70': nav
'71': ksh
'72': nci
'73': khm
'74': sgs
'75': srn
'76': bar
'77': cos
'78': ckb
'79': pfl
'80': arz
'81': roa-tara
'82': fra
'83': mai
'84': zh-yue
'85': guj
'86': fin
'87': kir
'88': vol
'89': hau
'90': afr
'91': uig
'92': lao
'93': swe
'94': slv
'95': kor
'96': szl
'97': srp
'98': dty
'99': nrm
'100': dsb
'101': ind
'102': wln
'103': pnb
'104': ukr
'105': bpy
'106': vie
'107': tur
'108': aym
'109': lit
'110': zea
'111': pol
'112': est
'113': scn
'114': vls
'115': stq
'116': gag
'117': grn
'118': kaz
'119': ben
'120': pcd
'121': bjn
'122': krc
'123': amh
'124': diq
'125': ltz
'126': ita
'127': kab
'128': bel
'129': ang
'130': mhr
'131': che
'132': koi
'133': glv
'134': ido
'135': fao
'136': bak
'137': isl
'138': bcl
'139': tet
'140': jpn
'141': kur
'142': map-bms
'143': tyv
'144': olo
'145': arg
'146': ori
'147': lim
'148': tel
'149': lin
'150': roh
'151': sqi
'152': xho
'153': mlg
'154': fas
'155': hbs
'156': tam
'157': aze
'158': lad
'159': nob
'160': sin
'161': gla
'162': nap
'163': snd
'164': ast
'165': mal
'166': mdf
'167': tsn
'168': nds
'169': tgl
'170': nno
'171': sun
'172': lzh
'173': jbo
'174': crh
'175': pap
'176': oci
'177': hak
'178': uzb
'179': zho
'180': hsb
'181': sme
'182': mlt
'183': vep
'184': lez
'185': nld
'186': nds-nl
'187': mrj
'188': spa
'189': ceb
'190': ina
'191': heb
'192': hun
'193': que
'194': kaa
'195': mar
'196': vec
'197': frp
'198': ell
'199': sah
'200': eus
'201': ces
'202': slk
'203': chr
'204': lij
'205': nep
'206': srd
'207': ilo
'208': be-tarask
'209': bod
'210': orm
'211': war
'212': glg
'213': mon
'214': gle
'215': min
'216': ibo
'217': ile
'218': epo
'219': lav
'220': lrc
'221': als
'222': mzn
'223': rup
'224': fur
'225': tat
'226': myv
'227': pan
'228': ton
'229': kom
'230': wuu
'231': tcy
'232': tuk
'233': kan
'234': ltg
config_name: WiLI-2018 dataset
splits:
- name: train
num_bytes: 65408201
num_examples: 117500
- name: test
num_bytes: 66491260
num_examples: 117500
download_size: 130516351
dataset_size: 131899461
---
# Dataset Card for wili_2018
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/841984
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/pdf/1801.07779
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Thoma, Martin (Email: info@martin-thoma.de)
### Dataset Summary
WiLI-2018, the Wikipedia language identification benchmark dataset, contains 235,000 paragraphs in 235 languages. The dataset is balanced and a train-test split is provided.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
235 Different Languages
## Dataset Structure
### Data Instances
```
{
'label': 207,
'sentence': 'Ti Turkia ket maysa a demokrata, sekular, unitario, batay-linteg a republika nga addaan ti taga-ugma a tinawtawid a kultura. Ti Turkia ket umadadu a naipatipon iti Laud babaen ti panagkameng kadagiti organisasion a kas ti Konsilo iti Europa, NATO, OECD, OSCE ken ti G-20 a dagiti kangrunaan nga ekonomia. Ti Turkia ket nangrugi a nakitulag ti napno a panagkameng iti Kappon ti Europa idi 2005, nga isu ket maysa idin a kumaduaan a kameng iti Europeano a Komunidad ti Ekonomia manipud idi 1963 ken nakadanon ti maysa a tulagan ti kappon ti aduana idi 1995. Ti Turkia ket nagtaraken iti asideg a kultural, politikal, ekonomiko ken industria a panakibiang iti Tengnga a Daya, dagiti Turko nga estado iti Tengnga nga Asia ken dagiti pagilian ti Aprika babaen ti panagkameng kadagiti organisasion a kas ti Turko a Konsilo, Nagsaupan nga Administrasion iti Turko nga Arte ken Kultura, Organisasion iti Islamiko a Panagtitinnulong ken ti Organisasion ti Ekonomiko a Panagtitinnulong.'
}
```
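The integer `label` indexes into the 235-entry `class_label` list in the YAML header; for the instance above, label 207 resolves to `ilo` (Ilokano), which matches the sample text. A minimal offline sketch of that lookup, using just a slice of the mapping around the sample's label:

```python
# A slice of the class_label names from the YAML header above.
names = {205: 'nep', 206: 'srd', 207: 'ilo', 208: 'be-tarask'}

sample = {'label': 207, 'sentence': 'Ti Turkia ket maysa a demokrata...'}
print(names[sample['label']])  # ilo -> the Ilokano paragraph shown above
```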
### Data Fields
[Needs More Information]
### Data Splits
117,500 paragraphs each for the train and test splits (235,000 in total, per the split metadata above).
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Martin Thoma.
### Licensing Information
ODC Open Database License v1.0
### Citation Information
```
@dataset{thoma_martin_2018_841984,
author = {Thoma, Martin},
title = {{WiLI-2018 - Wikipedia Language Identification database}},
month = jan,
year = 2018,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.841984},
url = {https://doi.org/10.5281/zenodo.841984}
}
```
### Contributions
Thanks to [@Shubhambindal2017](https://github.com/Shubhambindal2017) for adding this dataset. |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/10ebd3ca | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1337
dataset_size: 186
---
# Dataset Card for "10ebd3ca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cakiki/c-sharp_paths | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 586063746
num_examples: 20539828
download_size: 439948378
dataset_size: 586063746
---
# Dataset Card for "c-sharp_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FarhatMay/coco_train_dreambooth | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1593827.0
num_examples: 7
download_size: 1594800
dataset_size: 1593827.0
---
# Dataset Card for "coco_train_dreambooth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sanshanya/eyesdiffusion | ---
tags:
- biology
---
for test |
skrishna/coin_flip_4 | ---
dataset_info:
features:
- name: targets
dtype: string
- name: targets_vec
sequence: int64
- name: inputs
dtype: string
splits:
- name: test
num_bytes: 395686
num_examples: 2000
- name: train
num_bytes: 395989
num_examples: 2000
download_size: 181182
dataset_size: 791675
---
# Dataset Card for "coin_flip_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sinnyb/Naver_lately_news | ---
license: apache-2.0
---
|
pccl-org/formal-logic-simple-order-new-objects-bigger-50-2 | ---
dataset_info:
features:
- name: greater_than
dtype: string
- name: less_than
dtype: string
- name: correct_example
sequence: string
- name: incorrect_example
sequence: string
- name: distance
dtype: int64
splits:
- name: train
num_bytes: 180859
num_examples: 1225
download_size: 17983
dataset_size: 180859
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "formal-logic-simple-order-new-objects-bigger-50-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_jan-ai__Solar-10.7B-SLERP | ---
pretty_name: Evaluation run of jan-ai/Solar-10.7B-SLERP
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [jan-ai/Solar-10.7B-SLERP](https://huggingface.co/jan-ai/Solar-10.7B-SLERP) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jan-ai__Solar-10.7B-SLERP\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-16T15:35:26.592676](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-ai__Solar-10.7B-SLERP/blob/main/results_2023-12-16T15-35-26.592676.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6608479464480653,\n\
\ \"acc_stderr\": 0.031968087444665505,\n \"acc_norm\": 0.6623335219673708,\n\
\ \"acc_norm_stderr\": 0.03261535081063273,\n \"mc1\": 0.5079559363525091,\n\
\ \"mc1_stderr\": 0.01750128507455183,\n \"mc2\": 0.6571842191607326,\n\
\ \"mc2_stderr\": 0.015609617120580309\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6791808873720137,\n \"acc_stderr\": 0.013640943091946528,\n\
\ \"acc_norm\": 0.7073378839590444,\n \"acc_norm_stderr\": 0.013295916103619422\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7035451105357499,\n\
\ \"acc_stderr\": 0.004557606227194303,\n \"acc_norm\": 0.8787094204341764,\n\
\ \"acc_norm_stderr\": 0.003257974593789937\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.04171654161354543,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.04171654161354543\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7169811320754716,\n \"acc_stderr\": 0.027724236492700918,\n\
\ \"acc_norm\": 0.7169811320754716,\n \"acc_norm_stderr\": 0.027724236492700918\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n\
\ \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \
\ \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n\
\ \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n\
\ \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n\
\ \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \
\ \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816507\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n\
\ \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n\
\ \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.5263157894736842,\n \"acc_stderr\": 0.046970851366478626,\n\
\ \"acc_norm\": 0.5263157894736842,\n \"acc_norm_stderr\": 0.046970851366478626\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"\
acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4894179894179894,\n \"acc_stderr\": 0.02574554227604548,\n \"\
acc_norm\": 0.4894179894179894,\n \"acc_norm_stderr\": 0.02574554227604548\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n\
\ \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n\
\ \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7935483870967742,\n \"acc_stderr\": 0.023025899617188712,\n \"\
acc_norm\": 0.7935483870967742,\n \"acc_norm_stderr\": 0.023025899617188712\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n \"\
acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n\
\ \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.02805779167298902,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.02805779167298902\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603344,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603344\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635477,\n\
\ \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635477\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \
\ \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342853,\n\
\ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342853\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8385321100917431,\n \"acc_stderr\": 0.015776239256163248,\n \"\
acc_norm\": 0.8385321100917431,\n \"acc_norm_stderr\": 0.015776239256163248\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5648148148148148,\n \"acc_stderr\": 0.033812000056435254,\n \"\
acc_norm\": 0.5648148148148148,\n \"acc_norm_stderr\": 0.033812000056435254\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8284313725490197,\n \"acc_stderr\": 0.026460569561240647,\n \"\
acc_norm\": 0.8284313725490197,\n \"acc_norm_stderr\": 0.026460569561240647\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8227848101265823,\n \"acc_stderr\": 0.024856364184503214,\n \
\ \"acc_norm\": 0.8227848101265823,\n \"acc_norm_stderr\": 0.024856364184503214\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7085201793721974,\n\
\ \"acc_stderr\": 0.03050028317654585,\n \"acc_norm\": 0.7085201793721974,\n\
\ \"acc_norm_stderr\": 0.03050028317654585\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596914,\n\
\ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596914\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8264462809917356,\n \"acc_stderr\": 0.03457272836917671,\n \"\
acc_norm\": 0.8264462809917356,\n \"acc_norm_stderr\": 0.03457272836917671\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n\
\ \"acc_stderr\": 0.03755265865037182,\n \"acc_norm\": 0.8148148148148148,\n\
\ \"acc_norm_stderr\": 0.03755265865037182\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.822477650063857,\n\
\ \"acc_stderr\": 0.01366423099583483,\n \"acc_norm\": 0.822477650063857,\n\
\ \"acc_norm_stderr\": 0.01366423099583483\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7312138728323699,\n \"acc_stderr\": 0.023868003262500104,\n\
\ \"acc_norm\": 0.7312138728323699,\n \"acc_norm_stderr\": 0.023868003262500104\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4770949720670391,\n\
\ \"acc_stderr\": 0.016704945740326188,\n \"acc_norm\": 0.4770949720670391,\n\
\ \"acc_norm_stderr\": 0.016704945740326188\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7549019607843137,\n \"acc_stderr\": 0.024630048979824775,\n\
\ \"acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.024630048979824775\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7395498392282959,\n\
\ \"acc_stderr\": 0.024926723224845543,\n \"acc_norm\": 0.7395498392282959,\n\
\ \"acc_norm_stderr\": 0.024926723224845543\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n \
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\"\
: 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"\
acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.48435462842242505,\n\
\ \"acc_stderr\": 0.012763982838120948,\n \"acc_norm\": 0.48435462842242505,\n\
\ \"acc_norm_stderr\": 0.012763982838120948\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6985294117647058,\n \"acc_stderr\": 0.027875982114273168,\n\
\ \"acc_norm\": 0.6985294117647058,\n \"acc_norm_stderr\": 0.027875982114273168\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069446,\n \
\ \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069446\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n\
\ \"acc_stderr\": 0.025538433368578327,\n \"acc_norm\": 0.845771144278607,\n\
\ \"acc_norm_stderr\": 0.025538433368578327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.028782108105401705,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.028782108105401705\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5079559363525091,\n\
\ \"mc1_stderr\": 0.01750128507455183,\n \"mc2\": 0.6571842191607326,\n\
\ \"mc2_stderr\": 0.015609617120580309\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.824782951854775,\n \"acc_stderr\": 0.010684179227706163\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6125852918877938,\n \
\ \"acc_stderr\": 0.013418798447827378\n }\n}\n```"
repo_url: https://huggingface.co/jan-ai/Solar-10.7B-SLERP
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|arc:challenge|25_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|gsm8k|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hellaswag|10_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-16T15-35-26.592676.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-16T15-35-26.592676.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- '**/details_harness|winogrande|5_2023-12-16T15-35-26.592676.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-16T15-35-26.592676.parquet'
- config_name: results
data_files:
- split: 2023_12_16T15_35_26.592676
path:
- results_2023-12-16T15-35-26.592676.parquet
- split: latest
path:
- results_2023-12-16T15-35-26.592676.parquet
---
# Dataset Card for Evaluation run of jan-ai/Solar-10.7B-SLERP
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [jan-ai/Solar-10.7B-SLERP](https://huggingface.co/jan-ai/Solar-10.7B-SLERP) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jan-ai__Solar-10.7B-SLERP",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-12-16T15:35:26.592676](https://huggingface.co/datasets/open-llm-leaderboard/details_jan-ai__Solar-10.7B-SLERP/blob/main/results_2023-12-16T15-35-26.592676.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6608479464480653,
"acc_stderr": 0.031968087444665505,
"acc_norm": 0.6623335219673708,
"acc_norm_stderr": 0.03261535081063273,
"mc1": 0.5079559363525091,
"mc1_stderr": 0.01750128507455183,
"mc2": 0.6571842191607326,
"mc2_stderr": 0.015609617120580309
},
"harness|arc:challenge|25": {
"acc": 0.6791808873720137,
"acc_stderr": 0.013640943091946528,
"acc_norm": 0.7073378839590444,
"acc_norm_stderr": 0.013295916103619422
},
"harness|hellaswag|10": {
"acc": 0.7035451105357499,
"acc_stderr": 0.004557606227194303,
"acc_norm": 0.8787094204341764,
"acc_norm_stderr": 0.003257974593789937
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.04171654161354543,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.04171654161354543
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7169811320754716,
"acc_stderr": 0.027724236492700918,
"acc_norm": 0.7169811320754716,
"acc_norm_stderr": 0.027724236492700918
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5263157894736842,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.5263157894736842,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4894179894179894,
"acc_stderr": 0.02574554227604548,
"acc_norm": 0.4894179894179894,
"acc_norm_stderr": 0.02574554227604548
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.023025899617188712,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.023025899617188712
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8242424242424242,
"acc_stderr": 0.02972094300622445,
"acc_norm": 0.8242424242424242,
"acc_norm_stderr": 0.02972094300622445
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.02805779167298902,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.02805779167298902
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603344,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603344
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6564102564102564,
"acc_stderr": 0.024078696580635477,
"acc_norm": 0.6564102564102564,
"acc_norm_stderr": 0.024078696580635477
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251976,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251976
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7016806722689075,
"acc_stderr": 0.029719142876342853,
"acc_norm": 0.7016806722689075,
"acc_norm_stderr": 0.029719142876342853
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8385321100917431,
"acc_stderr": 0.015776239256163248,
"acc_norm": 0.8385321100917431,
"acc_norm_stderr": 0.015776239256163248
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5648148148148148,
"acc_stderr": 0.033812000056435254,
"acc_norm": 0.5648148148148148,
"acc_norm_stderr": 0.033812000056435254
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8284313725490197,
"acc_stderr": 0.026460569561240647,
"acc_norm": 0.8284313725490197,
"acc_norm_stderr": 0.026460569561240647
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8227848101265823,
"acc_stderr": 0.024856364184503214,
"acc_norm": 0.8227848101265823,
"acc_norm_stderr": 0.024856364184503214
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7085201793721974,
"acc_stderr": 0.03050028317654585,
"acc_norm": 0.7085201793721974,
"acc_norm_stderr": 0.03050028317654585
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596914,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596914
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8264462809917356,
"acc_stderr": 0.03457272836917671,
"acc_norm": 0.8264462809917356,
"acc_norm_stderr": 0.03457272836917671
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037182,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037182
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.822477650063857,
"acc_stderr": 0.01366423099583483,
"acc_norm": 0.822477650063857,
"acc_norm_stderr": 0.01366423099583483
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7312138728323699,
"acc_stderr": 0.023868003262500104,
"acc_norm": 0.7312138728323699,
"acc_norm_stderr": 0.023868003262500104
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4770949720670391,
"acc_stderr": 0.016704945740326188,
"acc_norm": 0.4770949720670391,
"acc_norm_stderr": 0.016704945740326188
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.024630048979824775,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.024630048979824775
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7395498392282959,
"acc_stderr": 0.024926723224845543,
"acc_norm": 0.7395498392282959,
"acc_norm_stderr": 0.024926723224845543
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.48435462842242505,
"acc_stderr": 0.012763982838120948,
"acc_norm": 0.48435462842242505,
"acc_norm_stderr": 0.012763982838120948
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6985294117647058,
"acc_stderr": 0.027875982114273168,
"acc_norm": 0.6985294117647058,
"acc_norm_stderr": 0.027875982114273168
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6797385620915033,
"acc_stderr": 0.018875682938069446,
"acc_norm": 0.6797385620915033,
"acc_norm_stderr": 0.018875682938069446
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578327,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578327
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.028782108105401705,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.028782108105401705
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5079559363525091,
"mc1_stderr": 0.01750128507455183,
"mc2": 0.6571842191607326,
"mc2_stderr": 0.015609617120580309
},
"harness|winogrande|5": {
"acc": 0.824782951854775,
"acc_stderr": 0.010684179227706163
},
"harness|gsm8k|5": {
"acc": 0.6125852918877938,
"acc_stderr": 0.013418798447827378
}
}
```
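As an illustration, the per-task scores above can be combined into a macro-average, which is how multi-task suites like MMLU (the `hendrycksTest` tasks) are typically summarized. This is a minimal sketch over a small excerpt of the dict above; it is not the leaderboard's exact aggregation code.

```python
# Macro-average the "acc" metric over MMLU (hendrycksTest) tasks,
# using a small excerpt of the per-task results shown above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.42},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6296296296296297},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6907894736842105},
    "harness|winogrande|5": {"acc": 0.824782951854775},  # not an MMLU task
}

# Keep only the hendrycksTest entries, then average their accuracies.
mmlu = {k: v for k, v in results.items() if k.startswith("harness|hendrycksTest-")}
macro_avg = sum(v["acc"] for v in mmlu.values()) / len(mmlu)
print(round(macro_avg, 4))  # 0.5801
```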
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
autoevaluate/autoeval-staging-eval-project-c3da4aa4-0386-41d1-9c7c-12d712dd287c-126120 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/squad-sample
eval_info:
task: extractive_question_answering
model: autoevaluate/distilbert-base-cased-distilled-squad
metrics: []
dataset_name: autoevaluate/squad-sample
dataset_config: autoevaluate--squad-sample
dataset_split: test
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/distilbert-base-cased-distilled-squad
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
Felladrin/pretrain-webGPT_x_dolly | ---
license: cc-by-sa-3.0
source_datasets:
- starfishmedical/webGPT_x_dolly
---
Conversion of [starfishmedical/webGPT_x_dolly](https://huggingface.co/datasets/starfishmedical/webGPT_x_dolly) dataset to be used in pretraining.
Python code used for conversion:
```python
from datasets import load_dataset
import pandas

# Load the source dataset from the Hugging Face Hub
dataset = load_dataset("starfishmedical/webGPT_x_dolly", split="train")

def format(columns):
    # Join each instruction/output pair with a blank line between them
    question = columns["instruction"].strip()
    answer = columns["output"].strip()
    return f"{question}\n\n{answer}"

# Write the formatted examples to a single "text" column in train.csv
pandas.DataFrame({"text": [format(columns) for columns in dataset]}).to_csv("train.csv", index=False)
```
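For reference, the `format` function joins each instruction/output pair with a blank line. Here is a self-contained illustration on a made-up example row (the row values are hypothetical, not from the dataset):

```python
# format() strips whitespace and joins instruction and output
# with a blank line, producing one pretraining text per row.
def format(columns):
    question = columns["instruction"].strip()
    answer = columns["output"].strip()
    return f"{question}\n\n{answer}"

# Hypothetical example row for illustration only
row = {"instruction": "  What is the capital of France?  ", "output": " Paris. "}
print(format(row))
# What is the capital of France?
#
# Paris.
```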
|
C-MTEB/IFlyTek-classification | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
'10': '10'
'11': '11'
'12': '12'
'13': '13'
'14': '14'
'15': '15'
'16': '16'
'17': '17'
'18': '18'
'19': '19'
'20': '20'
'21': '21'
'22': '22'
'23': '23'
'24': '24'
'25': '25'
'26': '26'
'27': '27'
'28': '28'
'29': '29'
'30': '30'
'31': '31'
'32': '32'
'33': '33'
'34': '34'
'35': '35'
'36': '36'
'37': '37'
'38': '38'
'39': '39'
'40': '40'
'41': '41'
'42': '42'
'43': '43'
'44': '44'
'45': '45'
'46': '46'
'47': '47'
'48': '48'
'49': '49'
'50': '50'
'51': '51'
'52': '52'
'53': '53'
'54': '54'
'55': '55'
'56': '56'
'57': '57'
'58': '58'
'59': '59'
'60': '60'
'61': '61'
'62': '62'
'63': '63'
'64': '64'
'65': '65'
'66': '66'
'67': '67'
'68': '68'
'69': '69'
'70': '70'
'71': '71'
'72': '72'
'73': '73'
'74': '74'
'75': '75'
'76': '76'
'77': '77'
'78': '78'
'79': '79'
'80': '80'
'81': '81'
'82': '82'
'83': '83'
'84': '84'
'85': '85'
'86': '86'
'87': '87'
'88': '88'
'89': '89'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
'100': '100'
'101': '101'
'102': '102'
'103': '103'
'104': '104'
'105': '105'
'106': '106'
'107': '107'
'108': '108'
'109': '109'
'110': '110'
'111': '111'
'112': '112'
'113': '113'
'114': '114'
'115': '115'
'116': '116'
'117': '117'
'118': '118'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 2105684
num_examples: 2600
- name: train
num_bytes: 10028605
num_examples: 12133
- name: validation
num_bytes: 2157119
num_examples: 2599
download_size: 9777643
dataset_size: 14291408
---
# Dataset Card for "IFlyTek-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
026g/Test | ---
license: apache-2.0
---
|
hexscr/sec-filings | ---
license: mit
---
|
roszcz/pianofor-ai-base-v2 | ---
dataset_info:
features:
- name: notes
struct:
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: control_changes
struct:
- name: number
sequence: int64
- name: time
sequence: float64
- name: value
sequence: int64
- name: source
dtype: string
splits:
- name: train
num_bytes: 1323482766
num_examples: 1237
download_size: 414443338
dataset_size: 1323482766
---
# Dataset Card for "pianofor-ai-base-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dgrnd4/stanford_dog_dataset | ---
license: afl-3.0
---
|
CyberHarem/takayama_sayoko_theidolmstermillionlive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of takayama_sayoko/高山紗代子 (THE iDOLM@STER: Million Live!)
This is the dataset of takayama_sayoko/高山紗代子 (THE iDOLM@STER: Million Live!), containing 255 images and their tags.
The core tags of this character are `long_hair, black_hair, red_eyes, bangs, breasts, glasses`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 255 | 345.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takayama_sayoko_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 255 | 198.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takayama_sayoko_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 605 | 426.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takayama_sayoko_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 255 | 305.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takayama_sayoko_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 605 | 609.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/takayama_sayoko_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/takayama_sayoko_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, smile, solo, blue_sky, day, looking_at_viewer, navel, open_mouth, outdoors, blush, cleavage, side-tie_bikini_bottom, yellow_bikini, armband, beach, cloud, front-tie_top, medium_breasts, necklace, striped_bikini, visor_cap |
| 1 | 16 |  |  |  |  |  | 1girl, solo, looking_at_viewer, smile, blush, open_mouth, dress, hat, black_gloves |
| 2 | 7 |  |  |  |  |  | 1girl, solo, white_headwear, blue_skirt, blush, holding, looking_at_viewer, megaphone, short_shorts, smile, white_gloves, white_shorts, brown_eyes, pleated_skirt, shorts_under_skirt, sleeveless_shirt, white_shirt, bare_shoulders, beret, open_mouth, parted_bangs, red_bow, very_long_hair, white_background, white_sailor_collar, closed_mouth, medium_breasts, simple_background |
| 3 | 13 |  |  |  |  |  | 1girl, looking_at_viewer, solo, school_uniform, twintails, blush, open_mouth, bow, :d, skirt |
| 4 | 10 |  |  |  |  |  | 1girl, pleated_skirt, solo, low_twintails, white_shirt, blush, grey_skirt, plaid_skirt, black-framed_eyewear, looking_at_viewer, open_mouth, puffy_short_sleeves, sailor_collar, serafuku, :d, pink_bow, bowtie, brown_eyes, kneehighs |
| 5 | 6 |  |  |  |  |  | 1girl, black_gloves, black_shorts, fingerless_gloves, looking_at_viewer, midriff, smile, solo, black_jacket, blush, crop_top, navel, ponytail, belt, hair_ornament, holding, open_jacket, shirt, short_shorts, sidelocks, cleavage, clothing_cutout, cowboy_shot, long_sleeves, medium_breasts, open_mouth, stomach, sweat |
| 6 | 7 |  |  |  |  |  | detached_collar, playboy_bunny, rabbit_ears, 1girl, cleavage, fake_animal_ears, looking_at_viewer, rabbit_tail, simple_background, white_background, wrist_cuffs, bare_shoulders, blush, medium_breasts, solo, strapless_leotard, black_leotard, red_bowtie, black_pantyhose, closed_mouth, collarbone, full_body, hair_ornament, high_heels, holding, open_mouth, smile, white_footwear, white_leotard |
| 7 | 6 |  |  |  |  |  | 1girl, blush, hetero, nipples, open_mouth, sex, twintails, vaginal, 1boy, penis, solo_focus, female_pubic_hair, medium_breasts, bra, clothes_lift, cowgirl_position, cum_in_pussy, girl_on_top, mosaic_censoring, navel, nude, sweat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | blue_sky | day | looking_at_viewer | navel | open_mouth | outdoors | blush | cleavage | side-tie_bikini_bottom | yellow_bikini | armband | beach | cloud | front-tie_top | medium_breasts | necklace | striped_bikini | visor_cap | dress | hat | black_gloves | white_headwear | blue_skirt | holding | megaphone | short_shorts | white_gloves | white_shorts | brown_eyes | pleated_skirt | shorts_under_skirt | sleeveless_shirt | white_shirt | bare_shoulders | beret | parted_bangs | red_bow | very_long_hair | white_background | white_sailor_collar | closed_mouth | simple_background | school_uniform | twintails | bow | :d | skirt | low_twintails | grey_skirt | plaid_skirt | black-framed_eyewear | puffy_short_sleeves | sailor_collar | serafuku | pink_bow | bowtie | kneehighs | black_shorts | fingerless_gloves | midriff | black_jacket | crop_top | ponytail | belt | hair_ornament | open_jacket | shirt | sidelocks | clothing_cutout | cowboy_shot | long_sleeves | stomach | sweat | detached_collar | playboy_bunny | rabbit_ears | fake_animal_ears | rabbit_tail | wrist_cuffs | strapless_leotard | black_leotard | red_bowtie | black_pantyhose | collarbone | full_body | high_heels | white_footwear | white_leotard | hetero | nipples | sex | vaginal | 1boy | penis | solo_focus | female_pubic_hair | bra | clothes_lift | cowgirl_position | cum_in_pussy | girl_on_top | mosaic_censoring | nude |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-----------|:------|:--------------------|:--------|:-------------|:-----------|:--------|:-----------|:-------------------------|:----------------|:----------|:--------|:--------|:----------------|:-----------------|:-----------|:-----------------|:------------|:--------|:------|:---------------|:-----------------|:-------------|:----------|:------------|:---------------|:---------------|:---------------|:-------------|:----------------|:---------------------|:-------------------|:--------------|:-----------------|:--------|:---------------|:----------|:-----------------|:-------------------|:----------------------|:---------------|:--------------------|:-----------------|:------------|:------|:-----|:--------|:----------------|:-------------|:--------------|:-----------------------|:----------------------|:----------------|:-----------|:-----------|:---------|:------------|:---------------|:--------------------|:----------|:---------------|:-----------|:-----------|:-------|:----------------|:--------------|:--------|:------------|:------------------|:--------------|:---------------|:----------|:--------|:------------------|:----------------|:--------------|:-------------------|:--------------|:--------------|:--------------------|:----------------|:-------------|:------------------|:-------------|:------------|:-------------|:-----------------|:----------------|:---------|:----------|:------|:----------|:-------|:--------|:-------------|:--------------------|:------|:---------------|:-------------------|:---------------|:--------------|:-------------------|:-------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 16 |  |  |  |  |  | X | X | X | | | X | | X | | X | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | | | X | | X | | X | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 13 |  |  |  |  |  | X | | X | | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | | X | | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | X | X | | | X | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | X | | | X | X | X | | X | X | | | | | | | X | | | | | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | X | | | X | | X | | X | X | | | | | | | X | | | | | | | | | X | | | | | | | | | | X | | | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | | | | X | X | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
dim/SemEval_training_data_emotions | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: utterance_ID
dtype: int64
- name: text
dtype: string
- name: speaker
dtype: string
- name: emotion
dtype: string
- name: video_name
dtype: string
splits:
- name: train
num_bytes: 1198989.1453851238
num_examples: 12529
- name: test
num_bytes: 104309.85461487627
num_examples: 1090
download_size: 614184
dataset_size: 1303299.0
---
# Dataset Card for "SemEval_traindata_emotions"
How the dataset was obtained:
```python
import json

import datasets
dataset_path = "./SemEval-2024_Task3/training_data/Subtask_2_train.json"
dataset = json.loads(open(dataset_path).read())
print(len(dataset))
all_conversations = []
for item in dataset:
all_conversations.extend(item["conversation"])
print(len(all_conversations))
all_data = datasets.Dataset.from_list(all_conversations)
all_data = all_data.train_test_split(
test_size=0.08,
seed=42,
)
all_data.push_to_hub(
"dim/SemEval_training_data_emotions",
token=open("./hf_token").read(),
)
``` |
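As a quick sanity check, the `test_size=0.08` passed to `train_test_split` matches the split sizes recorded in the metadata (12,529 train / 1,090 test):

```python
# Split sizes taken from the dataset_info metadata above.
train_examples = 12529
test_examples = 1090

# Fraction of examples that ended up in the test split.
test_fraction = test_examples / (train_examples + test_examples)
print(round(test_fraction, 3))  # 0.08
```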
kan_hope | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- kn
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: KanHope
language_bcp47:
- en-IN
- kn-IN
tags:
- hope-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not-Hope
'1': Hope
splits:
- name: train
num_bytes: 494898
num_examples: 4940
- name: test
num_bytes: 65722
num_examples: 618
download_size: 568972
dataset_size: 560620
---
# Dataset Card for KanHope
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4904729
- **Repository:** [KanHope](https://github.com/adeepH/KanHope)
- **Paper:** [Hope speech detection in Under-resourced Kannada langauge](https://arxiv.org/abs/2108.04616)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Adeep Hande](adeeph18c@iiitt.ac.in)
### Dataset Summary
The KanHope dataset is a code-mixed Kannada-English dataset for hope speech detection. It consists of 6,176 user-generated comments scraped from the comments sections of YouTube videos and manually annotated as containing hope speech or not.
### Supported Tasks and Leaderboards
The task is to detect hope speech in code-mixed comments/posts in Dravidian languages (Kannada-English) collected from social media. A comment/post may contain more than one sentence, but the average length in the corpus is one sentence. Each comment/post is annotated at the comment/post level. The dataset also exhibits class imbalance, reflecting real-world conditions.
### Languages
Code-mixed text in Dravidian languages (Kannada-English).
## Dataset Structure
### Data Instances
An example from the Kannada dataset looks as follows:
| text | label |
| :------ | :----- |
| ��������� ��ͭ� heartly heltidini... plz avrigella namma nimmellara supprt beku | 0 (Non_hope speech) |
| Next song gu kuda alru andre evaga yar comment madidera alla alrru like madi share madi nam industry na next level ge togond hogaona. | 1 (Hope Speech) |
### Data Fields
Kannada
- `text`: Kannada-English code mixed comment.
- `label`: an integer, either 0 or 1, corresponding to the classes "Non_hope Speech" (0) and "Hope Speech" (1)
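A minimal sketch of decoding the integer label into a class name, mirroring the `class_label` names declared in the dataset metadata above (the `decode_label` helper is illustrative, not part of the dataset):

```python
# Class names as listed under `class_label` in the dataset metadata.
LABEL_NAMES = ["Not-Hope", "Hope"]

def decode_label(label: int) -> str:
    """Map a 0/1 integer label to its class name."""
    return LABEL_NAMES[label]

print(decode_label(0), decode_label(1))  # Not-Hope Hope
```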
### Data Splits
| | train | validation | test |
|---------|------:|-----------:|-----:|
| Kannada | 4941 | 618 | 617 |
## Dataset Creation
### Curation Rationale
Numerous methods have been developed in recent years to monitor the spread of negativity by removing vulgar, offensive, and hostile comments from social media platforms. However, comparatively little work focuses on embracing positivity and reinforcing supportive, reassuring content in online forums.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
Youtube users
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@misc{hande2021hope,
title={Hope Speech detection in under-resourced Kannada language},
author={Adeep Hande and Ruba Priyadharshini and Anbukkarasi Sampath and Kingston Pal Thamburaj and Prabakaran Chandran and Bharathi Raja Chakravarthi},
year={2021},
eprint={2108.04616},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@adeepH](https://github.com/adeepH) for adding this dataset. |
CyberHarem/yukong_starrail | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of yukong/御空/驭空/어공 (Honkai: Star Rail)
This is the dataset of yukong/御空/驭空/어공 (Honkai: Star Rail), containing 85 images and their tags.
The core tags of this character are `animal_ears, breasts, long_hair, purple_eyes, animal_ear_fluff, hair_ornament, large_breasts, fox_ears, tail, hair_between_eyes, bangs, green_hair, fox_tail`, which are pruned in this dataset.
Images were crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 85 | 162.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukong_starrail/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 85 | 76.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukong_starrail/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 218 | 174.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukong_starrail/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 85 | 135.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukong_starrail/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 218 | 262.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukong_starrail/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yukong_starrail',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfit clusters may be mined from here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, looking_at_viewer, nipples, blush, solo, pussy, thighs, completely_nude, navel, smile, blue_hair, collarbone, mosaic_censoring, ass, blue_eyes, closed_mouth, lying |
| 1 | 27 |  |  |  |  |  | 1girl, cleavage, solo, looking_at_viewer, bare_shoulders, closed_mouth, smile, dress, fox_girl, sitting, thighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | nipples | blush | solo | pussy | thighs | completely_nude | navel | smile | blue_hair | collarbone | mosaic_censoring | ass | blue_eyes | closed_mouth | lying | cleavage | bare_shoulders | dress | fox_girl | sitting |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:----------|:--------|:-------|:--------|:---------|:------------------|:--------|:--------|:------------|:-------------|:-------------------|:------|:------------|:---------------|:--------|:-----------|:-----------------|:--------|:-----------|:----------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | |
| 1 | 27 |  |  |  |  |  | X | X | | | X | | X | | | X | | | | | | X | | X | X | X | X | X |
|
kpriyanshu256/MultiTabQA-geoquery | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
dtype: string
- name: table_names
sequence: string
- name: tables
sequence: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 36548405
num_examples: 530
- name: validation
num_bytes: 3207759
num_examples: 49
- name: test
num_bytes: 17902051
num_examples: 253
download_size: 10391921
dataset_size: 57658215
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Codec-SUPERB/musdb18_extract_unit | ---
dataset_info:
features:
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 282910400
num_examples: 750
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 282910400
num_examples: 750
- name: academicodec_hifi_24k_320d
num_bytes: 424348160
num_examples: 750
- name: audiodec_24k_320d
num_bytes: 905285600
num_examples: 750
- name: dac_16k
num_bytes: 1728406080
num_examples: 750
- name: dac_24k
num_bytes: 4808109920
num_examples: 750
- name: dac_44k
num_bytes: 1419206040
num_examples: 750
- name: encodec_24k
num_bytes: 212202560
num_examples: 750
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 2263243360
num_examples: 750
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 2263243360
num_examples: 750
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 2263240800
num_examples: 750
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 1131736160
num_examples: 750
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 2263240800
num_examples: 750
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 2263240800
num_examples: 750
- name: speech_tokenizer_16k
num_bytes: 565835040
num_examples: 750
download_size: 3275649498
dataset_size: 23077159480
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k
path: data/encodec_24k-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
---
|
isashap/resumenew | ---
language:
- en
--- |