datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
jasonkstevens/pippa-llama2-chat | ---
license: agpl-3.0
---
|
Thermostatic/ShareGPT_NeuralTranslate_v0.1 | ---
license: mit
---
|
zixianma/mnms | ---
license: mit
configs:
- config_name: default
data_files:
- split: test_human_verified_filtered
path: test_human_verified_filtered.json
- split: test_human_verified
path: test_human_verified.json
- split: test_raw
path: test_raw.json
task_categories:
- text-generation
language:
- en
pretty_name: m&ms
size_categories:
- 1K<n<10K
---
# Dataset Card for m&ms
m&ms is a dataset of multi-step multi-modal tasks and corresponding task plans.
<img src="dataset_examples.png" width=1000>
<!--  -->
## Dataset Details
This dataset contains 4K+ multi-step multi-modal tasks involving 33 tools that include 13 multi-modal models, 9 (free) public APIs, and 11 image processing modules.
For each of these task queries, we provide automatically generated plans using this realistic toolset.
We further provide a high-quality subset of 1,565 human-verified task plans and 882 human-verified, filtered, and correctly executable plans.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [https://github.com/RAIVNLab/mnms](https://github.com/RAIVNLab/mnms)
- **Paper:** [https://arxiv.org/abs/2403.11085](https://arxiv.org/abs/2403.11085)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The intended use of this dataset is to evaluate large language model (LLM) agents on their tool-use abilities for multi-step multi-modal tasks.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
To use this dataset, you can first obtain plan predictions from LLM agents on the user requests in either JSON or Python code format,
and then evaluate the predicted plans against the label plans or code in this dataset.
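Evaluation against the label plans can be as simple as comparing tool sequences. Below is a minimal sketch; the actual m&ms plan schema and official metrics may differ, and a plan is assumed here to be an ordered list of tool names.

```python
def plan_matches(predicted, label):
    """Exact-match comparison of two sequential tool plans (case-insensitive)."""
    return [p.lower() for p in predicted] == [l.lower() for l in label]


def tool_f1(predicted, label):
    """Set-based F1 over tool names, ignoring order and repetition."""
    pred, gold = set(predicted), set(label)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

The exact-match metric rewards recovering the full sequential plan, while the set-based F1 gives partial credit when only some tools are predicted correctly.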
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset should not be used for training models.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
The data inputs to the plans can be accessed [here](https://github.com/RAIVNLab/mnms/tree/main/execution/data). They are sampled from various existing datasets, including ImageNet, SST-2, SQuAD, C4, CNN Daily News,
COCO, COCO-Text v2.0, GQA, Visual Genome, MagicBrush, and LibriSpeech.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
<img src="dataset_gen.png" width=1000>
<!--  -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Our dataset has the following limitations:
- The user requests might be biased as they are generated by GPT-4 and do not necessarily represent real-world user requests;
- The task plans are all sequential and require 1-3 tools to solve.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{ma2024mms,
title={m&m's: A Benchmark to Evaluate Tool-Use for multi-step multi-modal Tasks},
author={Zixian Ma and Weikai Huang and Jieyu Zhang and Tanmay Gupta and Ranjay Krishna},
year={2024},
eprint={2403.11085},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
florianLabaye/dataset_relation_extraction_2 | ---
dataset_info:
features:
- name: triplets
sequence: string
- name: passage
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 9874680.793757712
num_examples: 24533
download_size: 10748433
dataset_size: 9874680.793757712
---
# Dataset Card for "dataset_relation_extraction_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arabic_pos_dialect | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ar
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- extended
task_categories:
- token-classification
task_ids:
- part-of-speech
pretty_name: Arabic POS Dialect
dataset_info:
- config_name: egy
features:
- name: fold
dtype: int32
- name: subfold
dtype: string
- name: words
sequence: string
- name: segments
sequence: string
- name: pos_tags
sequence: string
splits:
- name: train
num_bytes: 269629
num_examples: 350
download_size: 89684
dataset_size: 269629
- config_name: glf
features:
- name: fold
dtype: int32
- name: subfold
dtype: string
- name: words
sequence: string
- name: segments
sequence: string
- name: pos_tags
sequence: string
splits:
- name: train
num_bytes: 239883
num_examples: 350
download_size: 89178
dataset_size: 239883
- config_name: lev
features:
- name: fold
dtype: int32
- name: subfold
dtype: string
- name: words
sequence: string
- name: segments
sequence: string
- name: pos_tags
sequence: string
splits:
- name: train
num_bytes: 263102
num_examples: 350
download_size: 97055
dataset_size: 263102
- config_name: mgr
features:
- name: fold
dtype: int32
- name: subfold
dtype: string
- name: words
sequence: string
- name: segments
sequence: string
- name: pos_tags
sequence: string
splits:
- name: train
num_bytes: 245717
num_examples: 350
download_size: 90503
dataset_size: 245717
configs:
- config_name: egy
data_files:
- split: train
path: egy/train-*
- config_name: glf
data_files:
- split: train
path: glf/train-*
- config_name: lev
data_files:
- split: train
path: lev/train-*
- config_name: mgr
data_files:
- split: train
path: mgr/train-*
---
# Dataset Card for Arabic POS Dialect
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://alt.qcri.org/resources/da_resources/
- **Repository:** https://github.com/qcri/dialectal_arabic_resources
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2018/pdf/562.pdf
- **Contacts:**
- Ahmed Abdelali < aabdelali @ hbku dot edu dot qa >
- Kareem Darwish < kdarwish @ hbku dot edu dot qa >
- Hamdy Mubarak < hmubarak @ hbku dot edu dot qa >
### Dataset Summary
This dataset was created to support part of speech (POS) tagging in dialects of Arabic. It contains sets of 350 manually segmented and POS tagged tweets for each of four dialects: Egyptian, Levantine, Gulf, and Maghrebi.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Arabic token segmentation and part-of-speech tagging in Arabic dialects. Success on this task is typically measured by achieving high accuracy on a held-out test set. Darwish et al. (2018) train a CRF model across all four dialects and achieve an average accuracy of 89.3%.
### Languages
The BCP-47 code is ar-Arab. The dataset consists of four dialects of Arabic, Egyptian (EGY), Levantine (LEV), Gulf (GLF), and Maghrebi (MGR), written in Arabic script.
## Dataset Structure
### Data Instances
Below is a partial example from the Egyptian set:
```
- `Fold`: 4
- `SubFold`: A
- `Word`: [ليه, لما, تحب, حد, من, قلبك, ...]
- `Segmentation`: [ليه, لما, تحب, حد, من, قلب+ك, ...]
- `POS`: [PART, PART, V, NOUN, PREP, NOUN+PRON, ...]
```
### Data Fields
The `fold` and `subfold` fields refer to the cross-validation splits used by Darwish et al., which can be generated using this [script](https://github.com/qcri/dialectal_arabic_resources/blob/master/generate_splits.sh).
- `fold`: An int32 indicating which fold the instance was in during cross-validation
- `subfold`: A string, either 'A' or 'B', indicating which subfold the instance was in during cross-validation
- `words`: A sequence of strings of the unsegmented tokens
- `segments`: A sequence of strings consisting of the segments of each word, separated by '+' if there is more than one segment
- `pos_tags`: A sequence of strings of the part-of-speech tags of the segments, separated by '+' if there is more than one segment
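The '+' convention can be unpacked programmatically to pair each segment with its tag. A minimal sketch, using the example word from the data instance above (anything beyond splitting on '+' is an assumption, not part of the dataset's API):

```python
def align_segments(segmented_word, pos_tag):
    """Pair each '+'-separated segment of a word with its POS tag.

    E.g. the card's example word 'قلب+ك' is tagged 'NOUN+PRON':
    the stem gets NOUN and the pronoun clitic gets PRON.
    """
    segments = segmented_word.split("+")
    tags = pos_tag.split("+")
    if len(segments) != len(tags):
        raise ValueError("segment/tag counts must match")
    return list(zip(segments, tags))
```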
The POS tags consist of a set developed by [Darwish et al. (2017)](https://www.aclweb.org/anthology/W17-1316.pdf) for Modern Standard Arabic (MSA) plus an additional 6 tags (2 dialect-specific tags and 4 tweet-specific tags).
| Tag | Purpose | Description |
| ----- | ------ | ----- |
| ADV | MSA | Adverb |
| ADJ | MSA | Adjective |
| CONJ | MSA | Conjunction |
| DET | MSA | Determiner |
| NOUN | MSA | Noun |
| NSUFF | MSA | Noun suffix |
| NUM | MSA | Number |
| PART | MSA | Particle |
| PREP | MSA | Preposition |
| PRON | MSA | Pronoun |
| PUNC | MSA | Punctuation |
| V | MSA | Verb |
| ABBREV | MSA | Abbreviation |
| CASE | MSA | Alef of tanween fatha |
| JUS | MSA | Jussification attached to verbs |
| VSUFF | MSA | Verb Suffix |
| FOREIGN | MSA | Non-Arabic as well as non-MSA words |
| FUT_PART | MSA | Future particle ("s" prefix and "swf") |
| PROG_PART | Dialect | Progressive particle |
| NEG_PART | Dialect | Negation particle |
| HASH | Tweet | Hashtag |
| EMOT | Tweet | Emoticon/Emoji |
| MENTION | Tweet | Mention |
| URL | Tweet | URL |
### Data Splits
The dataset is split by dialect.
| Dialect | Tweets | Words |
| ----- | ------ | ----- |
| Egyptian (EGY) | 350 | 7481 |
| Levantine (LEV) | 350 | 7221 |
| Gulf (GLF) | 350 | 6767 |
| Maghrebi (MGR) | 350 | 6400 |
## Dataset Creation
### Curation Rationale
This dataset was created to address the lack of computational resources available for dialects of Arabic. These dialects are typically used in speech, while written forms of the language are typically in Modern Standard Arabic. Social media, however, has provided a venue for people to use dialects in written format.
### Source Data
This dataset builds off of the work of [Eldesouki et al. (2017)](https://arxiv.org/pdf/1708.05891.pdf) and [Samih et al. (2017b)](https://www.aclweb.org/anthology/K17-1043.pdf) who originally collected the tweets.
#### Initial Data Collection and Normalization
They started with 175 million Arabic tweets returned by the Twitter API using the query "lang:ar" in March 2014. They then filtered this set using author-identified locations and tokens that are unique to each dialect. Finally, they had native speakers of each dialect select 350 tweets that were heavily accented.
#### Who are the source language producers?
The source language producers are people who posted on Twitter in Arabic using dialectal words from countries where the dialects of interest were spoken, as identified in [Mubarak and Darwish (2014)](https://www.aclweb.org/anthology/W14-3601.pdf).
### Annotations
#### Annotation process
The segmentation guidelines are available at https://alt.qcri.org/resources1/da_resources/seg-guidelines.pdf. The tagging guidelines are not provided, but Darwish et al. note that there were multiple rounds of quality control and revision.
#### Who are the annotators?
The POS tags were annotated by native speakers of each dialect. Further information is not known.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Darwish et al. find that the accuracy on the Maghrebi dataset suffered the most when the training set was from another dialect, and conversely that training on Maghrebi yielded the worst results for all the other dialects. They suggest that Egyptian, Levantine, and Gulf may be more similar to each other, with Maghrebi the most dissimilar to all of them. They also find that training on Modern Standard Arabic (MSA) and testing on dialects yielded significantly lower results than training on dialects and testing on MSA. This suggests that dialectal variation should be a significant consideration for future work in Arabic NLP applications, particularly when working with social media text.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was curated by Kareem Darwish, Hamdy Mubarak, Mohamed Eldesouki and Ahmed Abdelali with the Qatar Computing Research Institute (QCRI), Younes Samih and Laura Kallmeyer with the University of Düsseldorf, Randah Alharbi and Walid Magdy with the University of Edinburgh, and Mohammed Attia with Google. No funding information was included.
### Licensing Information
This dataset is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
Kareem Darwish, Hamdy Mubarak, Ahmed Abdelali, Mohamed Eldesouki, Younes Samih, Randah Alharbi, Mohammed Attia, Walid Magdy and Laura Kallmeyer (2018) Multi-Dialect Arabic POS Tagging: A CRF Approach. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), May 7-12, 2018. Miyazaki, Japan.
```
@InProceedings{DARWISH18.562,
author = {Kareem Darwish and Hamdy Mubarak and Ahmed Abdelali and Mohamed Eldesouki and Younes Samih and Randah Alharbi and Mohammed Attia and Walid Magdy and Laura Kallmeyer},
title = {Multi-Dialect Arabic POS Tagging: A CRF Approach},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year = {2018},
month = {may},
date = {7-12},
location = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
address = {Paris, France},
isbn = {979-10-95546-00-9},
language = {english}
}
```
### Contributions
Thanks to [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset. |
mesolitica/translated-CodeUltraFeedback | ---
language:
- ms
---
# Translated CodeUltraFeedback
Original repository: https://huggingface.co/datasets/coseal/CodeUltraFeedback
Translated using Malaya T5; source code at https://github.com/mesolitica/malaysian-dataset/tree/master/chatbot/CodeUltraFeedback
## Notes
1. We rejected some translated text based on the ratio of unique word count to total word count. |
safgasgfsa/Bratishkin-Voice | ---
license: other
---
|
climatebert/climate_sentiment | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ClimateSentiment
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': risk
'1': neutral
'2': opportunity
splits:
- name: train
num_bytes: 492077
num_examples: 1000
- name: test
num_bytes: 174265
num_examples: 320
download_size: 373638
dataset_size: 666342
---
# Dataset Card for climate_sentiment
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for classifying the sentiment (risk, neutral, or opportunity) of climate-related paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a ternary sentiment classification task: whether a given climate-related paragraph expresses risk, neutral, or opportunity sentiment.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a climate-related paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> risk, 1 -> neutral, 2 -> opportunity)
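A minimal sketch of decoding the integer labels, using the label names declared in the dataset's `class_label` configuration:

```python
# Label names as declared in the dataset's class_label config.
LABEL_NAMES = {0: "risk", 1: "neutral", 2: "opportunity"}


def decode_label(label_id):
    """Map an integer label from the dataset to its sentiment name."""
    return LABEL_NAMES[label_id]
```

For example, the data instance above carries `'label': 1`, which decodes to "neutral".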
### Data Splits
The dataset is split into:
- train: 1,000
- test: 320
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal or sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. |
CyberHarem/sheila_majonotabitabi | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Sheila
This is the dataset of Sheila, containing 67 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 67 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 149 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 185 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 67 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 67 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 67 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 149 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 149 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 139 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 185 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 185 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
torchgeo/skippd | ---
license: cc-by-4.0
size_categories:
- 100K<n<1M
---
2017-2019 Sky Images and Photovoltaic Power Generation Dataset for Short-term Solar Forecasting (Stanford Benchmark).
Nie, Y., Li, X., Scott, A., Sun, Y., Venugopal, V., and Brandt, A. (2022). 2017-2019 Sky Images and Photovoltaic Power Generation Dataset for Short-term Solar Forecasting (Stanford Benchmark). Stanford Digital Repository. https://purl.stanford.edu/dj417rh1007 |
luffycodes/DUPEd_StrategyQA | ---
license: cc-by-4.0
---
**Dataset for the paper: Deduction under Perturbed Evidence: Probing Student Simulation Capabilities of Large Language Models**
You can find the paper [here](https://arxiv.org/abs/2305.14507).
***Note***
QIDs ending in math_dupe or nlp_dupe refer to math and NLP perturbations, respectively.
Each QID can be mapped back to the original StrategyQA dataset, which can be downloaded [here](https://allenai.org/data/strategyqa).
Please note that the answer needs to be reversed with respect to the facts; it refers to the original answer field in the StrategyQA dataset.
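The QID mapping and answer reversal described above can be sketched as follows. The exact QID string format (an underscore before the math_dupe/nlp_dupe suffix) and the boolean answer type are assumptions for illustration, not confirmed by the dataset.

```python
# Suffixes stated in the note above; the leading underscore is an assumption.
_DUPE_SUFFIXES = ("_math_dupe", "_nlp_dupe")


def original_qid(dupe_qid):
    """Strip the perturbation suffix to recover the StrategyQA QID."""
    for suffix in _DUPE_SUFFIXES:
        if dupe_qid.endswith(suffix):
            return dupe_qid[: -len(suffix)]
    return dupe_qid


def original_answer(dupe_answer):
    """The answer is reversed relative to the original StrategyQA
    'answer' field, so invert it to recover the original."""
    return not dupe_answer
```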
## Citation
If you use this dataset in your work, please cite:
```
@article{sonkar2023deduction,
title={Deduction under Perturbed Evidence: Probing Student Simulation Capabilities of Large Language Models},
author={Sonkar, Shashank and Baraniuk, Richard G},
journal={arXiv preprint arXiv:2305.14507},
year={2023}
}
``` |
RikRaes/CV_13_FT_75_25_1 | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accents
dtype: string
- name: variant
dtype: 'null'
- name: locale
dtype: string
- name: segment
dtype: string
splits:
- name: train
num_bytes: 1363205491.3122818
num_examples: 5000
- name: val
num_bytes: 272641098.26245636
num_examples: 1000
- name: test
num_bytes: 545282196.5249127
num_examples: 2000
download_size: 824807117
dataset_size: 2181128786.099651
---
# Dataset Card for "CV_13_FT_75_25_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
p208p2002/zhtw-sentence-error-correction | ---
language:
- zh
configs:
- config_name: alpha
data_files:
- split: train
path: "alpha/out.jsonl"
- config_name: beta
data_files:
- split: train
path: "beta/out.jsonl"
- config_name: gamma
data_files:
- split: train
path: "gamma/out.jsonl"
---
# Chinese Typo Correction Dataset
An error-correction dataset generated from Wikipedia using rules and dictionaries.
Error types covered: random typos, near-homophone typos, missing characters, and redundant characters.
Dataset generation library: [p208p2002/zh-mistake-text-gen](https://github.com/p208p2002/zh-mistake-text-gen)
### Subsets
- alpha: 95% erroneous, 5% unchanged. A sentence may contain multiple errors.
- beta: 50% erroneous, 50% unchanged. Each sentence contains exactly one error.
- gamma: 100% erroneous. A sentence may contain multiple errors. |
jorgeortizfuentes/spanish_attitude_conll2003 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: att_tags
sequence:
class_label:
names:
'0': B-propriety (J3)
'1': I-Affect
'2': B-tenacity (J3)
'3': I-tenacity (J3)
'4': B-capacity (J3)
'5': I-Negative
'6': I-Social Sanction (J2)
'7': B-Social Sanction (J2)
'8': B-Social Esteem (J2)
'9': I-capacity (J3)
'10': I-normality (J3)
'11': B-normality (J3)
'12': B-Judgment (J1)
'13': B-Affect
'14': I-Judgment (J1)
'15': I-Appreciation
'16': B-Appreciation
'17': I-propriety (J3)
'18': I-veracity (J3)
'19': B-Negative
'20': I-Social Esteem (J2)
'21': B-veracity (J3)
'22': O
splits:
- name: train
num_bytes: 806686.2274741507
num_examples: 1083
- name: validation
num_bytes: 201857.77252584934
num_examples: 271
download_size: 272088
dataset_size: 1008544.0
---
# Dataset Card for "spanish_attitude_conll2003"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kenhktsui/refinedweb-3m_quality_score_v1 | ---
dataset_info:
features:
- name: text
dtype: string
- name: quality_score_v1
dtype: float64
splits:
- name: train
num_bytes: 7858920949
num_examples: 3000000
download_size: 4923434231
dataset_size: 7858920949
task_categories:
- text-generation
language:
- en
---
# Dataset Card for "refinedweb-3m_quality_score_v1"
Adding quality score v1 to [mattymchen/refinedweb-3m](https://huggingface.co/datasets/mattymchen/refinedweb-3m)
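A minimal sketch of filtering rows by the `quality_score_v1` column; the threshold value is illustrative, not a recommendation from the dataset author.

```python
def filter_by_quality(rows, threshold=0.5):
    """Keep rows whose quality_score_v1 meets the threshold.

    Each row is a dict with 'text' and 'quality_score_v1' keys,
    matching the feature names in this dataset.
    """
    return [r for r in rows if r["quality_score_v1"] >= threshold]
```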
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Mihaiii__Metis-0.3 | ---
pretty_name: Evaluation run of Mihaiii/Metis-0.3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Mihaiii/Metis-0.3](https://huggingface.co/Mihaiii/Metis-0.3) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Mihaiii__Metis-0.3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-18T10:29:51.346737](https://huggingface.co/datasets/open-llm-leaderboard/details_Mihaiii__Metis-0.3/blob/main/results_2023-12-18T10-29-51.346737.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6087596961823534,\n\
\ \"acc_stderr\": 0.033143419693783545,\n \"acc_norm\": 0.6135679004202929,\n\
\ \"acc_norm_stderr\": 0.03381506918300307,\n \"mc1\": 0.5263157894736842,\n\
\ \"mc1_stderr\": 0.017479241161975453,\n \"mc2\": 0.6755936296533276,\n\
\ \"mc2_stderr\": 0.015113334433722326\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5819112627986348,\n \"acc_stderr\": 0.01441398839699608,\n\
\ \"acc_norm\": 0.6271331058020477,\n \"acc_norm_stderr\": 0.014131176760131169\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6609241187014538,\n\
\ \"acc_stderr\": 0.004724281487819376,\n \"acc_norm\": 0.8480382393945429,\n\
\ \"acc_norm_stderr\": 0.0035825015965645452\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n\
\ \"acc_stderr\": 0.042763494943765995,\n \"acc_norm\": 0.5703703703703704,\n\
\ \"acc_norm_stderr\": 0.042763494943765995\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.618421052631579,\n \"acc_stderr\": 0.039531733777491945,\n\
\ \"acc_norm\": 0.618421052631579,\n \"acc_norm_stderr\": 0.039531733777491945\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n\
\ \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6736111111111112,\n\
\ \"acc_stderr\": 0.03921067198982266,\n \"acc_norm\": 0.6736111111111112,\n\
\ \"acc_norm_stderr\": 0.03921067198982266\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n\
\ \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.49,\n \
\ \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5780346820809249,\n\
\ \"acc_stderr\": 0.0376574669386515,\n \"acc_norm\": 0.5780346820809249,\n\
\ \"acc_norm_stderr\": 0.0376574669386515\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n\
\ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467382,\n\
\ \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467382\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.41228070175438597,\n\
\ \"acc_stderr\": 0.046306532033665956,\n \"acc_norm\": 0.41228070175438597,\n\
\ \"acc_norm_stderr\": 0.046306532033665956\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6137931034482759,\n \"acc_stderr\": 0.04057324734419035,\n\
\ \"acc_norm\": 0.6137931034482759,\n \"acc_norm_stderr\": 0.04057324734419035\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.373015873015873,\n \"acc_stderr\": 0.02490699045899257,\n \"acc_norm\"\
: 0.373015873015873,\n \"acc_norm_stderr\": 0.02490699045899257\n },\n\
\ \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n\
\ \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n\
\ \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6548387096774193,\n\
\ \"acc_stderr\": 0.02704574657353433,\n \"acc_norm\": 0.6548387096774193,\n\
\ \"acc_norm_stderr\": 0.02704574657353433\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\"\
: 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7393939393939394,\n \"acc_stderr\": 0.034277431758165236,\n\
\ \"acc_norm\": 0.7393939393939394,\n \"acc_norm_stderr\": 0.034277431758165236\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386417,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386417\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.026499057701397443,\n\
\ \"acc_norm\": 0.8393782383419689,\n \"acc_norm_stderr\": 0.026499057701397443\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5794871794871795,\n \"acc_stderr\": 0.025028610276710862,\n\
\ \"acc_norm\": 0.5794871794871795,\n \"acc_norm_stderr\": 0.025028610276710862\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3074074074074074,\n \"acc_stderr\": 0.028133252578815632,\n \
\ \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.028133252578815632\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.030868682604121626,\n\
\ \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.030868682604121626\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8018348623853211,\n \"acc_stderr\": 0.017090573804217902,\n \"\
acc_norm\": 0.8018348623853211,\n \"acc_norm_stderr\": 0.017090573804217902\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.46296296296296297,\n \"acc_stderr\": 0.03400603625538271,\n \"\
acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.03400603625538271\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7549019607843137,\n \"acc_stderr\": 0.030190282453501954,\n \"\
acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.030190282453501954\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \
\ \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6188340807174888,\n\
\ \"acc_stderr\": 0.03259625118416827,\n \"acc_norm\": 0.6188340807174888,\n\
\ \"acc_norm_stderr\": 0.03259625118416827\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n\
\ \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8264462809917356,\n \"acc_stderr\": 0.0345727283691767,\n \"acc_norm\"\
: 0.8264462809917356,\n \"acc_norm_stderr\": 0.0345727283691767\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.03487825168497892,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.03487825168497892\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n\
\ \"acc_stderr\": 0.02220930907316561,\n \"acc_norm\": 0.8675213675213675,\n\
\ \"acc_norm_stderr\": 0.02220930907316561\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \
\ \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.014866821664709588,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.014866821664709588\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6907514450867052,\n \"acc_stderr\": 0.024883140570071762,\n\
\ \"acc_norm\": 0.6907514450867052,\n \"acc_norm_stderr\": 0.024883140570071762\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3139664804469274,\n\
\ \"acc_stderr\": 0.015521923933523646,\n \"acc_norm\": 0.3139664804469274,\n\
\ \"acc_norm_stderr\": 0.015521923933523646\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6993464052287581,\n \"acc_stderr\": 0.02625605383571896,\n\
\ \"acc_norm\": 0.6993464052287581,\n \"acc_norm_stderr\": 0.02625605383571896\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n\
\ \"acc_stderr\": 0.02616058445014045,\n \"acc_norm\": 0.6945337620578779,\n\
\ \"acc_norm_stderr\": 0.02616058445014045\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495033,\n\
\ \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495033\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4574468085106383,\n \"acc_stderr\": 0.029719281272236844,\n \
\ \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.029719281272236844\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4322033898305085,\n\
\ \"acc_stderr\": 0.012652297777114968,\n \"acc_norm\": 0.4322033898305085,\n\
\ \"acc_norm_stderr\": 0.012652297777114968\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n\
\ \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6290849673202614,\n \"acc_stderr\": 0.019542101564854128,\n \
\ \"acc_norm\": 0.6290849673202614,\n \"acc_norm_stderr\": 0.019542101564854128\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\
\ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \
\ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.028920583220675596,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.028920583220675596\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7412935323383084,\n\
\ \"acc_stderr\": 0.03096590312357303,\n \"acc_norm\": 0.7412935323383084,\n\
\ \"acc_norm_stderr\": 0.03096590312357303\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653693,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653693\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n\
\ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n\
\ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5263157894736842,\n\
\ \"mc1_stderr\": 0.017479241161975453,\n \"mc2\": 0.6755936296533276,\n\
\ \"mc2_stderr\": 0.015113334433722326\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7726913970007893,\n \"acc_stderr\": 0.011778612167091088\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3934799090219864,\n \
\ \"acc_stderr\": 0.01345631582840459\n }\n}\n```"
repo_url: https://huggingface.co/Mihaiii/Metis-0.3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|arc:challenge|25_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|gsm8k|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hellaswag|10_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-18T10-29-51.346737.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-18T10-29-51.346737.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- '**/details_harness|winogrande|5_2023-12-18T10-29-51.346737.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-18T10-29-51.346737.parquet'
- config_name: results
data_files:
- split: 2023_12_18T10_29_51.346737
path:
- results_2023-12-18T10-29-51.346737.parquet
- split: latest
path:
- results_2023-12-18T10-29-51.346737.parquet
---
# Dataset Card for Evaluation run of Mihaiii/Metis-0.3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Mihaiii/Metis-0.3](https://huggingface.co/Mihaiii/Metis-0.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Mihaiii__Metis-0.3",
"harness_winogrande_5",
         split="latest")
```
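The timestamped split names are derived from the run timestamp, with the `-` and `:` characters replaced by `_` (compare the split name `2023_12_18T10_29_51.346737` in the configs above with the `2023-12-18T10-29-51.346737` suffix of the parquet files). A small helper, assuming that convention holds:

```python
def timestamp_to_split(ts: str) -> str:
    """Map a run timestamp (as reported in the results) to its split name."""
    return ts.replace("-", "_").replace(":", "_")

# e.g. for loading a specific run instead of the "latest" split:
print(timestamp_to_split("2023-12-18T10:29:51.346737"))
# → 2023_12_18T10_29_51.346737
```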
## Latest results
These are the [latest results from run 2023-12-18T10:29:51.346737](https://huggingface.co/datasets/open-llm-leaderboard/details_Mihaiii__Metis-0.3/blob/main/results_2023-12-18T10-29-51.346737.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" configuration and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6087596961823534,
"acc_stderr": 0.033143419693783545,
"acc_norm": 0.6135679004202929,
"acc_norm_stderr": 0.03381506918300307,
"mc1": 0.5263157894736842,
"mc1_stderr": 0.017479241161975453,
"mc2": 0.6755936296533276,
"mc2_stderr": 0.015113334433722326
},
"harness|arc:challenge|25": {
"acc": 0.5819112627986348,
"acc_stderr": 0.01441398839699608,
"acc_norm": 0.6271331058020477,
"acc_norm_stderr": 0.014131176760131169
},
"harness|hellaswag|10": {
"acc": 0.6609241187014538,
"acc_stderr": 0.004724281487819376,
"acc_norm": 0.8480382393945429,
"acc_norm_stderr": 0.0035825015965645452
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.042763494943765995,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.042763494943765995
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.618421052631579,
"acc_stderr": 0.039531733777491945,
"acc_norm": 0.618421052631579,
"acc_norm_stderr": 0.039531733777491945
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.02881561571343211,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.02881561571343211
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6736111111111112,
"acc_stderr": 0.03921067198982266,
"acc_norm": 0.6736111111111112,
"acc_norm_stderr": 0.03921067198982266
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956913,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956913
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5780346820809249,
"acc_stderr": 0.0376574669386515,
"acc_norm": 0.5780346820809249,
"acc_norm_stderr": 0.0376574669386515
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.03261936918467382,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.03261936918467382
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.41228070175438597,
"acc_stderr": 0.046306532033665956,
"acc_norm": 0.41228070175438597,
"acc_norm_stderr": 0.046306532033665956
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6137931034482759,
"acc_stderr": 0.04057324734419035,
"acc_norm": 0.6137931034482759,
"acc_norm_stderr": 0.04057324734419035
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.02490699045899257,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.02490699045899257
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6548387096774193,
"acc_stderr": 0.02704574657353433,
"acc_norm": 0.6548387096774193,
"acc_norm_stderr": 0.02704574657353433
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7393939393939394,
"acc_stderr": 0.034277431758165236,
"acc_norm": 0.7393939393939394,
"acc_norm_stderr": 0.034277431758165236
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386417,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386417
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.026499057701397443,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.026499057701397443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5794871794871795,
"acc_stderr": 0.025028610276710862,
"acc_norm": 0.5794871794871795,
"acc_norm_stderr": 0.025028610276710862
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3074074074074074,
"acc_stderr": 0.028133252578815632,
"acc_norm": 0.3074074074074074,
"acc_norm_stderr": 0.028133252578815632
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.030868682604121626,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.030868682604121626
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8018348623853211,
"acc_stderr": 0.017090573804217902,
"acc_norm": 0.8018348623853211,
"acc_norm_stderr": 0.017090573804217902
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.03400603625538271,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.03400603625538271
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.030190282453501954,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.030190282453501954
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7510548523206751,
"acc_stderr": 0.028146970599422644,
"acc_norm": 0.7510548523206751,
"acc_norm_stderr": 0.028146970599422644
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6188340807174888,
"acc_stderr": 0.03259625118416827,
"acc_norm": 0.6188340807174888,
"acc_norm_stderr": 0.03259625118416827
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8264462809917356,
"acc_stderr": 0.0345727283691767,
"acc_norm": 0.8264462809917356,
"acc_norm_stderr": 0.0345727283691767
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.03487825168497892,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.03487825168497892
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.04726835553719099,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.04726835553719099
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.02220930907316561,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.02220930907316561
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.014866821664709588,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.014866821664709588
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6907514450867052,
"acc_stderr": 0.024883140570071762,
"acc_norm": 0.6907514450867052,
"acc_norm_stderr": 0.024883140570071762
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3139664804469274,
"acc_stderr": 0.015521923933523646,
"acc_norm": 0.3139664804469274,
"acc_norm_stderr": 0.015521923933523646
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6993464052287581,
"acc_stderr": 0.02625605383571896,
"acc_norm": 0.6993464052287581,
"acc_norm_stderr": 0.02625605383571896
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.02616058445014045,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.02616058445014045
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7098765432098766,
"acc_stderr": 0.025251173936495033,
"acc_norm": 0.7098765432098766,
"acc_norm_stderr": 0.025251173936495033
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4574468085106383,
"acc_stderr": 0.029719281272236844,
"acc_norm": 0.4574468085106383,
"acc_norm_stderr": 0.029719281272236844
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4322033898305085,
"acc_stderr": 0.012652297777114968,
"acc_norm": 0.4322033898305085,
"acc_norm_stderr": 0.012652297777114968
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6066176470588235,
"acc_stderr": 0.029674288281311155,
"acc_norm": 0.6066176470588235,
"acc_norm_stderr": 0.029674288281311155
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6290849673202614,
"acc_stderr": 0.019542101564854128,
"acc_norm": 0.6290849673202614,
"acc_norm_stderr": 0.019542101564854128
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.028920583220675596,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.028920583220675596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7412935323383084,
"acc_stderr": 0.03096590312357303,
"acc_norm": 0.7412935323383084,
"acc_norm_stderr": 0.03096590312357303
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653693,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653693
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5263157894736842,
"mc1_stderr": 0.017479241161975453,
"mc2": 0.6755936296533276,
"mc2_stderr": 0.015113334433722326
},
"harness|winogrande|5": {
"acc": 0.7726913970007893,
"acc_stderr": 0.011778612167091088
},
"harness|gsm8k|5": {
"acc": 0.3934799090219864,
"acc_stderr": 0.01345631582840459
}
}
```
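Once loaded, the per-task entries above can be post-processed locally. As a minimal sketch (the sample dict copies a few entries verbatim from the results above; task keys follow the `harness|<task>|<n_shot>` naming scheme), here is one way to average the MMLU (`hendrycksTest`) subtask scores:

```python
# Sample of the results dict shown above (values copied verbatim).
results = {
    "harness|arc:challenge|25": {"acc_norm": 0.6271331058020477},
    "harness|hellaswag|10": {"acc_norm": 0.8480382393945429},
    "harness|hendrycksTest-abstract_algebra|5": {"acc_norm": 0.31},
    "harness|hendrycksTest-anatomy|5": {"acc_norm": 0.5703703703703704},
    "harness|winogrande|5": {"acc": 0.7726913970007893},
}

def mmlu_subtasks(results: dict) -> dict:
    """Select only the MMLU (hendrycksTest) entries from a results dict."""
    return {k: v for k, v in results.items() if "hendrycksTest" in k}

mmlu = mmlu_subtasks(results)
avg_acc_norm = sum(v["acc_norm"] for v in mmlu.values()) / len(mmlu)
print(f"{len(mmlu)} MMLU subtasks, mean acc_norm = {avg_acc_norm:.4f}")
```

The full results file contains all 57 MMLU subtasks; the same selection works unchanged on the complete dict.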
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Entreprenerdly/finetunestablediffusion | ---
license: wtfpl
---
|
open-llm-leaderboard/details_abacusai__Smaug-72B-v0.1 | ---
pretty_name: Evaluation run of abacusai/Smaug-72B-v0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"latest\" split always points to the most recent results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_abacusai__Smaug-72B-v0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-04T04:59:32.876763](https://huggingface.co/datasets/open-llm-leaderboard/details_abacusai__Smaug-72B-v0.1/blob/main/results_2024-02-04T04-59-32.876763.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7716613011645818,\n\
\ \"acc_stderr\": 0.02801089457302993,\n \"acc_norm\": 0.7734062646949216,\n\
\ \"acc_norm_stderr\": 0.028568963791437117,\n \"mc1\": 0.6560587515299877,\n\
\ \"mc1_stderr\": 0.016629087514276785,\n \"mc2\": 0.7666613083747418,\n\
\ \"mc2_stderr\": 0.014124410528709273\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.735494880546075,\n \"acc_stderr\": 0.012889272949313371,\n\
\ \"acc_norm\": 0.7602389078498294,\n \"acc_norm_stderr\": 0.012476304127453944\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7199761003784106,\n\
\ \"acc_stderr\": 0.004480929450281562,\n \"acc_norm\": 0.8926508663612827,\n\
\ \"acc_norm_stderr\": 0.0030892396746331585\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7185185185185186,\n\
\ \"acc_stderr\": 0.038850042458002526,\n \"acc_norm\": 0.7185185185185186,\n\
\ \"acc_norm_stderr\": 0.038850042458002526\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.881578947368421,\n \"acc_stderr\": 0.026293995855474928,\n\
\ \"acc_norm\": 0.881578947368421,\n \"acc_norm_stderr\": 0.026293995855474928\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.82,\n\
\ \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.82,\n \
\ \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8452830188679246,\n \"acc_stderr\": 0.022257075558791282,\n\
\ \"acc_norm\": 0.8452830188679246,\n \"acc_norm_stderr\": 0.022257075558791282\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9305555555555556,\n\
\ \"acc_stderr\": 0.021257974822832048,\n \"acc_norm\": 0.9305555555555556,\n\
\ \"acc_norm_stderr\": 0.021257974822832048\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\"\
: 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.55,\n \"acc_stderr\": 0.049999999999999996,\n \
\ \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.049999999999999996\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7456647398843931,\n\
\ \"acc_stderr\": 0.0332055644308557,\n \"acc_norm\": 0.7456647398843931,\n\
\ \"acc_norm_stderr\": 0.0332055644308557\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5686274509803921,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.5686274509803921,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.81,\n \"acc_stderr\": 0.03942772444036622,\n \"acc_norm\": 0.81,\n\
\ \"acc_norm_stderr\": 0.03942772444036622\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7914893617021277,\n \"acc_stderr\": 0.026556982117838728,\n\
\ \"acc_norm\": 0.7914893617021277,\n \"acc_norm_stderr\": 0.026556982117838728\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6140350877192983,\n\
\ \"acc_stderr\": 0.04579639422070434,\n \"acc_norm\": 0.6140350877192983,\n\
\ \"acc_norm_stderr\": 0.04579639422070434\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.7724137931034483,\n \"acc_stderr\": 0.03493950380131184,\n\
\ \"acc_norm\": 0.7724137931034483,\n \"acc_norm_stderr\": 0.03493950380131184\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.6904761904761905,\n \"acc_stderr\": 0.023809523809523864,\n \"\
acc_norm\": 0.6904761904761905,\n \"acc_norm_stderr\": 0.023809523809523864\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5714285714285714,\n\
\ \"acc_stderr\": 0.04426266681379909,\n \"acc_norm\": 0.5714285714285714,\n\
\ \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8838709677419355,\n\
\ \"acc_stderr\": 0.018225757949432306,\n \"acc_norm\": 0.8838709677419355,\n\
\ \"acc_norm_stderr\": 0.018225757949432306\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6600985221674877,\n \"acc_stderr\": 0.033327690684107895,\n\
\ \"acc_norm\": 0.6600985221674877,\n \"acc_norm_stderr\": 0.033327690684107895\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\"\
: 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066584,\n\
\ \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066584\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9393939393939394,\n \"acc_stderr\": 0.016999994927421592,\n \"\
acc_norm\": 0.9393939393939394,\n \"acc_norm_stderr\": 0.016999994927421592\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9844559585492227,\n \"acc_stderr\": 0.008927492715084315,\n\
\ \"acc_norm\": 0.9844559585492227,\n \"acc_norm_stderr\": 0.008927492715084315\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8076923076923077,\n \"acc_stderr\": 0.019982347208637282,\n\
\ \"acc_norm\": 0.8076923076923077,\n \"acc_norm_stderr\": 0.019982347208637282\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.4703703703703704,\n \"acc_stderr\": 0.030431963547936584,\n \
\ \"acc_norm\": 0.4703703703703704,\n \"acc_norm_stderr\": 0.030431963547936584\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8445378151260504,\n \"acc_stderr\": 0.023536818625398904,\n\
\ \"acc_norm\": 0.8445378151260504,\n \"acc_norm_stderr\": 0.023536818625398904\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.5629139072847682,\n \"acc_stderr\": 0.040500357222306355,\n \"\
acc_norm\": 0.5629139072847682,\n \"acc_norm_stderr\": 0.040500357222306355\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9357798165137615,\n \"acc_stderr\": 0.010510494713201403,\n \"\
acc_norm\": 0.9357798165137615,\n \"acc_norm_stderr\": 0.010510494713201403\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6805555555555556,\n \"acc_stderr\": 0.03179876342176853,\n \"\
acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.03179876342176853\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9117647058823529,\n \"acc_stderr\": 0.019907399791316945,\n \"\
acc_norm\": 0.9117647058823529,\n \"acc_norm_stderr\": 0.019907399791316945\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9113924050632911,\n \"acc_stderr\": 0.018498315206865384,\n \
\ \"acc_norm\": 0.9113924050632911,\n \"acc_norm_stderr\": 0.018498315206865384\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7982062780269058,\n\
\ \"acc_stderr\": 0.02693611191280227,\n \"acc_norm\": 0.7982062780269058,\n\
\ \"acc_norm_stderr\": 0.02693611191280227\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8931297709923665,\n \"acc_stderr\": 0.027096548624883733,\n\
\ \"acc_norm\": 0.8931297709923665,\n \"acc_norm_stderr\": 0.027096548624883733\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8925619834710744,\n \"acc_stderr\": 0.028268812192540616,\n \"\
acc_norm\": 0.8925619834710744,\n \"acc_norm_stderr\": 0.028268812192540616\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8611111111111112,\n\
\ \"acc_stderr\": 0.033432700628696195,\n \"acc_norm\": 0.8611111111111112,\n\
\ \"acc_norm_stderr\": 0.033432700628696195\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8343558282208589,\n \"acc_stderr\": 0.029208296231259104,\n\
\ \"acc_norm\": 0.8343558282208589,\n \"acc_norm_stderr\": 0.029208296231259104\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6160714285714286,\n\
\ \"acc_stderr\": 0.04616143075028546,\n \"acc_norm\": 0.6160714285714286,\n\
\ \"acc_norm_stderr\": 0.04616143075028546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.0349260647662379,\n\
\ \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.0349260647662379\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n\
\ \"acc_stderr\": 0.015537514263253874,\n \"acc_norm\": 0.9401709401709402,\n\
\ \"acc_norm_stderr\": 0.015537514263253874\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977725,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977725\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9169859514687101,\n\
\ \"acc_stderr\": 0.009866287394639536,\n \"acc_norm\": 0.9169859514687101,\n\
\ \"acc_norm_stderr\": 0.009866287394639536\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8410404624277457,\n \"acc_stderr\": 0.019685307033571946,\n\
\ \"acc_norm\": 0.8410404624277457,\n \"acc_norm_stderr\": 0.019685307033571946\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6960893854748603,\n\
\ \"acc_stderr\": 0.01538284558758452,\n \"acc_norm\": 0.6960893854748603,\n\
\ \"acc_norm_stderr\": 0.01538284558758452\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8496732026143791,\n \"acc_stderr\": 0.02046417512433263,\n\
\ \"acc_norm\": 0.8496732026143791,\n \"acc_norm_stderr\": 0.02046417512433263\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.842443729903537,\n\
\ \"acc_stderr\": 0.020692237273583984,\n \"acc_norm\": 0.842443729903537,\n\
\ \"acc_norm_stderr\": 0.020692237273583984\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8641975308641975,\n \"acc_stderr\": 0.019061588181505405,\n\
\ \"acc_norm\": 0.8641975308641975,\n \"acc_norm_stderr\": 0.019061588181505405\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6560283687943262,\n \"acc_stderr\": 0.02833801742861133,\n \
\ \"acc_norm\": 0.6560283687943262,\n \"acc_norm_stderr\": 0.02833801742861133\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.6023468057366362,\n\
\ \"acc_stderr\": 0.012499840347460642,\n \"acc_norm\": 0.6023468057366362,\n\
\ \"acc_norm_stderr\": 0.012499840347460642\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8345588235294118,\n \"acc_stderr\": 0.02257177102549473,\n\
\ \"acc_norm\": 0.8345588235294118,\n \"acc_norm_stderr\": 0.02257177102549473\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.815359477124183,\n \"acc_stderr\": 0.015697029240757773,\n \
\ \"acc_norm\": 0.815359477124183,\n \"acc_norm_stderr\": 0.015697029240757773\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7454545454545455,\n\
\ \"acc_stderr\": 0.04172343038705383,\n \"acc_norm\": 0.7454545454545455,\n\
\ \"acc_norm_stderr\": 0.04172343038705383\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8163265306122449,\n \"acc_stderr\": 0.024789071332007646,\n\
\ \"acc_norm\": 0.8163265306122449,\n \"acc_norm_stderr\": 0.024789071332007646\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.900497512437811,\n\
\ \"acc_stderr\": 0.021166216304659397,\n \"acc_norm\": 0.900497512437811,\n\
\ \"acc_norm_stderr\": 0.021166216304659397\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.93,\n \"acc_stderr\": 0.0256432399976243,\n \
\ \"acc_norm\": 0.93,\n \"acc_norm_stderr\": 0.0256432399976243\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5783132530120482,\n\
\ \"acc_stderr\": 0.038444531817709175,\n \"acc_norm\": 0.5783132530120482,\n\
\ \"acc_norm_stderr\": 0.038444531817709175\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8713450292397661,\n \"acc_stderr\": 0.025679342723276894,\n\
\ \"acc_norm\": 0.8713450292397661,\n \"acc_norm_stderr\": 0.025679342723276894\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6560587515299877,\n\
\ \"mc1_stderr\": 0.016629087514276785,\n \"mc2\": 0.7666613083747418,\n\
\ \"mc2_stderr\": 0.014124410528709273\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.850828729281768,\n \"acc_stderr\": 0.010012598805627305\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7869598180439727,\n \
\ \"acc_stderr\": 0.01127844785690078\n }\n}\n```"
repo_url: https://huggingface.co/abacusai/Smaug-72B-v0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|arc:challenge|25_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|gsm8k|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hellaswag|10_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T04-59-32.876763.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-04T04-59-32.876763.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- '**/details_harness|winogrande|5_2024-02-04T04-59-32.876763.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-04T04-59-32.876763.parquet'
- config_name: results
data_files:
- split: 2024_02_04T04_59_32.876763
path:
- results_2024-02-04T04-59-32.876763.parquet
- split: latest
path:
- results_2024-02-04T04-59-32.876763.parquet
---
# Dataset Card for Evaluation run of abacusai/Smaug-72B-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_abacusai__Smaug-72B-v0.1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-04T04:59:32.876763](https://huggingface.co/datasets/open-llm-leaderboard/details_abacusai__Smaug-72B-v0.1/blob/main/results_2024-02-04T04-59-32.876763.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7716613011645818,
"acc_stderr": 0.02801089457302993,
"acc_norm": 0.7734062646949216,
"acc_norm_stderr": 0.028568963791437117,
"mc1": 0.6560587515299877,
"mc1_stderr": 0.016629087514276785,
"mc2": 0.7666613083747418,
"mc2_stderr": 0.014124410528709273
},
"harness|arc:challenge|25": {
"acc": 0.735494880546075,
"acc_stderr": 0.012889272949313371,
"acc_norm": 0.7602389078498294,
"acc_norm_stderr": 0.012476304127453944
},
"harness|hellaswag|10": {
"acc": 0.7199761003784106,
"acc_stderr": 0.004480929450281562,
"acc_norm": 0.8926508663612827,
"acc_norm_stderr": 0.0030892396746331585
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7185185185185186,
"acc_stderr": 0.038850042458002526,
"acc_norm": 0.7185185185185186,
"acc_norm_stderr": 0.038850042458002526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.881578947368421,
"acc_stderr": 0.026293995855474928,
"acc_norm": 0.881578947368421,
"acc_norm_stderr": 0.026293995855474928
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8452830188679246,
"acc_stderr": 0.022257075558791282,
"acc_norm": 0.8452830188679246,
"acc_norm_stderr": 0.022257075558791282
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9305555555555556,
"acc_stderr": 0.021257974822832048,
"acc_norm": 0.9305555555555556,
"acc_norm_stderr": 0.021257974822832048
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.55,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.55,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.0332055644308557,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.0332055644308557
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5686274509803921,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.5686274509803921,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036622,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036622
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7914893617021277,
"acc_stderr": 0.026556982117838728,
"acc_norm": 0.7914893617021277,
"acc_norm_stderr": 0.026556982117838728
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6140350877192983,
"acc_stderr": 0.04579639422070434,
"acc_norm": 0.6140350877192983,
"acc_norm_stderr": 0.04579639422070434
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7724137931034483,
"acc_stderr": 0.03493950380131184,
"acc_norm": 0.7724137931034483,
"acc_norm_stderr": 0.03493950380131184
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6904761904761905,
"acc_stderr": 0.023809523809523864,
"acc_norm": 0.6904761904761905,
"acc_norm_stderr": 0.023809523809523864
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8838709677419355,
"acc_stderr": 0.018225757949432306,
"acc_norm": 0.8838709677419355,
"acc_norm_stderr": 0.018225757949432306
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6600985221674877,
"acc_stderr": 0.033327690684107895,
"acc_norm": 0.6600985221674877,
"acc_norm_stderr": 0.033327690684107895
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9393939393939394,
"acc_stderr": 0.016999994927421592,
"acc_norm": 0.9393939393939394,
"acc_norm_stderr": 0.016999994927421592
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9844559585492227,
"acc_stderr": 0.008927492715084315,
"acc_norm": 0.9844559585492227,
"acc_norm_stderr": 0.008927492715084315
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8076923076923077,
"acc_stderr": 0.019982347208637282,
"acc_norm": 0.8076923076923077,
"acc_norm_stderr": 0.019982347208637282
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.4703703703703704,
"acc_stderr": 0.030431963547936584,
"acc_norm": 0.4703703703703704,
"acc_norm_stderr": 0.030431963547936584
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8445378151260504,
"acc_stderr": 0.023536818625398904,
"acc_norm": 0.8445378151260504,
"acc_norm_stderr": 0.023536818625398904
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5629139072847682,
"acc_stderr": 0.040500357222306355,
"acc_norm": 0.5629139072847682,
"acc_norm_stderr": 0.040500357222306355
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9357798165137615,
"acc_stderr": 0.010510494713201403,
"acc_norm": 0.9357798165137615,
"acc_norm_stderr": 0.010510494713201403
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.03179876342176853,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.03179876342176853
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9117647058823529,
"acc_stderr": 0.019907399791316945,
"acc_norm": 0.9117647058823529,
"acc_norm_stderr": 0.019907399791316945
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9113924050632911,
"acc_stderr": 0.018498315206865384,
"acc_norm": 0.9113924050632911,
"acc_norm_stderr": 0.018498315206865384
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7982062780269058,
"acc_stderr": 0.02693611191280227,
"acc_norm": 0.7982062780269058,
"acc_norm_stderr": 0.02693611191280227
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8931297709923665,
"acc_stderr": 0.027096548624883733,
"acc_norm": 0.8931297709923665,
"acc_norm_stderr": 0.027096548624883733
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8925619834710744,
"acc_stderr": 0.028268812192540616,
"acc_norm": 0.8925619834710744,
"acc_norm_stderr": 0.028268812192540616
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8611111111111112,
"acc_stderr": 0.033432700628696195,
"acc_norm": 0.8611111111111112,
"acc_norm_stderr": 0.033432700628696195
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8343558282208589,
"acc_stderr": 0.029208296231259104,
"acc_norm": 0.8343558282208589,
"acc_norm_stderr": 0.029208296231259104
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6160714285714286,
"acc_stderr": 0.04616143075028546,
"acc_norm": 0.6160714285714286,
"acc_norm_stderr": 0.04616143075028546
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.0349260647662379,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.0349260647662379
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.015537514263253874,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.015537514263253874
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.86,
"acc_stderr": 0.034873508801977725,
"acc_norm": 0.86,
"acc_norm_stderr": 0.034873508801977725
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9169859514687101,
"acc_stderr": 0.009866287394639536,
"acc_norm": 0.9169859514687101,
"acc_norm_stderr": 0.009866287394639536
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8410404624277457,
"acc_stderr": 0.019685307033571946,
"acc_norm": 0.8410404624277457,
"acc_norm_stderr": 0.019685307033571946
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6960893854748603,
"acc_stderr": 0.01538284558758452,
"acc_norm": 0.6960893854748603,
"acc_norm_stderr": 0.01538284558758452
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8496732026143791,
"acc_stderr": 0.02046417512433263,
"acc_norm": 0.8496732026143791,
"acc_norm_stderr": 0.02046417512433263
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.842443729903537,
"acc_stderr": 0.020692237273583984,
"acc_norm": 0.842443729903537,
"acc_norm_stderr": 0.020692237273583984
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8641975308641975,
"acc_stderr": 0.019061588181505405,
"acc_norm": 0.8641975308641975,
"acc_norm_stderr": 0.019061588181505405
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6560283687943262,
"acc_stderr": 0.02833801742861133,
"acc_norm": 0.6560283687943262,
"acc_norm_stderr": 0.02833801742861133
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.6023468057366362,
"acc_stderr": 0.012499840347460642,
"acc_norm": 0.6023468057366362,
"acc_norm_stderr": 0.012499840347460642
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8345588235294118,
"acc_stderr": 0.02257177102549473,
"acc_norm": 0.8345588235294118,
"acc_norm_stderr": 0.02257177102549473
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.815359477124183,
"acc_stderr": 0.015697029240757773,
"acc_norm": 0.815359477124183,
"acc_norm_stderr": 0.015697029240757773
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.04172343038705383,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.04172343038705383
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8163265306122449,
"acc_stderr": 0.024789071332007646,
"acc_norm": 0.8163265306122449,
"acc_norm_stderr": 0.024789071332007646
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.900497512437811,
"acc_stderr": 0.021166216304659397,
"acc_norm": 0.900497512437811,
"acc_norm_stderr": 0.021166216304659397
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.93,
"acc_stderr": 0.0256432399976243,
"acc_norm": 0.93,
"acc_norm_stderr": 0.0256432399976243
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5783132530120482,
"acc_stderr": 0.038444531817709175,
"acc_norm": 0.5783132530120482,
"acc_norm_stderr": 0.038444531817709175
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8713450292397661,
"acc_stderr": 0.025679342723276894,
"acc_norm": 0.8713450292397661,
"acc_norm_stderr": 0.025679342723276894
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6560587515299877,
"mc1_stderr": 0.016629087514276785,
"mc2": 0.7666613083747418,
"mc2_stderr": 0.014124410528709273
},
"harness|winogrande|5": {
"acc": 0.850828729281768,
"acc_stderr": 0.010012598805627305
},
"harness|gsm8k|5": {
"acc": 0.7869598180439727,
"acc_stderr": 0.01127844785690078
}
}
```
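The per-task scores above can be post-processed once the results JSON is loaded. As a minimal sketch (the `results` dict below is a stand-in for the parsed JSON, with only a few tasks shown; key names follow the harness output in this card), here is one way to compute the mean accuracy over the MMLU (`hendrycksTest`) subtasks:

```python
import statistics

def mmlu_average(results: dict) -> float:
    """Mean `acc` over all hendrycksTest (MMLU) subtasks in a results dict."""
    accs = [
        scores["acc"]
        for task, scores in results.items()
        if task.startswith("harness|hendrycksTest-")
    ]
    return statistics.mean(accs)

# Stand-in for the parsed results JSON shown above (truncated to three tasks).
results = {
    "harness|hendrycksTest-anatomy|5": {"acc": 0.7185185185185186},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.881578947368421},
    "harness|winogrande|5": {"acc": 0.850828729281768},  # skipped: not MMLU
}
print(round(mmlu_average(results), 4))
```

Note that the leaderboard's own aggregation may weight or group tasks differently; this is only a convenience for inspecting a downloaded results file.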
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
macadeliccc/distilabel-neurology-preferences-2k-cleaner | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
sequence: string
- name: generation_prompt
list:
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: rating
sequence: float64
- name: rationale
sequence: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 46615638
num_examples: 1994
download_size: 16327496
dataset_size: 46615638
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "distilabel-neurology-preferences-2k-cleaner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
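Given the schema in the front matter (`generations` and `rating` are parallel sequences, with `chosen`/`rejected` columns alongside them), a sketch of how such preference pairs can be derived is shown below. The selection rule (highest rating chosen, lowest rejected) is an assumption mirroring common DPO preprocessing, not necessarily the exact distilabel pipeline used here:

```python
# Hypothetical reconstruction of (chosen, rejected) pairs from parallel
# `generations` and `rating` sequences. Taking the highest-rated generation
# as chosen and the lowest as rejected is an assumed convention.
def to_preference_pair(row: dict) -> dict:
    ratings = row["rating"]
    gens = row["generations"]
    best = max(range(len(ratings)), key=ratings.__getitem__)
    worst = min(range(len(ratings)), key=ratings.__getitem__)
    return {"prompt": row["input"], "chosen": gens[best], "rejected": gens[worst]}

# Illustrative row; field names match the dataset schema, values are made up.
row = {
    "input": "What does the cerebellum do?",
    "generations": ["It coordinates movement.", "It stores memories."],
    "rating": [9.0, 4.0],
}
pair = to_preference_pair(row)
print(pair["chosen"], "|", pair["rejected"])
```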
aisc-team-c1/spanish-mmedbench-finetuning | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1082403
num_examples: 2657
download_size: 587637
dataset_size: 1082403
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
arias048/myPictures | ---
license: other
---
|
CyberHarem/oonuma_kurumi_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of oonuma_kurumi/大沼くるみ (THE iDOLM@STER: Cinderella Girls)
This is the dataset of oonuma_kurumi/大沼くるみ (THE iDOLM@STER: Cinderella Girls), containing 80 images and their tags.
The core tags of this character are `long_hair, breasts, brown_eyes, large_breasts, black_hair, bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 80 | 72.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 80 | 51.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 174 | 103.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 80 | 68.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 174 | 133.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oonuma_kurumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/oonuma_kurumi_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, blush, long_sleeves, open_mouth, tears, wavy_mouth, bangs, brown_skirt, solo, white_shirt, collared_shirt, looking_at_viewer, pink_bow, plaid_skirt, very_long_hair, blue_hair, center_frills, crying, hands_up, simple_background |
| 1 | 12 |  |  |  |  |  | 1girl, open_mouth, solo, blush, smile, cleavage, tears, hair_bow, microphone, ponytail, wavy_mouth, dress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | long_sleeves | open_mouth | tears | wavy_mouth | bangs | brown_skirt | solo | white_shirt | collared_shirt | looking_at_viewer | pink_bow | plaid_skirt | very_long_hair | blue_hair | center_frills | crying | hands_up | simple_background | smile | cleavage | hair_bow | microphone | ponytail | dress |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:---------------|:-------------|:--------|:-------------|:--------|:--------------|:-------|:--------------|:-----------------|:--------------------|:-----------|:--------------|:-----------------|:------------|:----------------|:---------|:-----------|:--------------------|:--------|:-----------|:-----------|:-------------|:-----------|:--------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | | X | X | X | | | X | | | | | | | | | | | | X | X | X | X | X | X |
|
RedBaron/Naturetreasures | ---
license: artistic-2.0
---
|
vsarathy/DIARC-embodied-nlu-styled-4k | ---
license: mit
language:
- en
pretty_name: 'DIARC-embodied-nlu-styled-4k '
---
# DIARC-LLM-Parser-Embodied-NLU-Styled-4K
This dataset contains approximately 4k utterances together with their semantic parses, as interpretable by the DIARC cognitive robotic architecture.
The parses are meant to capture the speech-theoretic aspects of natural language, extracting the intent, referents, and descriptors in each utterance.
This dataset is one in a set of datasets. For this particular one, we programmatically built 127 utterances and their semantics, all groundable in a robotic architecture (DIARC).
These 127 utterances were then expanded into ~4k style variations across four dimensions:
1. Directness/Indirectness
2. Formality
3. Familiarity (whether it was uttered by a native speaker or a second-language speaker)
4. Word choice |
remyxai/ffmperative_refined | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3372951
num_examples: 5565
download_size: 1007476
dataset_size: 3372951
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ffmperative_refined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mikhail-panzo/processed_dutch_ex_dataset | ---
dataset_info:
features:
- name: speaker_embeddings
sequence: float32
- name: input_ids
sequence: int32
- name: labels
sequence:
sequence: float32
splits:
- name: train
num_bytes: 336452268
num_examples: 2688
download_size: 335469721
dataset_size: 336452268
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_Gille__StrangeMerges_5-7B-ties | ---
pretty_name: Evaluation run of Gille/StrangeMerges_5-7B-ties
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Gille/StrangeMerges_5-7B-ties](https://huggingface.co/Gille/StrangeMerges_5-7B-ties)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Gille__StrangeMerges_5-7B-ties\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-02T02:44:27.733282](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_5-7B-ties/blob/main/results_2024-02-02T02-44-27.733282.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6545139389997998,\n\
\ \"acc_stderr\": 0.032091694452076346,\n \"acc_norm\": 0.6541682196117141,\n\
\ \"acc_norm_stderr\": 0.03275968368339009,\n \"mc1\": 0.5165238678090576,\n\
\ \"mc1_stderr\": 0.01749394019005772,\n \"mc2\": 0.6637291950615067,\n\
\ \"mc2_stderr\": 0.015304299142803788\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.689419795221843,\n \"acc_stderr\": 0.013522292098053059,\n\
\ \"acc_norm\": 0.7167235494880546,\n \"acc_norm_stderr\": 0.013167478735134575\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7105158334993029,\n\
\ \"acc_stderr\": 0.0045259609655517044,\n \"acc_norm\": 0.8788090021907986,\n\
\ \"acc_norm_stderr\": 0.003256821418857317\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n\
\ \"acc_stderr\": 0.041153246103369526,\n \"acc_norm\": 0.6518518518518519,\n\
\ \"acc_norm_stderr\": 0.041153246103369526\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\
\ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n\
\ \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \
\ \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7320754716981132,\n \"acc_stderr\": 0.027257260322494845,\n\
\ \"acc_norm\": 0.7320754716981132,\n \"acc_norm_stderr\": 0.027257260322494845\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n\
\ \"acc_stderr\": 0.035676037996391706,\n \"acc_norm\": 0.6763005780346821,\n\
\ \"acc_norm_stderr\": 0.035676037996391706\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768077,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768077\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.03202563076101735,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.03202563076101735\n },\n\
\ \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"\
acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41534391534391535,\n \"acc_stderr\": 0.025379524910778408,\n \"\
acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.025379524910778408\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.48412698412698413,\n\
\ \"acc_stderr\": 0.04469881854072606,\n \"acc_norm\": 0.48412698412698413,\n\
\ \"acc_norm_stderr\": 0.04469881854072606\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356853,\n \"\
acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356853\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"\
acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7828282828282829,\n \"acc_stderr\": 0.02937661648494563,\n \"\
acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.02937661648494563\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9119170984455959,\n \"acc_stderr\": 0.02045374660160103,\n\
\ \"acc_norm\": 0.9119170984455959,\n \"acc_norm_stderr\": 0.02045374660160103\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657266,\n\
\ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657266\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028742040903948485,\n \
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028742040903948485\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \
\ \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"\
acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.0251956584289318,\n \"acc_norm\"\
: 0.8480392156862745,\n \"acc_norm_stderr\": 0.0251956584289318\n },\n\
\ \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\":\
\ 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601443,\n \"\
acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601443\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n\
\ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n\
\ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.03498149385462472,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.03498149385462472\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.03957835471980979,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.03957835471980979\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8339719029374202,\n\
\ \"acc_stderr\": 0.013306478243066298,\n \"acc_norm\": 0.8339719029374202,\n\
\ \"acc_norm_stderr\": 0.013306478243066298\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7427745664739884,\n \"acc_stderr\": 0.023532925431044287,\n\
\ \"acc_norm\": 0.7427745664739884,\n \"acc_norm_stderr\": 0.023532925431044287\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4346368715083799,\n\
\ \"acc_stderr\": 0.01657899743549672,\n \"acc_norm\": 0.4346368715083799,\n\
\ \"acc_norm_stderr\": 0.01657899743549672\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.025829163272757482,\n\
\ \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.025829163272757482\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\
\ \"acc_stderr\": 0.025670259242188933,\n \"acc_norm\": 0.7138263665594855,\n\
\ \"acc_norm_stderr\": 0.025670259242188933\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.02447722285613511,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.02447722285613511\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46870925684485004,\n\
\ \"acc_stderr\": 0.012745204626083136,\n \"acc_norm\": 0.46870925684485004,\n\
\ \"acc_norm_stderr\": 0.012745204626083136\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6654411764705882,\n \"acc_stderr\": 0.028661996202335303,\n\
\ \"acc_norm\": 0.6654411764705882,\n \"acc_norm_stderr\": 0.028661996202335303\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069443,\n \
\ \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069443\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\
\ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n\
\ \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n\
\ \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5165238678090576,\n\
\ \"mc1_stderr\": 0.01749394019005772,\n \"mc2\": 0.6637291950615067,\n\
\ \"mc2_stderr\": 0.015304299142803788\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8366219415943172,\n \"acc_stderr\": 0.010390695970273766\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6884003032600455,\n \
\ \"acc_stderr\": 0.012757375376754938\n }\n}\n```"
repo_url: https://huggingface.co/Gille/StrangeMerges_5-7B-ties
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|arc:challenge|25_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|gsm8k|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hellaswag|10_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T02-44-27.733282.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T02-44-27.733282.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- '**/details_harness|winogrande|5_2024-02-02T02-44-27.733282.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-02T02-44-27.733282.parquet'
- config_name: results
data_files:
- split: 2024_02_02T02_44_27.733282
path:
- results_2024-02-02T02-44-27.733282.parquet
- split: latest
path:
- results_2024-02-02T02-44-27.733282.parquet
---
# Dataset Card for Evaluation run of Gille/StrangeMerges_5-7B-ties
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Gille/StrangeMerges_5-7B-ties](https://huggingface.co/Gille/StrangeMerges_5-7B-ties) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Gille__StrangeMerges_5-7B-ties",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-02T02:44:27.733282](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_5-7B-ties/blob/main/results_2024-02-02T02-44-27.733282.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6545139389997998,
"acc_stderr": 0.032091694452076346,
"acc_norm": 0.6541682196117141,
"acc_norm_stderr": 0.03275968368339009,
"mc1": 0.5165238678090576,
"mc1_stderr": 0.01749394019005772,
"mc2": 0.6637291950615067,
"mc2_stderr": 0.015304299142803788
},
"harness|arc:challenge|25": {
"acc": 0.689419795221843,
"acc_stderr": 0.013522292098053059,
"acc_norm": 0.7167235494880546,
"acc_norm_stderr": 0.013167478735134575
},
"harness|hellaswag|10": {
"acc": 0.7105158334993029,
"acc_stderr": 0.0045259609655517044,
"acc_norm": 0.8788090021907986,
"acc_norm_stderr": 0.003256821418857317
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7320754716981132,
"acc_stderr": 0.027257260322494845,
"acc_norm": 0.7320754716981132,
"acc_norm_stderr": 0.027257260322494845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.035676037996391706,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.035676037996391706
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768077,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768077
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6,
"acc_stderr": 0.03202563076101735,
"acc_norm": 0.6,
"acc_norm_stderr": 0.03202563076101735
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.025379524910778408,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.025379524910778408
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.04469881854072606,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.04469881854072606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.02937661648494563,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.02937661648494563
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9119170984455959,
"acc_stderr": 0.02045374660160103,
"acc_norm": 0.9119170984455959,
"acc_norm_stderr": 0.02045374660160103
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657266,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657266
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.028742040903948485,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.028742040903948485
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.0251956584289318,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.0251956584289318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601443,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601443
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.03498149385462472,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.03498149385462472
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.03957835471980979,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.03957835471980979
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8339719029374202,
"acc_stderr": 0.013306478243066298,
"acc_norm": 0.8339719029374202,
"acc_norm_stderr": 0.013306478243066298
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7427745664739884,
"acc_stderr": 0.023532925431044287,
"acc_norm": 0.7427745664739884,
"acc_norm_stderr": 0.023532925431044287
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4346368715083799,
"acc_stderr": 0.01657899743549672,
"acc_norm": 0.4346368715083799,
"acc_norm_stderr": 0.01657899743549672
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.025829163272757482,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.025829163272757482
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.025670259242188933,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.025670259242188933
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.02447722285613511,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.02447722285613511
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.49645390070921985,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.49645390070921985,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46870925684485004,
"acc_stderr": 0.012745204626083136,
"acc_norm": 0.46870925684485004,
"acc_norm_stderr": 0.012745204626083136
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6654411764705882,
"acc_stderr": 0.028661996202335303,
"acc_norm": 0.6654411764705882,
"acc_norm_stderr": 0.028661996202335303
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6797385620915033,
"acc_stderr": 0.018875682938069443,
"acc_norm": 0.6797385620915033,
"acc_norm_stderr": 0.018875682938069443
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5165238678090576,
"mc1_stderr": 0.01749394019005772,
"mc2": 0.6637291950615067,
"mc2_stderr": 0.015304299142803788
},
"harness|winogrande|5": {
"acc": 0.8366219415943172,
"acc_stderr": 0.010390695970273766
},
"harness|gsm8k|5": {
"acc": 0.6884003032600455,
"acc_stderr": 0.012757375376754938
}
}
```
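The aggregate `"all"` accuracy above is the unweighted macro-average of the per-task `acc` values. A minimal sketch of that computation, using made-up scores rather than the real results above:

```python
# Macro-average per-task accuracies into a single aggregate score,
# mirroring how the "all" entry summarizes the individual tasks.
# The task names and scores below are illustrative, not the real results.
results = {
    "harness|hendrycksTest-anatomy|5": {"acc": 0.65},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.68},
    "harness|hendrycksTest-virology|5": {"acc": 0.54},
}

# Unweighted mean over tasks (each task counts equally, regardless of size).
macro_avg = sum(task["acc"] for task in results.values()) / len(results)
print(round(macro_avg, 4))
```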
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CV3D/ArtiFact_CLIP_Features | ---
license: unknown
dataset_info:
features:
- name: filename
dtype: string
- name: image_path
dtype: string
- name: target
dtype: int64
- name: category
dtype: string
- name: group
dtype: string
- name: features
sequence: float32
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4312623896
num_examples: 1997390
- name: test
num_bytes: 1078157825
num_examples: 499348
download_size: 6758394242
dataset_size: 5390781721
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
gsh3729/sw_d | ---
dataset_info:
features:
- name: filename
dtype: string
- name: tif
dtype: binary
- name: tfw
dtype: binary
splits:
- name: train
num_bytes: 817283499
num_examples: 60000
- name: val
num_bytes: 273238365
num_examples: 20000
download_size: 1081420918
dataset_size: 1090521864
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
---
|
SEACrowd/indolem_ud_id_pud | ---
license: cc-by-4.0
tags:
- dependency-parsing
language:
- ind
---
# indolem_ud_id_pud
One of 8 sub-datasets of IndoLEM, a comprehensive benchmark encompassing 7 NLP tasks (Koto et al., 2020).
This dataset is part of the [Parallel Universal Dependencies (PUD)](http://universaldependencies.org/conll17/) project.
It is based on the first corrected version by Alfina et al. (2019) and contains 1,000 sentences.
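UD treebanks like PUD distribute dependency annotations in the CoNLL-U format, where each token line carries ten tab-separated columns including the head index and dependency relation. A minimal sketch of pulling those fields out of one line (the example line is a made-up illustration, not taken from the corpus):

```python
# Parse one CoNLL-U token line into (id, form, head, deprel).
# CoNLL-U uses 10 tab-separated columns; ID is column 1, FORM column 2,
# HEAD column 7, and DEPREL column 8.
def parse_token_line(line: str) -> tuple[int, str, int, str]:
    cols = line.rstrip("\n").split("\t")
    return int(cols[0]), cols[1], int(cols[6]), cols[7]

# Illustrative token line (not from the treebank itself).
line = "2\tmakan\tmakan\tVERB\t_\t_\t0\troot\t_\t_"
tok_id, form, head, deprel = parse_token_line(line)
print(tok_id, form, head, deprel)  # 2 makan 0 root
```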
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@conference{2f8c7438a7f44f6b85b773586cff54e8,
title = "A gold standard dependency treebank for Indonesian",
author = "Ika Alfina and Arawinda Dinakaramani and Fanany, {Mohamad Ivan} and Heru Suhartanto",
  note = "Publisher Copyright: {\textcopyright} 2019 Proceedings of the 33rd Pacific Asia Conference on Language, Information and Computation, PACLIC 2019. All rights reserved.; 33rd Pacific Asia Conference on Language, Information and Computation, PACLIC 2019 ; Conference date: 13-09-2019 Through 15-09-2019",
year = "2019",
month = jan,
day = "1",
language = "English",
pages = "1--9",
}
@article{DBLP:journals/corr/abs-2011-00677,
author = {Fajri Koto and
Afshin Rahimi and
Jey Han Lau and
Timothy Baldwin},
title = {IndoLEM and IndoBERT: {A} Benchmark Dataset and Pre-trained Language
Model for Indonesian {NLP}},
journal = {CoRR},
volume = {abs/2011.00677},
year = {2020},
url = {https://arxiv.org/abs/2011.00677},
eprinttype = {arXiv},
eprint = {2011.00677},
timestamp = {Fri, 06 Nov 2020 15:32:47 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2011-00677.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## License
Creative Commons Attribution 4.0
## Homepage
[https://indolem.github.io/](https://indolem.github.io/)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
louisbrulenaudet/code-travail | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code du travail
source_datasets:
- original
pretty_name: Code du travail
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code du travail, non-instruct (2024-04-15)
This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for legal practice.
Fine-tuning is the process of adapting a pre-trained model to perform specific tasks or cater to particular domains. It involves adjusting the model's parameters through a further round of training on task-specific or domain-specific data. While conventional fine-tuning strategies involve supervised learning with labeled data, instruction-based fine-tuning introduces a more structured and interpretable approach.
Instruction-based fine-tuning leverages the power of human-provided instructions to guide the model's behavior. These instructions can be in the form of text prompts, prompts with explicit task descriptions, or a combination of both. This approach allows for a more controlled and context-aware interaction with the LLM, making it adaptable to a multitude of specialized tasks.
Instruction-based fine-tuning significantly enhances the performance of LLMs in the following ways:
- Task-Specific Adaptation: LLMs, when fine-tuned with specific instructions, exhibit remarkable adaptability to diverse tasks. They can switch seamlessly between translation, summarization, and question-answering, guided by the provided instructions.
- Reduced Ambiguity: Traditional LLMs might generate ambiguous or contextually inappropriate responses. Instruction-based fine-tuning allows for a clearer and more context-aware generation, reducing the likelihood of nonsensical outputs.
- Efficient Knowledge Transfer: Instructions can encapsulate domain-specific knowledge, enabling LLMs to benefit from expert guidance. This knowledge transfer is particularly valuable in fields like tax practice, law, medicine, and more.
- Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
- Adaptive Behavior: LLMs, post instruction-based fine-tuning, exhibit adaptive behavior that is responsive to both explicit task descriptions and implicit cues within the provided text.
## Concurrent reading of the LegalKit
To use all the legal data published on LegalKit, you can use this code snippet:
```python
# -*- coding: utf-8 -*-
import concurrent.futures
import logging
import os
import datasets
from tqdm.notebook import tqdm
def dataset_loader(
name:str,
streaming:bool=True
) -> datasets.Dataset:
"""
Helper function to load a single dataset in parallel.
Parameters
----------
name : str
Name of the dataset to be loaded.
streaming : bool, optional
Determines if datasets are streamed. Default is True.
Returns
-------
dataset : datasets.Dataset
Loaded dataset object.
Raises
------
Exception
If an error occurs during dataset loading.
"""
try:
return datasets.load_dataset(
name,
split="train",
streaming=streaming
)
except Exception as exc:
logging.error(f"Error loading dataset {name}: {exc}")
return None
def load_datasets(
req:list,
streaming:bool=True
) -> list:
"""
Downloads datasets specified in a list and creates a list of loaded datasets.
Parameters
----------
req : list
A list containing the names of datasets to be downloaded.
streaming : bool, optional
Determines if datasets are streamed. Default is True.
Returns
-------
datasets_list : list
A list containing loaded datasets as per the requested names provided in 'req'.
Raises
------
Exception
If an error occurs during dataset loading or processing.
Examples
--------
>>> datasets = load_datasets(["dataset1", "dataset2"], streaming=False)
"""
datasets_list = []
with concurrent.futures.ThreadPoolExecutor() as executor:
future_to_dataset = {executor.submit(dataset_loader, name): name for name in req}
for future in tqdm(concurrent.futures.as_completed(future_to_dataset), total=len(req)):
name = future_to_dataset[future]
try:
dataset = future.result()
if dataset:
datasets_list.append(dataset)
except Exception as exc:
logging.error(f"Error processing dataset {name}: {exc}")
return datasets_list
req = [
"louisbrulenaudet/code-artisanat",
"louisbrulenaudet/code-action-sociale-familles",
# ...
]
datasets_list = load_datasets(
req=req,
streaming=True
)
dataset = datasets.concatenate_datasets(
datasets_list
)
```
## Dataset generation
This JSON file is a list of dictionaries, each dictionary contains the following fields:
- `instruction`: `string`, presenting the instruction linked to the element.
- `input`: `string`, signifying the input details for the element.
- `output`: `string`, indicating the output information for the element.
- `start`: `string`, the date of entry into force of the article.
- `expiration`: `string`, the date of expiration of the article.
- `num`: `string`, the id of the article.
We used the following list of instructions for generating the dataset:
```python
instructions = [
"Compose l'intégralité de l'article sous forme écrite.",
"Écris la totalité du contenu de l'article.",
"Formule la totalité du texte présent dans l'article.",
"Produis l'intégralité de l'article en écriture.",
"Développe l'article dans son ensemble par écrit.",
"Génère l'ensemble du texte contenu dans l'article.",
"Formule le contenu intégral de l'article en entier.",
"Rédige la totalité du texte de l'article en entier.",
"Compose l'intégralité du contenu textuel de l'article.",
"Rédige l'ensemble du texte qui constitue l'article.",
"Formule l'article entier dans son contenu écrit.",
"Composez l'intégralité de l'article sous forme écrite.",
"Écrivez la totalité du contenu de l'article.",
"Formulez la totalité du texte présent dans l'article.",
"Développez l'article dans son ensemble par écrit.",
"Générez l'ensemble du texte contenu dans l'article.",
"Formulez le contenu intégral de l'article en entier.",
"Rédigez la totalité du texte de l'article en entier.",
"Composez l'intégralité du contenu textuel de l'article.",
"Écrivez l'article dans son intégralité en termes de texte.",
"Rédigez l'ensemble du texte qui constitue l'article.",
"Formulez l'article entier dans son contenu écrit.",
"Composer l'intégralité de l'article sous forme écrite.",
"Écrire la totalité du contenu de l'article.",
"Formuler la totalité du texte présent dans l'article.",
"Produire l'intégralité de l'article en écriture.",
"Développer l'article dans son ensemble par écrit.",
"Générer l'ensemble du texte contenu dans l'article.",
"Formuler le contenu intégral de l'article en entier.",
"Rédiger la totalité du texte de l'article en entier.",
"Composer l'intégralité du contenu textuel de l'article.",
"Rédiger l'ensemble du texte qui constitue l'article.",
"Formuler l'article entier dans son contenu écrit.",
"Quelles sont les dispositions de l'article ?",
"Quelles dispositions sont incluses dans l'article ?",
"Quelles sont les dispositions énoncées dans l'article ?",
"Quel est le texte intégral de l'article ?",
"Quelle est la lettre de l'article ?"
]
```
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
Trelis/function_calling_v3_SAMPLE | ---
task_categories:
- question-answering
- conversational
- text-generation
language:
- en
tags:
- function call
- function calling
- function-calling
size_categories:
- n<1K
---
# Trelis Function Calling Dataset - VERSION 3 - SAMPLE
> This is a SAMPLE of the v3 dataset available for purchase [here](https://huggingface.co/datasets/Trelis/function_calling_v3/edit/main/README.md).
Features:
- Allows models to be fine-tuned for function-calling.
- The dataset is human generated and does not make use of Llama 2 or OpenAI!
- The dataset includes 66 training rows, 19 validation rows and 5 test rows (for manual evaluation).
- Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat
Alternatively, you can find pre-trained function calling models on [Trelis Mart](https://mart.trelis.com)
## Updates since v2
- Cross-compatible function format: The format now matches OpenAI's function format, making it easy to migrate from using OpenAI APIs to any models fine-tuned with this dataset.
- Chain function calling: Ability (particularly with larger models) to first make a call to one function in order to get data for a second function call.
- Supported by inferencing scripts, read more below.
--Change-log--
04Dec2023 - Official release of function_calling_v3
02Dec2023 - Pre-release of function_calling_v3
## Inference Scripts
Out-of-the-box inference scripts are available for purchase:
- Purchase only the function calling inference scripts, [HERE](https://buy.stripe.com/28o00M9K50zp4ow4hf)
- Purchase as part of the full ADVANCED-inference repo, [HERE](https://trelis.com/enterprise-server-api-and-inference-guide/).
## Fine-Tuning Notes and Scripts
The objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends **strongly** on how the attention mask and loss mask are set. For further details see the [Youtube Video Here](https://youtu.be/OQdp-OeG1as).
The fine-tuning script is available for purchase alone [here](https://buy.stripe.com/fZe14Qe0l81R9IQaFy), or is included in the ADVANCED-fine-tuning repository available for purchase on [Trelis.com](https://trelis.com).
### QLoRa Training Notebook for Llama 2 (FREE)
- Access a basic Google Colab script for fine-tuning [here](https://colab.research.google.com/drive/1uMSS1o_8YOPyG1X_4k6ENEE3kJfBGGhH?usp=sharing).
## Licensing
The Function Calling Extended dataset is suitable for commercial use.
Further terms:
- Licenses are not transferable to other users/entities.
- The dataset may not be re-published in its current or derivative form.
- The dataset may be used to train and fine-tune commercial language models.
### Attribution of data sources
This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.
## Prompt Format (example below is for openchat)
```
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "GPT4 Correct User: ", "<|end_of_turn|>GPT4 Correct Assistant:" #OpenChat style
# B_INST, E_INST = "[INST] ", " [/INST]" #Llama 2 style
functionList = data['test'][index]['functionList']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']
prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST}{user_prompt.strip()}{E_INST}\n\n"
```
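For clarity, here is a self-contained sketch of the assembly above, using a stub function list and user prompt in place of a row from the dataset (the helper name `build_prompt` is illustrative, not part of the dataset):

```python
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "GPT4 Correct User: ", "<|end_of_turn|>GPT4 Correct Assistant:"  # OpenChat style

def build_prompt(function_list: str, user_prompt: str) -> str:
    """Assemble an OpenChat-style function-calling prompt from its parts."""
    return f"{B_FUNC}{function_list.strip()}{E_FUNC}{B_INST}{user_prompt.strip()}{E_INST}\n\n"

# Stub inputs standing in for one row of the dataset
function_list = '[{"type": "function", "function": {"name": "get_stock_price"}}]'
prompt = build_prompt(function_list, "Get the price of Apple's stock")
print(prompt)
```

The resulting string matches the shape of the sample prompt shown below: function descriptions first, then the user turn, ending with the assistant tag so the model's next tokens are the function-call JSON.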
## Sample Prompt and Response:
```
You have access to the following functions. Use them if required:
[
{
"type": "function",
"function": {
"name": "get_stock_price",
"description": "Get the stock price of an array of stocks",
"parameters": {
"type": "object",
"properties": {
"names": {
"type": "array",
"items": {
"type": "string"
},
"description": "An array of stocks"
}
},
"required": [
"names"
]
}
}
},
{
"type": "function",
"function": {
"name": "get_big_stocks",
"description": "Get the names of the largest N stocks by market cap",
"parameters": {
"type": "object",
"properties": {
"number": {
"type": "integer",
"description": "The number of largest stocks to get the names of, e.g. 25"
},
"region": {
"type": "string",
"description": "The region to consider, can be \"US\" or \"World\"."
}
},
"required": [
"number"
]
}
}
}
]GPT4 Correct User: Get the price of Apple's stock<|end_of_turn|>GPT4 Correct Assistant:{
"name": "get_stock_price",
"arguments": {
"names": [
"Apple"
]
}
}<|end_of_turn|>
```
## CSV File Structure
The generated CSV file has the following columns:
- `functionList`: Descriptions of two functions (the current function and a randomly selected other function).
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.
### JSON File Structure
Function metadata format follows the OpenAI standard.
Each function file should be a JSON file with the following structure:
```json
{
"type": "function",
"function": {
"name": "function_name",
"description": "function description",
"parameters": {
"type": "object",
"properties": {
"property_1": {
"type": "property_type", // e.g. string
"description": "property description"
},
"property_2": {
"type": "property_type", // e.g. string
"description": "property description"
}
},
"required": ["property_1","property_2"]
}
},
"samplePromptResponsePairs": [
{
"prompt": "sample_prompt",
"response": {
"name": "generate_password",
"arguments": {
"property_1": "property_value",
"property_2": "property_value"
}
}
},
...
]
}
```
The `function` object describes the function. The `samplePromptResponsePairs` array contains sample prompts and responses for the function.
### Testing JSON Structure
A script named `validate.py` can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure.
To use the script, call it from the command line with the name of the function file as an argument:
```
python validate.py my_function.json
```
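The actual `validate.py` is not reproduced here, but a minimal sketch of this kind of structural check might look as follows (the function name and error messages are illustrative, not the script's real output):

```python
def validate_function_spec(data: dict) -> list:
    """Return a list of error messages for a parsed function JSON object;
    an empty list means the structure looks valid."""
    errors = []
    if data.get("type") != "function":
        errors.append('top-level "type" must be "function"')
    fn = data.get("function")
    if not isinstance(fn, dict):
        errors.append('missing "function" object')
        return errors
    for key in ("name", "description"):
        if not isinstance(fn.get(key), str):
            errors.append(f'"function.{key}" must be a string')
    params = fn.get("parameters")
    if not isinstance(params, dict):
        errors.append('"function.parameters" must be an object')
    else:
        for key in ("type", "properties", "required"):
            if key not in params:
                errors.append(f'"function.parameters" missing "{key}"')
    return errors

good = {
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the stock price of an array of stocks",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}
print(validate_function_spec(good))  # []
```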
## Repo Structure (for prompt dataset generation)
- `functions/`: This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.
- `generate_dataset.py`: This Python script generates the base training and testing dataset CSV files. The first example in each function json file is used in the validation dataset and the rest are used for the train dataset.
- `addBlank.py`: adds TruthfulQA questions and answers after system prompts with functions.
- `text_responses.py`: adds prompts to accustom the model to the presence of function descriptions at the start of prompt sequences.
There are also, some equivalent files for generating a test dataset - to be used for manual evaluation:
- `test_functions/`: contains functions for manual evaluation, different to the training and test set of functions.
- `create_test_datasets.py`: runs `createTestPrompts.py` and `test_text_responses.py`.
- `createTestPrompts.py`: creates data rows to test function calling with and without required arguments provided, as well as one chained function-calling test (e.g. where one function must be called before the other).
- `test_text_responses.py`: generates data rows to test simple prompts (e.g. "Greetings!"), short nonsensical prompts (e.g. "shop"), and a standard question ("What planets are in our solar system?"). |
omarelsayeed/ALG_FULL | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 181948837
num_examples: 69212
download_size: 88983209
dataset_size: 181948837
---
# Dataset Card for "ALG_FULL"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LudditeDrawslave/cookies | ---
license: unknown
---
|
llm-book/ner-wikipedia-dataset | ---
language:
- ja
license:
- cc-by-sa-3.0
size_categories:
- 1K<n<10K
task_categories:
- token-classification
---
# Dataset Card for llm-book/ner-wikipedia-dataset
This is the "Japanese named entity recognition dataset built from Wikipedia" (Version 2.0), created by Stockmark Inc. and used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models).
It is based on the dataset published in the GitHub repository [stockmarkteam/ner-wikipedia-dataset](https://github.com/stockmarkteam/ner-wikipedia-dataset).
### Citation
```bibtex
@inproceedings{omi-2021-wikipedia,
title = "Wikipediaを用いた日本語の固有表現抽出のデータセットの構築",
author = "近江 崇宏",
booktitle = "言語処理学会第27回年次大会",
year = "2021",
url = "https://anlp.jp/proceedings/annual_meeting/2021/pdf_dir/P2-7.pdf",
}
```
### Licence
The dataset is distributed under the CC-BY-SA 3.0 license, the same license as the Japanese edition of Wikipedia.
|
Naveen1224it/Resume_classification | ---
dataset_info:
features:
- name: ID
dtype: int64
- name: Resume_str
dtype: string
- name: Resume_html
dtype: string
- name: Category
dtype: string
splits:
- name: train
num_bytes: 43835582.16223832
num_examples: 1987
- name: validation
num_bytes: 5471174.824476651
num_examples: 248
- name: test
num_bytes: 5493236.013285024
num_examples: 249
download_size: 20310640
dataset_size: 54799993.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Gurveer05/maize-promoter-sequences | ---
tags:
- biology
size_categories:
- 1M<n<10M
---
# Promoter Sequences for Maize NAM lines
This dataset contains the promoter sequences for **26 Maize NAM lines** and was used for the fine-tuning step of [`Florabert`](https://huggingface.co/Gurveer05/FloraBERT). It was created by processing the raw FASTA files and GFF3 files from [`MaizeGDB`](https://www.maizegdb.org/) for the 26 NAM lines.
*samtools* and *bedtools* were used to extract the promoter sequences, defined as the 1 kb regions upstream of each gene.
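As a rough illustration of what "1 kb upstream" means in coordinates, here is a strand-aware sketch of the interval arithmetic (a simplified stand-in for what `bedtools flank` computes; 0-based half-open coordinates, clipped only at the chromosome start):

```python
def promoter_interval(chrom, start, end, strand, upstream=1000):
    """Return a (chrom, start, end) BED-style interval covering the region
    `upstream` bp before the gene's transcription start site."""
    if strand == "+":
        p_start, p_end = max(0, start - upstream), start
    else:  # "-" strand: upstream lies beyond the gene's end coordinate
        p_start, p_end = end, end + upstream
    return chrom, p_start, p_end

print(promoter_interval("chr1", 5000, 8000, "+"))  # ('chr1', 4000, 5000)
print(promoter_interval("chr1", 500, 900, "+"))    # clipped: ('chr1', 0, 500)
print(promoter_interval("chr1", 5000, 8000, "-"))  # ('chr1', 8000, 9000)
```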
The data has been split into train and test sets (70-30 split). In all, there are ~1 million sequences across the files. The steps followed to obtain this data are available in this [`Github Repository`](https://github.com/gurveervirk/florabert). |
Multimodal-Fatima/VizWiz_test | ---
dataset_info:
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: filename
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_type
dtype: string
- name: answerable
dtype: int32
- name: id_image
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
splits:
- name: test
num_bytes: 3995437282.0
num_examples: 8000
download_size: 3977376350
dataset_size: 3995437282.0
---
# Dataset Card for "VizWiz_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saibo/bookcorpus_small_compact_1024 | ---
dataset_info:
features:
- name: text
dtype: string
- name: concept_with_offset
dtype: string
splits:
- name: train
num_bytes: 18843209
num_examples: 1571
download_size: 9378154
dataset_size: 18843209
---
# Dataset Card for "bookcorpus_small_compact_1024"
Num samples: 1,571
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anan-2024/twitter_dataset_1713103808 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 192200
num_examples: 520
download_size: 109302
dataset_size: 192200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jcssafedep/exploit_db_train_v1 | ---
dataset_info:
features:
- name: prompts
struct:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 21288475
num_examples: 5820
download_size: 8554314
dataset_size: 21288475
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/utsumi_erice_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of utsumi_erice/宇津見エリセ/宇津见绘里濑 (Fate/Grand Order)
This is the dataset of utsumi_erice/宇津見エリセ/宇津见绘里濑 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `black_hair, multicolored_hair, streaked_hair, sidelocks, medium_hair, breasts, pink_hair, blue_eyes, large_breasts, ribbon, blue_ribbon, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 720.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/utsumi_erice_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 626.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/utsumi_erice_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1281 | 1.21 GiB | [Download](https://huggingface.co/datasets/CyberHarem/utsumi_erice_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/utsumi_erice_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, bare_shoulders, blush, fundoshi, looking_at_viewer, magatama, necklace, pelvic_curtain, puffy_long_sleeves, seigaiha, short_dress, sideboob, sideless_outfit, solo, spear, thighs, white_dress, white_background, open_mouth, simple_background, collarbone, two-sided_fabric |
| 1 | 5 |  |  |  |  |  | 1girl, bare_shoulders, blush, collarbone, fundoshi, looking_at_viewer, magatama, necklace, night_sky, pelvic_curtain, puffy_long_sleeves, seigaiha, short_dress, sideboob, sideless_outfit, solo, spear, starry_sky, white_dress, ahoge, thighs, two-sided_skirt, open_mouth |
| 2 | 7 |  |  |  |  |  | 1boy, 1girl, blue_jacket, blue_skirt, blush, breasts_out, collared_shirt, hetero, long_sleeves, mosaic_censoring, nipples, open_jacket, penis, spread_legs, thighs, white_shirt, clothed_sex, high-waist_skirt, open_mouth, open_shirt, vaginal, buttons, necktie, collarbone, cum_in_pussy, dress_shirt, grabbing_another's_breast, speech_bubble, clothed_female_nude_male, command_spell, on_side, panties_aside, pillow, sex_from_behind, socks |
| 3 | 24 |  |  |  |  |  | 1girl, white_shirt, collared_shirt, long_sleeves, blue_jacket, necktie, open_jacket, solo, blue_skirt, blush, buttons, looking_at_viewer, high-waist_skirt, dress_shirt, thighs, smile, ahoge, school_uniform, closed_mouth, white_background, cropped_jacket, socks |
| 4 | 6 |  |  |  |  |  | 1girl, blush, collarbone, nipples, looking_at_viewer, solo, upper_body, magatama, nude, navel, open_mouth |
| 5 | 5 |  |  |  |  |  | 1girl, blush, collarbone, looking_at_viewer, solo, bare_shoulders, magatama, navel, open_mouth, thighs, beach, nipples, on_back, outdoors, water, bikini, blue_sky, ocean, see-through, smile, stomach, topless |
| 6 | 32 |  |  |  |  |  | low_twintails, 1girl, short_twintails, black_bikini, looking_at_viewer, blush, short_sleeves, solo, choker, white_shirt, bare_shoulders, off-shoulder_shirt, bikini_under_clothes, smile, baseball_cap, black_shorts, short_shorts, collarbone, see-through, white_headwear, thighs, thigh_strap, open_mouth |
| 7 | 6 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, navel, open_jacket, solo, black_jacket, blush, hooded_jacket, smile, short_shorts, thighs, water, innertube, open_mouth, swim_ring, white_bikini |
| 8 | 32 |  |  |  |  |  | 1girl, bare_shoulders, blue_sailor_collar, white_one-piece_swimsuit, low_twin_braids, sailor_hat, white_headwear, blue_skirt, solo, blush, thighs, looking_at_viewer, double-breasted, armlet, smile, innertube, swim_ring, open_mouth, food |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | blush | fundoshi | looking_at_viewer | magatama | necklace | pelvic_curtain | puffy_long_sleeves | seigaiha | short_dress | sideboob | sideless_outfit | solo | spear | thighs | white_dress | white_background | open_mouth | simple_background | collarbone | two-sided_fabric | night_sky | starry_sky | ahoge | two-sided_skirt | 1boy | blue_jacket | blue_skirt | breasts_out | collared_shirt | hetero | long_sleeves | mosaic_censoring | nipples | open_jacket | penis | spread_legs | white_shirt | clothed_sex | high-waist_skirt | open_shirt | vaginal | buttons | necktie | cum_in_pussy | dress_shirt | grabbing_another's_breast | speech_bubble | clothed_female_nude_male | command_spell | on_side | panties_aside | pillow | sex_from_behind | socks | smile | school_uniform | closed_mouth | cropped_jacket | upper_body | nude | navel | beach | on_back | outdoors | water | bikini | blue_sky | ocean | see-through | stomach | topless | low_twintails | short_twintails | black_bikini | short_sleeves | choker | off-shoulder_shirt | bikini_under_clothes | baseball_cap | black_shorts | short_shorts | white_headwear | thigh_strap | black_jacket | hooded_jacket | innertube | swim_ring | white_bikini | blue_sailor_collar | white_one-piece_swimsuit | low_twin_braids | sailor_hat | double-breasted | armlet | food |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------|:-----------|:--------------------|:-----------|:-----------|:-----------------|:---------------------|:-----------|:--------------|:-----------|:------------------|:-------|:--------|:---------|:--------------|:-------------------|:-------------|:--------------------|:-------------|:-------------------|:------------|:-------------|:--------|:------------------|:-------|:--------------|:-------------|:--------------|:-----------------|:---------|:---------------|:-------------------|:----------|:--------------|:--------|:--------------|:--------------|:--------------|:-------------------|:-------------|:----------|:----------|:----------|:---------------|:--------------|:----------------------------|:----------------|:---------------------------|:----------------|:----------|:----------------|:---------|:------------------|:--------|:--------|:-----------------|:---------------|:-----------------|:-------------|:-------|:--------|:--------|:----------|:-----------|:--------|:---------|:-----------|:--------|:--------------|:----------|:----------|:----------------|:------------------|:---------------|:----------------|:---------|:---------------------|:-----------------------|:---------------|:---------------|:---------------|:-----------------|:--------------|:---------------|:----------------|:------------|:------------|:---------------|:---------------------|:---------------------------|:------------------|:-------------|:------------------|:---------|:-------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | X | | | | | | | | | | | | | X | | | X | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 24 |  |  |  |  |  | X | | X | | X | | | | | | | | | X | | X | | X | | | | | | | X | | | X | X | | X | | X | | | X | | | X | | X | | | X | X | | X | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | X | | X | X | | | | | | | | X | | | | | X | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | X | | X | X | | | | | | | | X | | X | | | X | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 32 |  |  |  |  |  | X | X | X | | X | | | | | | | | | X | | X | | | X | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | X | | X | | | | | | | | | X | | X | | | X | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | | X | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | | | | | | | |
| 8 | 32 |  |  |  |  |  | X | X | X | | X | | | | | | | | | X | | X | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | | X | X | X | X | X | X | X |
|
alx-ai/nogglesonly | ---
license: cc0-1.0
---
|
stsudharsan/veshti-controlnet-v2-sammed-fingers | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_img
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 42872715.0
num_examples: 143
download_size: 42037622
dataset_size: 42872715.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "veshti-controlnet-v2-sammed-fingers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KrishnAI7/autotrain-data-aniaitokenclassification | ---
language:
- en
task_categories:
- token-classification
---
# AutoTrain Dataset for project: aniaitokenclassification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project aniaitokenclassification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"I",
" booked",
"a",
" flight",
"to",
"London."
],
"tags": [
4,
2,
2,
5,
2,
1
]
},
{
"tokens": [
"Apple",
"Inc.",
"is",
"planning",
"to",
"open",
"a",
"new",
"store",
"in",
"Paris."
],
"tags": [
3,
3,
2,
2,
2,
2,
2,
2,
2,
2,
1
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['COMPANY', 'LOC', 'O', 'ORG', 'PER', 'THING'], id=None), length=-1, id=None)"
}
```
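The `tags` field stores integer indices into the `ClassLabel` names above. A small sketch of decoding them back to label strings, using the first sample shown earlier (the helper name `decode_tags` is illustrative):

```python
# Label names as declared in the ClassLabel feature above
LABELS = ["COMPANY", "LOC", "O", "ORG", "PER", "THING"]

def decode_tags(tag_ids):
    """Convert integer tag ids into their string label names."""
    return [LABELS[i] for i in tag_ids]

# First sample from the card: "I booked a flight to London."
print(decode_tags([4, 2, 2, 5, 2, 1]))  # ['PER', 'O', 'O', 'THING', 'O', 'LOC']
```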
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 23 |
| valid | 6 |
|
bigscience-data/roots_ar_wikiversity | ---
language: ar
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_wikiversity
# wikiversity_filtered
- Dataset uid: `wikiversity_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0367 % of total
- 0.1050 % of en
- 0.1178 % of fr
- 0.1231 % of pt
- 0.0072 % of zh
- 0.0393 % of es
- 0.0076 % of ar
- 0.0069 % of indic-hi
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
|
tiagoblima/punctuation-nilc-bert | ---
language: pt
dataset_info:
features:
- name: text_id
dtype: int64
- name: text
dtype: string
- name: level
dtype: string
- name: tokens
sequence: string
- name: labels
sequence: string
splits:
- name: test
num_bytes: 1177684.2701598366
num_examples: 2604
- name: train
num_bytes: 4224993.504240118
num_examples: 9371
- name: validation
num_bytes: 479472.5920696906
num_examples: 1041
download_size: 1802076
dataset_size: 5882150.366469645
---
# Dataset Card for "punctuation-nilc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
presencesw/dataset_2000_complexquestion | ---
dataset_info:
features:
- name: entities
sequence: 'null'
- name: triples
sequence: 'null'
- name: answer
dtype: string
- name: complex_question
dtype: string
splits:
- name: train
num_bytes: 175875
num_examples: 2000
download_size: 80882
dataset_size: 175875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_2000_complexquestion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZhangShenao/0.0001_idpo_same_noreplacerej_decalpha_dataset | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: is_better
dtype: bool
splits:
- name: train_prefs_2
num_bytes: 174377751
num_examples: 20378
- name: test_prefs_2
num_bytes: 16878132
num_examples: 2000
download_size: 106572359
dataset_size: 191255883
configs:
- config_name: default
data_files:
- split: train_prefs_2
path: data/train_prefs_2-*
- split: test_prefs_2
path: data/test_prefs_2-*
---
# Dataset Card for "0.0001_idpo_same_noreplacerej_decalpha_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lollitor/CASFONLYPROTEIN | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: ID
dtype: string
- name: INPUT
dtype: string
splits:
- name: train
num_bytes: 252143
num_examples: 285
download_size: 71507
dataset_size: 252143
---
# Dataset Card for "CASFONLYPROTEIN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falcon96/tsrg | ---
license: openrail
---
|
HumanDynamics/ppo_dataset | ---
dataset_info:
features:
- name: system
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 6643843.271364645
num_examples: 10000
download_size: 2739472
dataset_size: 6643843.271364645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ppo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ilass/OktoberfestFoodDatasetPlus | ---
license: bsd
task_categories:
- object-detection
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset: OktoberfestFoodDatasetPlus
## Dataset Description
- **Homepage: www.ilass.com**
- **Repository: https://github.com/ilassAG/OktoberfestFoodDataset**
- **Paper: https://arxiv.org/abs/1912.05007**
### Dataset Summary
This dataset comprises three categories: drinkServed, foodServed, person.
Part of it consists of real camera footage annotated by hand, while the rest is synthetically generated and annotated data.
A demo space is available to view results after training on the YOLOv8 platform:
https://huggingface.co/spaces/ilass/yolov8_foodServed_drinkServed_Person
### Annotations
#### Annotation process
1000 images were annotated by hand.
1000 person images were sourced from COCO.
3000 images were synthetically produced and annotated.
|
Andyrasika/summary_qa | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 294050.25
num_examples: 420
- name: test
num_bytes: 98016.75
num_examples: 140
download_size: 211064
dataset_size: 392067.0
---
# Dataset Card for "summary_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kheopss/prompt_dataset_hermes | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: response
dtype: string
- name: text
dtype: string
- name: system
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 13301248
num_examples: 1960
download_size: 3856474
dataset_size: 13301248
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-15000 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 14699197081
num_examples: 2500
download_size: 2863062962
dataset_size: 14699197081
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
naorm/malware-text-db-cyner | ---
dataset_info:
features:
- name: Type
dtype: string
- name: Text
dtype: string
- name: Fixed Text
dtype: string
- name: Score
dtype: float64
- name: Original Sentence ID
dtype: int64
- name: Original Sentence
dtype: string
- name: Decoded Sentence
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2110807
num_examples: 5255
download_size: 751269
dataset_size: 2110807
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PeterGraebner/LDNOOBW_V2 | ---
license: cc0-1.0
language:
- af
- az
- am
- be
- bg
- dz
- eu
- my
- ca
- cs
- cy
- hr
- zh
- da
- de
- nl
- el
- en
- eo
- es
- et
- fa
- fi
- fr
- gl
- gd
- hi
- hy
- hu
- id
- is
- it
- ja
- ko
- la
- lt
- lv
- mi
- mk
- ml
- ms
- mt
- mr
- mn
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sm
- sq
- te
- ta
- to
- tr
- uk
- uz
- vi
- yid
- zu
pretty_name: List of Dirty Naughty Obscene and Otherwise Bad Words V2
size_categories:
- 10K<n<100K
---
## [List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words_V2](https://github.com/LDNOOBWV2/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words_V2#list-of-dirty-naughty-obscene-and-otherwise-bad-words_v2)
This list of words is a follow-up to and extension of the Shutterstock [List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/tree/master), as that list is no longer maintained. Since there are many profanity word lists around on the web (many of them unmaintained), their contents were gathered and merged here (see the source list below).
Opinions on which words belong in such lists vary across cultures, languages, and geographies, so feel free to extend the lists to your needs; feedback is very welcome.
The lists need reviews from native speakers. It would be great to collect more words and add more languages (**75** right now, with over **50k words** altogether).
The long list of English words shows how creative people get at evading profanity filters. The best way to use these hard-coded word lists is as an additional quality criterion for filtering texts, as is done in the [RedPajama](https://github.com/togethercomputer/RedPajama-Data) dataset, or as training material for ML-based profanity filters.
### Structure and Format
- filename is the **iso-code** of the country
- file extension is **".txt"**
- **utf-8** encoded
- all words are **lowercase**
- one expression per line
- if the language has non-ASCII characters, a transcription generated with Python's "anyascii" is included in the wordlist
- for leet speak there are Python lists of the most common leet replacements in naughty and slang words; see LEET.md for details
- for English and French there are wordlists with these replacements already applied
- all words contained in the English "***en.txt***" file are **excluded** from the other language files
- for frequently used words whose classification as profane is doubtful, there is a separate CSV file
- the csv-file is: [questionable_international_words.csv](questionable_international_words.csv)
- separator is the comma "**,**"
- **51** words for several languages (see table below)
- the header line contains the iso-code of the language, a classification column (*category*), and a *remark* column
- these words are **NOT** included in the language-text-files, e.g. "*.txt"
- when I couldn't find a translation, the field contains the string: **<NO_TRANSLATION>**
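As a minimal sketch of how a list in the format above (UTF-8, lowercase, one expression per line) might be used as a filtering criterion, assuming the filtering approach described for RedPajama-style pipelines; the words below are placeholders, not entries from the actual lists:

```python
def load_wordlist(path):
    """Read a wordlist file in the format described above:
    UTF-8 encoded, lowercase, one expression per line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def bad_word_ratio(text, wordlist):
    """Fraction of whitespace-separated tokens found in the wordlist."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(tok in wordlist for tok in tokens) / len(tokens)

# Placeholder entries standing in for a real language file such as en.txt.
wordlist = {"badword", "slur"}
print(bad_word_ratio("this BADWORD example", wordlist))  # → 0.333… (1 of 3 tokens matched)
```

A text could then be dropped or flagged whenever the ratio exceeds a chosen threshold.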
### Languages Files Overview
language | count | filename | in csv-file | remark
--- | --- | --- | --- | ---
[Afrikaans](data/af.txt) | 256 | af | Y|
[Albanian](data/sq.txt) | 223 | sq | Y|
[Algerian](data/dz.txt) | 86 | dz | N|
[Amharic](data/am.txt) | 71 | am | N|
[Arabic](data/ar.txt) |1609 | ar | N|
[Armenian](data/hy.txt) | 440 | hy | Y|
[Australian Kriol](data/rop.txt) | 16 | rop| N|
[Azerbaijanian](data/az.txt) | 37 | az | N|
[Basque](data/eu.txt) | 48 | eu | N|
[Belorussian](data/be.txt) | 236 | be | N|
[Bulgarian](data/bg.txt) | 535 | bg | Y|
[Burmese](data/my.txt) | 133 | my | N|
[Cambodian](data/kh.txt) | 264 | kh | N|
[Catalan](data/ca.txt) | 163 | ca | Y|
[Cebuano](data/ceb.txt) | 18 | ceb| N|
[Chinese](data/zh.txt) |3090 | zh | Y|
[Croatian](data/hr.txt) | 275 | hr | Y|
[Czech](data/cs.txt) | 343 | cs | Y|
[Danish](data/da.txt) | 227 | da | Y|
[Dutch](data/nl.txt) |1224 | nl | Y|
[English](data/en.txt) |12996| en | Y| various spelling variations, does not contain Spanish (es) words
[English](data/en_leet.txt) |12532| en | Y| version with replaced leet letters, see LEET.md
[Esperanto](data/eo.txt) | 60 | eo | N|
[Estonian](data/et.txt) | 203 | et | Y|
[Filipino](data/fil.txt) | 165 | fil| Y|
[Finnish](data/fi.txt) | 368 | fi | Y|
[French](data/fr.txt) |4056 | fr | Y| many spelling variations
[French](data/fr_leet.txt) |2380 | fr | Y| version with replaced leet letters, see LEET.md
[Gaelic](data/gd.txt) | 105 | gd | N|
[Galician](data/gl.txt) | 89 | gl | N|
[German](data/de.txt) | 685 | de | Y|
[Greek](data/el.txt) | 417 | el | Y|
[Hebrew](data/yid.txt) | 173 | yid| N|
[Hindi](data/hi.txt) |1102 | hi | Y|
[Hungarian](data/hu.txt) | 433 | hu | Y|
[Icelandic](data/is.txt) | 208 | is | Y|
[Italian](data/it.txt) |1710 | it | Y|
[Indonesian](data/id.txt) | 582 | id | Y|
[Japanese](data/ja.txt) | 783 | ja | Y|
[Kabyle](data/kab.txt) | 31 | kab| N|
[Klingon](data/tlh.txt) | 33 | tlh| N|
[Korean](data/ko.txt) |6125 | ko | Y|
[Latin](data/la.txt) | 103 | la | N|
[Latvian](data/lv.txt) | 280 | lv | Y|
[Lithuanian](data/lt.txt) | 211 | lt | Y|
[Macedonian](data/mk.txt) | 294 | mk | N|
[Malay](data/ms.txt) | 201 | ms | Y|
[Malayalam](data/ml.txt) | 338 | ml | Y|
[Maltese](data/mt.txt) | 132 | mt | Y|
[Maori](data/mi.txt) | 75 | mi | Y|
[Marathi](data/mr.txt) | 453 | mr | Y|
[Mongolian](data/mn.txt) | 164 | mn | N|
[Norwegian](data/no.txt) | 341 | no | Y|
[Persian](data/fa.txt) |1128 | fa | N|
[Pitcairn-Norfolk](data/pih.txt) | 14 | pih| N|
[Piya-Kwonci](data/piy.txt) | 13 | piy| N|
[Polish](data/pl.txt) |12639 | pl | Y| different grammatical variations
[Portuguese](data/pt.txt) | 629 | pt | Y| including Brazilian Portuguese
[Romanian](data/ro.txt) | 341 | ro | Y|
[Russian](data/ru.txt) |9569 | ru | Y|
[Samoan](data/sm.txt) | 116 | sm | Y|
[Serbian](data/sr.txt) | 459 | sr | Y| sr_k & sr_l in csv file
[Slovak](data/sk.txt) | 586 | sk | Y|
[Slovene](data/sl.txt) | 186 | sl | Y|
[Spanish](data/es.txt) |1804 | es | Y| including Central and South American variants
[Swedish](data/sv.txt) | 304 | sv | Y|
[Tamil](data/ta.txt) | 143 | ta | N|
[Telugu](data/te.txt) | 509 | te | Y|
[Tetum](data/tet.txt) | 11 | tet| N|
[Thai](data/th.txt) |4377 | th | Y|
[Tongan](data/to.txt) | 68 | to | N|
[Turkish](data/tr.txt) | 491 | tr | Y|
[Ukrainian](data/uk.txt) | 377 | uk | Y|
[Uzbek](data/uz.txt) | 102 | uz | N|
[Vietnamese](data/vi.txt) |1031 | vi | Y|
[Welsh](data/cy.txt) | 169 | cy | Y|
[Zulu](data/zu.txt) | 115 | zu | N|
### Categories in *questionable_international_words.csv*
The categories used are:
- **cul**: cultural differences
- **dm**: drugs & medicine
- **his**: historical
- **leg**: Legislative term
- **mab**: medical, anatomic, biological term
- **pol**: political
- **rel**: religious
- **so**: sexual orientation
- **vm**: various meanings
This is just an ad hoc classification; several expressions could fall into more than one category. |
arieg/bw_spec_cls_4_18_s_200 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1666'
'1': '1673'
'2': '1680'
'3': '1681'
splits:
- name: train
num_bytes: 46542294.0
num_examples: 800
- name: test
num_bytes: 1182286.0
num_examples: 20
download_size: 41914749
dataset_size: 47724580.0
---
# Dataset Card for "bw_spec_cls_4_18_s_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ovior/twitter_dataset_1713013495 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2542952
num_examples: 7554
download_size: 1457985
dataset_size: 2542952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FarAwayFer/alpaca_es_far | ---
license: apache-2.0
---
|
AntoineBlanot/xnli-es | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label_name
dtype: string
splits:
- name: train
num_bytes: 84786708
num_examples: 392702
- name: test
num_bytes: 500002
num_examples: 2490
download_size: 53283432
dataset_size: 85286710
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
deepapaikar/Katzbot_final_train_test_QA_Pairs | ---
license: apache-2.0
---
|
sumit077/guanaco-llama2-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966692
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_maximuslee07__llama-2-7b-rockwell-final | ---
pretty_name: Evaluation run of maximuslee07/llama-2-7b-rockwell-final
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [maximuslee07/llama-2-7b-rockwell-final](https://huggingface.co/maximuslee07/llama-2-7b-rockwell-final)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_maximuslee07__llama-2-7b-rockwell-final\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-23T20:01:16.627369](https://huggingface.co/datasets/open-llm-leaderboard/details_maximuslee07__llama-2-7b-rockwell-final/blob/main/results_2023-10-23T20-01-16.627369.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03460570469798658,\n\
\ \"em_stderr\": 0.001871827675399587,\n \"f1\": 0.100573615771812,\n\
\ \"f1_stderr\": 0.002343392042876464,\n \"acc\": 0.3819496844432025,\n\
\ \"acc_stderr\": 0.010259509540838537\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.03460570469798658,\n \"em_stderr\": 0.001871827675399587,\n\
\ \"f1\": 0.100573615771812,\n \"f1_stderr\": 0.002343392042876464\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07960576194086429,\n \
\ \"acc_stderr\": 0.007455924338676263\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6842936069455406,\n \"acc_stderr\": 0.01306309474300081\n\
\ }\n}\n```"
repo_url: https://huggingface.co/maximuslee07/llama-2-7b-rockwell-final
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|arc:challenge|25_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_23T20_01_16.627369
path:
- '**/details_harness|drop|3_2023-10-23T20-01-16.627369.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-23T20-01-16.627369.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_23T20_01_16.627369
path:
- '**/details_harness|gsm8k|5_2023-10-23T20-01-16.627369.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-23T20-01-16.627369.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hellaswag|10_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T01-33-36.813954.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T01-33-36.813954.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T01-33-36.813954.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_23T20_01_16.627369
path:
- '**/details_harness|winogrande|5_2023-10-23T20-01-16.627369.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-23T20-01-16.627369.parquet'
- config_name: results
data_files:
- split: 2023_10_04T01_33_36.813954
path:
- results_2023-10-04T01-33-36.813954.parquet
- split: 2023_10_23T20_01_16.627369
path:
- results_2023-10-23T20-01-16.627369.parquet
- split: latest
path:
- results_2023-10-23T20-01-16.627369.parquet
---
# Dataset Card for Evaluation run of maximuslee07/llama-2-7b-rockwell-final
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/maximuslee07/llama-2-7b-rockwell-final
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [maximuslee07/llama-2-7b-rockwell-final](https://huggingface.co/maximuslee07/llama-2-7b-rockwell-final) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maximuslee07__llama-2-7b-rockwell-final",
"harness_winogrande_5",
	split="latest")
```
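Run-specific split names are the run timestamps (e.g. `2023_10_04T01_33_36.813954` above), which can be parsed and compared as datetimes. A minimal sketch (a hypothetical helper, not part of the leaderboard tooling) for resolving the most recent run split yourself:

```python
from datetime import datetime

def latest_run_split(split_names):
    """Return the most recent timestamp-named split.

    Split names follow the pattern YYYY_MM_DDTHH_MM_SS.ffffff
    (see the config section above); "latest" is an alias, so it is
    skipped and the remaining names are compared as datetimes.
    """
    stamps = [s for s in split_names if s != "latest"]
    return max(stamps, key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"))

print(latest_run_split(["2023_10_04T01_33_36.813954", "latest",
                        "2023_10_23T20_01_16.627369"]))
# 2023_10_23T20_01_16.627369
```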
## Latest results
These are the [latest results from run 2023-10-23T20:01:16.627369](https://huggingface.co/datasets/open-llm-leaderboard/details_maximuslee07__llama-2-7b-rockwell-final/blob/main/results_2023-10-23T20-01-16.627369.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.03460570469798658,
"em_stderr": 0.001871827675399587,
"f1": 0.100573615771812,
"f1_stderr": 0.002343392042876464,
"acc": 0.3819496844432025,
"acc_stderr": 0.010259509540838537
},
"harness|drop|3": {
"em": 0.03460570469798658,
"em_stderr": 0.001871827675399587,
"f1": 0.100573615771812,
"f1_stderr": 0.002343392042876464
},
"harness|gsm8k|5": {
"acc": 0.07960576194086429,
"acc_stderr": 0.007455924338676263
},
"harness|winogrande|5": {
"acc": 0.6842936069455406,
"acc_stderr": 0.01306309474300081
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Myccel0t/verify | ---
license: cc-by-nc-4.0
---
|
davidfant/natural-questions-chunk-21 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4588320501
num_examples: 10000
download_size: 1786342885
dataset_size: 4588320501
---
# Dataset Card for "natural-questions-chunk-21"
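The `start_byte`/`end_byte` fields in the schema index into the UTF-8 bytes of `document.html`, so answer spans can be recovered by byte slicing. A minimal sketch with a toy record (the HTML and offsets here are made-up stand-ins; real rows come from `datasets.load_dataset("davidfant/natural-questions-chunk-21")`):

```python
record = {  # toy stand-in shaped like the schema above
    "document": {"html": "<p>Paris is the capital of France.</p>"},
    "annotations": {
        "short_answers": [{"start_byte": [3], "end_byte": [8], "text": ["Paris"]}]
    },
}

def short_answer_spans(record):
    # Offsets are byte positions into document.html, so slice bytes,
    # not characters, before decoding back to a string.
    html = record["document"]["html"].encode("utf-8")
    sa = record["annotations"]["short_answers"][0]
    return [html[s:e].decode("utf-8") for s, e in zip(sa["start_byte"], sa["end_byte"])]

print(short_answer_spans(record))  # ['Paris']
```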
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joachimsallstrom/mjportraits_smoothend_sharpened | ---
license: creativeml-openrail-m
---
|
thomascuddihy/hrw_test_multiclass_flagged_data | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
yatharth2307/Haiku_infgen_2 | ---
license: wtfpl
---
|
kunishou/J-ResearchCorpus | ---
license: other
license_name: mixed-license
license_link: LICENSE
language:
- ja
---
# J-ResearchCorpus
**Update:**
- 2024/3/16
Added data from 1,343 papers, including the proceedings of the 30th Annual Meeting of the Association for Natural Language Processing (NLP2024)
- 2024/2/25
Added data from 360 papers published under CC-BY-4.0 in the society's journal "Journal of Natural Language Processing"
## Overview
- A **dataset of high-quality text** excerpted from Japanese papers, society journals, and similar sources published under CC-BY-* licenses. Please use it for language-model pretraining, RAG, and related applications.
- More CC-BY-* licensed Japanese papers will be added in the future as they become available.
## Data Description
- filename : file name of the record
- text : text extracted from the Japanese paper
- category : data source
- license : license
- credit : credit
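With these fields, records can be filtered by data source or license. A minimal sketch over toy records shaped like the fields above (loading the real data would use `datasets.load_dataset("kunishou/J-ResearchCorpus")`):

```python
records = [  # toy stand-ins; real rows come from the Hugging Face dataset
    {"filename": "a.md", "text": "...", "category": "NLP2024",
     "license": "cc-by-4.0", "credit": "..."},
    {"filename": "b.md", "text": "...", "category": "journal",
     "license": "cc-by-4.0", "credit": "..."},
]

def by_category(records, category):
    """Keep only records from the given data source."""
    return [r for r in records if r["category"] == category]

print([r["filename"] for r in by_category(records, "NLP2024")])  # ['a.md']
```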
## Data Sources and Licenses
- **Total text length: about 39 million characters**
|data source|num records|license|note|
|:----|:----|:----|:----|
|Proceedings of the Annual Meetings of the Association for Natural Language Processing|1,924|cc-by-4.0|Papers from 2021 to 2024 (according to the society, papers from 2020 and earlier are not CC-BY-4.0)|
|Journal of Natural Language Processing (the society's journal)|363|cc-by-4.0|Papers from 2009 onward that are published under CC-BY-4.0|
|Journal of Tokyo Women's Medical University|96|cc-by-4.0| |
|Japanese Journal of Risk Analysis (Society for Risk Analysis, Japan)|100|cc-by-4.0| |
|Journal of the Thermoelectrics Society of Japan|11|cc-by-4.0| |
|Journal of the Japan Society for Digital Archive|744|cc-by-4.0| |
## Text Extraction Example
Text is extracted as in the example below, kept in the original Japanese since it is a verbatim sample of the corpus (viewed in VS Code's Markdown preview, the formulas also render cleanly).
**<details><summary>Show</summary><div>**
# ニューラル機械翻訳における Iterative Back-Translation を利用した コンパラブルコーパスの活用
山本 優紀 秋葉 友良 塚田 元
豊橋技術科学大学
\{yamamoto.yuki.pr, akiba.tomoyoshi.tk, tsukada.hajime.hl\}@tut.jp
## 概要
ニューラル機械翻訳 (NMT) の学習に用いる対訳 コーパスの構築法として, 文書単位で対応付けられ た 2 つの言語のコーパス (コンパラブルコーパス) から、対応付けられる文ペアを自動的に抽出する 手法が広く採用されている. しかし, 文単位で意味 が対応するものは少なく,多くの文は抽出されず捨 てられてしまう. 本研究では、対訳コーパスとし て抽出されなかった文を含めて,コンパラブルコー パス全体を NMT の学習に活用する手法を提案す る. 評価実験により, コンパラブルコーパスでデータ 拡張を行うことや, コンパラブル性の利用, Iterative Back-Translation の活用によって翻訳モデルの性能が 向上することを確認した.
## 1 はじめに
機械翻訳の分野では, 深層学習の発達により, ニューラルネットワークを用いるニューラル機械翻訳 (Neural Machine Translation:NMT) が, 従来手法の統計的機械翻訳よりも高い性能を示しており, 様々な 研究が行われている. NMT では, ニューラルネット ワークで構築した翻訳モデルを, 翻訳元の言語 (原言語) の文と,その訳の言語 (目的言語) の文のぺアにし た対訳コーパスを用いて学習を行う. NMT は, 対訳 コーパスから翻訳に関わる様々な知識を学習するた め, 対訳コーパスの質や量が NMT モデルの翻訳性能 に大きく影響する.しかし, 大規模な対訳コーパスを 人手で作成することは困難という問題点がある.
この問題の解決策として, 既存の日本語と英語の 翻訳テキストから対訳コーパスを構築する手法が提案されている.[1]これは, 新聞などの文書単位で対応付けつけられた 2 つの言語コーパス (コンパラブ ルコーパス) から, 対応付けられる文ぺアを自動的
に抽出することで対訳コーパスを構築する方法で ある. しかし,コンパラブルコーパスの中で文単位 で意味が対応するものは少なく,多くの文は抽出さ れずに捨てられてしまう. 実際, 本論文で使用した PatentMT の調査では 1 つの文書から平均約 $27.1 \%$ の文しか抽出されていなかった.
本研究では, 対訳コーパスとして抽出されなかっ た文を含めて,コンパラブルコーパス全体を NMT の 学習に活用する手法を提案する. データ拡張手法と して, 逆翻訳 (Back-Translation:BT)[2] や, その拡張手法である Iterative Back-Translation (IBT)[3][4][5] を利用することで,より効果的なデータ拡張手法を探す. さらに, 上記の手法をコンパラブルコーパスのコン パラブル性を活用して行い, その効果を調べる.
## 2 提案手法
## 2.1 コンパラブルコーパスの再現
本研究では, 対訳コーパスの抽出元であるコン パラブルコーパスを翻訳モデル学習に活用するこ とを目的とする. しかし, 実験で用いる NTCIR-10 PatentMT[6] のコンパラブルコーパスを直接入手す ることができなかったため, 以下の方法で対訳コー パスからコンパラブルコーパスを再現した.
1. $C=\{\}$ と初期化する.
2. 対訳コーパス $P$ の各文ペア $(x, y) \in P$ について 以下を繰り返す。
$2.1 x$ と $y$ の抽出元の文書である $D_{x}$ と $D_{y}$ を特定する。
2.2 特定した $D_{x}$ と $D_{y}$ を文書ペア $\left(D_{x}, D_{y}\right)$ と し, $C$ に $C \leftarrow C \bigcup\left.\{\left(D_{x}, D_{y}\right)\right.\}$ と追加する.
最終的にコンパラブルコーパス $C=$ $\bigcup_{(x, y) \in P}\left.\{\left(D_{x}, D_{y}\right)\right.\}$ が得られる.
## 2.2 データ拡張手法
節 2.1 で構築したコンパラブルコーパスを利用 して, データ拡張を行う. 本研究では, 4 つの手法で データ拡張実験を行い, 比較を行うことで, より効果的なコンパラブルコーパスの活用方法を模索する.
## 2.2.1 Back-Translation
逆翻訳手法 (Back-Translation:BT) は, Sennrich ら [2] の提案した手法である. BT の流れを図 1 に示す. 図 1 では, 言語 $X$ から言語 $Y$ の翻訳モデルの構築 を考えている. はじめに, 対訳コーパスを利用して $Y \rightarrow X$ 方向の翻訳モデル Model $_{Y \rightarrow X} 0$ を作成する.次に,このモデルを用いて, 単言語コーパス $C_{Y}$ mono からサンプリングして得たサブセット $\hat{C}_{Y}$ mono を 逆翻訳し, 翻訳結果 $\hat{C}_{X}^{\prime}$ mono を得る. 翻訳結果と元 の単言語コーパスを組み合わせて疑似対訳コーパ ス ( $\hat{C}_{X}^{\prime}$ mono, $\hat{C}_{Y}$ mono $)$ を構築する. 構築した疑似対訳コーパスと対訳コーパスを混合し, 言語 $X$ から 言語 $Y$ の翻訳モデル Model $_{X \rightarrow Y} 1$ を学習する. 以上 が BT の流れである. 本研究では, 構築したコンパ ラブルコーパス $C=\bigcup_{(x, y) \in P}\left.\{\left(D_{x}, D_{y}\right)\right.\}$ の Y 言語側 $C_{Y}=\bigcup_{(x, y) \in P}\left.\{D_{y}\right.\}$ を単言語コーパスとすることで BTを利用する。
図 1 Back Translation
## 2.2.2 Iterative Back-Translation
Iterative Back-Translation(IBT) は, 原言語の単言語 コーパスと目的言語の単言語コーパスを用いて, BT を双方向かつ反復的に繰り返す手法である. IBT の 流れを図 2 に示す. 図では, 言語 $X$ と言語 $Y$ におけ る IBT の流れを示している. IBT は以下のようにし てモデルを学習する。
1. 対訳コーパスを用いて, $X \rightarrow Y, Y \rightarrow X$ の各方向 の翻訳モデル Model $_{X \rightarrow Y} 0$, Model $_{Y \rightarrow X} 0$ を学習 し, $i \leftarrow 0$ に初期化する.
2. 以下の手順で Model $_{X \rightarrow Y} i$ を更新する.
2.1 Model $_{Y \rightarrow X} i$ で単言語コーパス $C_{Y}$ mono からサンプリングして得たサブセッ ト $\hat{C}_{Y}$ mono を翻訳し, 疑似対訳コーパス ( $\hat{C}_{X}^{\prime}$ mono, $\hat{C}_{Y}$ mono) を得る.
2.2疑似対訳コーパス ( $\hat{C}_{X}^{\prime}$ mono, $\hat{C}_{Y}$ mono) と対訳コーパス $\left(C_{X}, C_{Y}\right)$ を結合し, $\operatorname{Model}_{X \rightarrow Y} i$ を fine-tuning し, $\operatorname{Model}_{X \rightarrow Y}(i+1)$ を学習 する。
3. ステップ 2 と同様に Model $_{Y \rightarrow X} i$ を更新する.
4. $i \leftarrow i+1$ としてステップ 2 に戻る.
本研究では, BT と同じように, 構築したコンパラブ ルコーパスを, 単言語コーパスとすることでIBT を 利用する。
図 2 Iterative Back-Translation
表 1 実験に使用したコーパスサイズ
## 2.2.3コンパラブル性を利用した IBT
コンパラブル性を利用した IBT では, 構築したコ ンパラブルコーパスが文書単位で対応付けられてい ることを利用して, IBT に利用する両言語の単言語 コーパスをコンパラブルになるように選択する方法 である. 具体的には, IBT のステップ 2.1 および 3.1 で 単言語コーパスから $\hat{C}_{X}$ mono および $\hat{C}_{Y}$ mono をサン プリングする際, $\hat{C}_{X}$ mono と $\hat{C}_{Y}$ mono が互いにコン パラブルになるように選ぶ. すなわち, 指定されたサ ンプリングサイズを満たすように最小限のコンパラ ブルコーパスのサブセット $C_{s u b}=\left.\{\left(D_{X}, D_{Y}\right)\right.\} \subset C$ をサンプリングして, $\hat{C}_{X}$ mono $\subseteq \cup_{\left(D_{X}, D_{Y}\right) \in C_{\text {sub }}}\left.\{D_{X}\right.\}$ および $\hat{C}_{Y}$ mono $\subseteq \cup_{\left(D_{X}, D_{Y}\right) \in C_{\text {sub }}}\left.\{D_{Y}\right.\}$ のように単言語コーパスを選択する。
## 3 評価実験
## 3.1 データセット
本研究では, 使用する大規模なコーパスとして 特許機械翻訳テストコレクションである NTCIR 10 PatentMT[6] を使用した. PatentMT は特許文書から文 を抽出することで構築されている対訳コーパスであ る. PatentMT の対訳コーパスから, 2.1 節の方法でコ ンパラブルコーパスを構築した. このとき,数式を含 む文や長い文を除いた. 使用した対訳コーパスと構築したコンパラブルコーパスのサイズを表 1 に示す.
また, PatentMT の対訳コーパスと構築したコンパ ラブルコーパスの関係を調査した. コンパラブル コーパスの全文書は 66,414 文書である. このうちの 20,485 文書は, 文書内の $10 \%$ 以下の文しか対訳コー パスとして抽出されていないことがわかった. また,構築したコンパラブルコーパスを利用することで,約 67\%の文を新しく学習に使用することができるこ とがわかった.表 2 コンパラブルコーパスの効果確認実験の結果
## 3.2 データセットの前処理
前処理として英語文, 日本語文ともに NFKC 正規化を行った. また, 英語文は Moses[7] に付属する トークナイザーと truecaser でトークナイズ大文字小文字の表記を統一した. 学習前の事前処理として, SentencePiece[8] で語彙サイズを 16,000 でサブワー ド化を行った.
## 3.3 ニューラル機械翻訳のパラメータ
NMT システムには Fairseq[9] の Transformer を使用した. エンコーダー及びデコーダは Transformer を 6 層とした. 学習率は 5e-4 とし, Warmup は 4000 ス テップ, dropout は 0.1 としている. 損失関数は, ラべ ル平滑化クロスエントロピーを使用した. 最適化関数は Adam を利用し, パラメータである $\beta_{1}$ を $0.9, \beta_{2}$ を 0.98 に設定した。
## 3.4 コンパラブルコーパスの効果
今回構築したコンパラブルコーパスの効果を確認 するための実験を行った. PatentMT の対訳コーパス のみで学習した翻訳モデルと,コンパラブルコーパ スを利用してデータ拡張を行った翻訳モデルを比較 する。
ベースラインは, PatentMT の対訳コーパスのみで 学習したものを利用した. コンパラブルコーパスを 利用した翻訳モデルは, ベースラインに加え, 全ての コンパラブルコーパスを利用したものと,対訳コー パスと同サイズである $3,186,254$ 文をコンパラブル コーパスから抽出したものの 2 つで実験を行った. ベースラインを利用してそれぞれ BTを行い, デー 夕拡張して学習を行った. ベースラインは 20epoch, コンパラブルコーパスを利用した翻訳モデルはどち らも 10epoch の学習を行った. 評価尺度は BLEU[10] を用いる。また, NTCIR-10 のベスト翻訳モデルとも 比較を行った。
コンパラブルコーパスの効果確認の実験結果を表
表 3 翻訳モデルの BLEU
2 に示す. なお, 表 2 のサイズは, 左が対訳コーパス の使用文数, 右が単言語コーパスの使用文数となっ ている.
コンパラブルコーパスを利用した 2 つの結果が ベースラインを上回ったことから,これまで利用さ れていなかったコンパラブルコーパスを活用するこ との有効性を示している. また, NTCIR-10 のベスト 翻訳モデルと BLEU を比較すると, BLEU を大きく 上回っており, 本実験で作成された翻訳モデルは十分な性能があるといえる.
## 3.5 データ拡張手法の比較
節 2.2 で説明した BT, IBT, コンパラブル性を利用 したIBT の 3 つの手法で実験を行い, データ拡張手法の比較を行った. データ拡張は学習データのサイ ズが少ないほど効果が見られるため, 学習に使用す るデータ数を減らして実験を行った. ベースライン は対訳コーパスを 10 万文使用して学習を行った. 提案手法である 3 つのデータ拡張手法では, ベースラ インに加え, 10 万文ずつコンパラブルコーパスから サンプリングし, データ拡張を行い, モデルを更新し た. モデルの更新後, 新たに 10 万文をコンパラブル コーパスからサンプリングし, 対訳コーパスと混合 してデータ拡張を行う. これを繰り返すことで, モデ ルの更新を進める. モデルの更新は 3 手法とも 5 回行った. 比較は, 開発データで最も高い BLEU スコア のモデルで比較を行った.
To compare the data augmentation methods, we ran the three approaches: BT, IBT, and IBT exploiting comparability. The training results of the translation models are shown in Table 3, where the left training-data size is the number of parallel-corpus sentences used and the right size is the number of monolingual-corpus sentences used. The BLEU scores in bold indicate the models that achieved the highest BLEU on the development data. Comparing the BLEU of each method in the English-to-Japanese direction, IBT exploiting comparability performs best, followed by IBT; in the Japanese-to-English direction, likewise, IBT exploiting comparability performs best, followed by IBT. IBT achieves higher BLEU than BT, and IBT exploiting comparability achieves higher BLEU than both BT and IBT without comparability.
## 4 Conclusion
By performing data augmentation with a comparable corpus that includes the sentences not extracted into the parallel corpus, the performance of the translation models improved, confirming the effectiveness of exploiting comparable corpora that had previously gone unused. As ways of exploiting the comparable corpus, we also confirmed the effectiveness of using IBT and the benefit of giving the monolingual corpora used comparability.
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Number 18H01062.
## References
[1] 内山将夫. Efficient methods for constructing parallel data. Quarterly Report of the National Institute of Information and Communications Technology, Vol. 58, pp. 37-43, 2012. (in Japanese)
[2] Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 86-96, 2016.
[3] Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. Iterative back-translation for neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pp. 18-24, 2018.
[4] Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. Joint training for neural machine translation models with monolingual data. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 555-562, 2018.
[5] 森田知熙, 秋葉友良, 塚田元. A study of unsupervised adaptation of neural machine translation using bidirectional back-translation. IPSJ SIG Technical Report 2018-NL-238 (5th Natural Language Processing Symposium), pp. 1-5, 2018. (in Japanese)
[6] Isao Goto, Ka Po Chow, Bin Lu, Eiichiro Sumita, and Benjamin K. Tsou. Overview of the patent machine translation task at the NTCIR-10 workshop. Proceedings of the 10th NTCIR Conference, pp. 260-286, 2013.
[7] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pp. 177-180, 2007.
[8] Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66-71, 2018.
[9] Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
[10] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318, 2002.
|
limajean/audiojk007 | ---
license: openrail
---
|
CyberHarem/xuanzang_sanzang_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of xuanzang_sanzang/玄奘三蔵/玄奘三藏 (Fate/Grand Order)
This is the dataset of xuanzang_sanzang/玄奘三蔵/玄奘三藏 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `long_hair, breasts, brown_hair, large_breasts, hair_between_eyes, earrings, hoop_earrings, hat, purple_eyes, black_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 662.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/xuanzang_sanzang_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 580.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/xuanzang_sanzang_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1224 | 1.09 GiB | [Download](https://huggingface.co/datasets/CyberHarem/xuanzang_sanzang_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/xuanzang_sanzang_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, bead_necklace, cleavage, looking_at_viewer, prayer_beads, smile, solo, white_bikini, blush, white_thighhighs, navel, open_mouth, thighs, bare_shoulders, collarbone, red_eyes, sitting, very_long_hair |
| 1 | 6 |  |  |  |  |  | 1girl, bikini_top_only, cleavage, necklace, prayer_beads, smile, solo, looking_at_viewer, white_bikini |
| 2 | 5 |  |  |  |  |  | 1girl, bead_necklace, bikini_top_only, cleavage, looking_at_viewer, prayer_beads, solo, white_bikini, blush, upper_body, grin, brown_eyes, white_background |
| 3 | 7 |  |  |  |  |  | 1girl, bare_shoulders, bead_necklace, cleavage, prayer_beads, smile, solo, looking_at_viewer, purple_bikini, simple_background, white_background, gourd, blush |
| 4 | 5 |  |  |  |  |  | 1boy, 1girl, bead_necklace, hetero, paizuri, penis, prayer_beads, solo_focus, breasts_squeezed_together, looking_at_viewer, nipples, red_eyes, sweat, male_pubic_hair, bar_censor, ejaculation, grin, huge_breasts, nose_blush, open_mouth, white_background |
| 5 | 8 |  |  |  |  |  | 1boy, 1girl, bead_necklace, blush, hetero, navel, prayer_beads, nipples, penis, thighhighs, thighs, mosaic_censoring, purple_bikini, spread_legs, sweat, open_mouth, sex, solo_focus, vaginal, looking_at_viewer, bare_shoulders, bridal_gauntlets, cum_in_pussy, detached_sleeves, nude, on_back, purple_headwear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bead_necklace | cleavage | looking_at_viewer | prayer_beads | smile | solo | white_bikini | blush | white_thighhighs | navel | open_mouth | thighs | bare_shoulders | collarbone | red_eyes | sitting | very_long_hair | bikini_top_only | necklace | upper_body | grin | brown_eyes | white_background | purple_bikini | simple_background | gourd | 1boy | hetero | paizuri | penis | solo_focus | breasts_squeezed_together | nipples | sweat | male_pubic_hair | bar_censor | ejaculation | huge_breasts | nose_blush | thighhighs | mosaic_censoring | spread_legs | sex | vaginal | bridal_gauntlets | cum_in_pussy | detached_sleeves | nude | on_back | purple_headwear |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:-----------|:--------------------|:---------------|:--------|:-------|:---------------|:--------|:-------------------|:--------|:-------------|:---------|:-----------------|:-------------|:-----------|:----------|:-----------------|:------------------|:-----------|:-------------|:-------|:-------------|:-------------------|:----------------|:--------------------|:--------|:-------|:---------|:----------|:--------|:-------------|:----------------------------|:----------|:--------|:------------------|:-------------|:--------------|:---------------|:-------------|:-------------|:-------------------|:--------------|:------|:----------|:-------------------|:---------------|:-------------------|:-------|:----------|:------------------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | X | X | X | X | X | X | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | X | | | | | | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | | X | | | | | X | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | | X | X | | | | | | | X | | | | X | | | | | | X | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | X | | X | X | | | | X | | X | X | X | X | | | | | | | | | | | X | | | X | X | | X | X | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
azz1990/myTest | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
- ny
tags:
- biology
- sfd
- abc
pretty_name: testAb
size_categories:
- 1K<n<10K
--- |
open-llm-leaderboard/details_khoantap__wizard-limarp | ---
pretty_name: Evaluation run of khoantap/wizard-limarp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [khoantap/wizard-limarp](https://huggingface.co/khoantap/wizard-limarp) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_khoantap__wizard-limarp\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-10-01T15:39:54.493965](https://huggingface.co/datasets/open-llm-leaderboard/details_khoantap__wizard-limarp/blob/main/results_2023-10-01T15-39-54.493965.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5508016266489828,\n\
\ \"acc_stderr\": 0.03448632181800869,\n \"acc_norm\": 0.5547870513407215,\n\
\ \"acc_norm_stderr\": 0.03446689219489,\n \"mc1\": 0.33414932680538556,\n\
\ \"mc1_stderr\": 0.016512530677150538,\n \"mc2\": 0.482777527677442,\n\
\ \"mc2_stderr\": 0.015184988472523642\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5452218430034129,\n \"acc_stderr\": 0.014551507060836357,\n\
\ \"acc_norm\": 0.5861774744027304,\n \"acc_norm_stderr\": 0.014392730009221005\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6244771957777335,\n\
\ \"acc_stderr\": 0.004832679188788789,\n \"acc_norm\": 0.8186616211909978,\n\
\ \"acc_norm_stderr\": 0.003845108476401298\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5037037037037037,\n\
\ \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.5037037037037037,\n\
\ \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5394736842105263,\n \"acc_stderr\": 0.04056242252249034,\n\
\ \"acc_norm\": 0.5394736842105263,\n \"acc_norm_stderr\": 0.04056242252249034\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n\
\ \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6075471698113207,\n \"acc_stderr\": 0.030052580579557845,\n\
\ \"acc_norm\": 0.6075471698113207,\n \"acc_norm_stderr\": 0.030052580579557845\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5625,\n\
\ \"acc_stderr\": 0.04148415739394154,\n \"acc_norm\": 0.5625,\n \
\ \"acc_norm_stderr\": 0.04148415739394154\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\"\
: 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5086705202312138,\n\
\ \"acc_stderr\": 0.03811890988940412,\n \"acc_norm\": 0.5086705202312138,\n\
\ \"acc_norm_stderr\": 0.03811890988940412\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.04220773659171451,\n\
\ \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.04220773659171451\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n\
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4340425531914894,\n \"acc_stderr\": 0.032400380867927465,\n\
\ \"acc_norm\": 0.4340425531914894,\n \"acc_norm_stderr\": 0.032400380867927465\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2807017543859649,\n\
\ \"acc_stderr\": 0.042270544512322004,\n \"acc_norm\": 0.2807017543859649,\n\
\ \"acc_norm_stderr\": 0.042270544512322004\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n\
\ \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3544973544973545,\n \"acc_stderr\": 0.024636830602842,\n \"acc_norm\"\
: 0.3544973544973545,\n \"acc_norm_stderr\": 0.024636830602842\n },\n\
\ \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n\
\ \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n\
\ \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6387096774193548,\n\
\ \"acc_stderr\": 0.027327548447957532,\n \"acc_norm\": 0.6387096774193548,\n\
\ \"acc_norm_stderr\": 0.027327548447957532\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785741,\n\
\ \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785741\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\"\
: 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.03663974994391244,\n\
\ \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.03663974994391244\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.702020202020202,\n \"acc_stderr\": 0.03258630383836556,\n \"acc_norm\"\
: 0.702020202020202,\n \"acc_norm_stderr\": 0.03258630383836556\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.7927461139896373,\n \"acc_stderr\": 0.029252823291803638,\n\
\ \"acc_norm\": 0.7927461139896373,\n \"acc_norm_stderr\": 0.029252823291803638\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5205128205128206,\n \"acc_stderr\": 0.02532966316348994,\n \
\ \"acc_norm\": 0.5205128205128206,\n \"acc_norm_stderr\": 0.02532966316348994\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3037037037037037,\n \"acc_stderr\": 0.02803792996911499,\n \
\ \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.02803792996911499\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5840336134453782,\n \"acc_stderr\": 0.032016501007396114,\n\
\ \"acc_norm\": 0.5840336134453782,\n \"acc_norm_stderr\": 0.032016501007396114\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.728440366972477,\n \"acc_stderr\": 0.01906909836319143,\n \"acc_norm\"\
: 0.728440366972477,\n \"acc_norm_stderr\": 0.01906909836319143\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.3888888888888889,\n\
\ \"acc_stderr\": 0.03324708911809117,\n \"acc_norm\": 0.3888888888888889,\n\
\ \"acc_norm_stderr\": 0.03324708911809117\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03039153369274154,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03039153369274154\n \
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\"\
: 0.7257383966244726,\n \"acc_stderr\": 0.029041333510598028,\n \"\
acc_norm\": 0.7257383966244726,\n \"acc_norm_stderr\": 0.029041333510598028\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6502242152466368,\n\
\ \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.6502242152466368,\n\
\ \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6183206106870229,\n \"acc_stderr\": 0.042607351576445594,\n\
\ \"acc_norm\": 0.6183206106870229,\n \"acc_norm_stderr\": 0.042607351576445594\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.04236511258094633,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.04236511258094633\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6380368098159509,\n \"acc_stderr\": 0.037757007291414416,\n\
\ \"acc_norm\": 0.6380368098159509,\n \"acc_norm_stderr\": 0.037757007291414416\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.33035714285714285,\n\
\ \"acc_stderr\": 0.04464285714285712,\n \"acc_norm\": 0.33035714285714285,\n\
\ \"acc_norm_stderr\": 0.04464285714285712\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6699029126213593,\n \"acc_stderr\": 0.04656147110012351,\n\
\ \"acc_norm\": 0.6699029126213593,\n \"acc_norm_stderr\": 0.04656147110012351\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.811965811965812,\n\
\ \"acc_stderr\": 0.02559819368665224,\n \"acc_norm\": 0.811965811965812,\n\
\ \"acc_norm_stderr\": 0.02559819368665224\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7279693486590039,\n\
\ \"acc_stderr\": 0.015913367447500503,\n \"acc_norm\": 0.7279693486590039,\n\
\ \"acc_norm_stderr\": 0.015913367447500503\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5924855491329479,\n \"acc_stderr\": 0.026454578146931505,\n\
\ \"acc_norm\": 0.5924855491329479,\n \"acc_norm_stderr\": 0.026454578146931505\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30726256983240224,\n\
\ \"acc_stderr\": 0.015430158846469609,\n \"acc_norm\": 0.30726256983240224,\n\
\ \"acc_norm_stderr\": 0.015430158846469609\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6013071895424836,\n \"acc_stderr\": 0.028036092273891776,\n\
\ \"acc_norm\": 0.6013071895424836,\n \"acc_norm_stderr\": 0.028036092273891776\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6045016077170418,\n\
\ \"acc_stderr\": 0.02777091853142784,\n \"acc_norm\": 0.6045016077170418,\n\
\ \"acc_norm_stderr\": 0.02777091853142784\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5771604938271605,\n \"acc_stderr\": 0.027487472980871595,\n\
\ \"acc_norm\": 0.5771604938271605,\n \"acc_norm_stderr\": 0.027487472980871595\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.41843971631205673,\n \"acc_stderr\": 0.02942799403941999,\n \
\ \"acc_norm\": 0.41843971631205673,\n \"acc_norm_stderr\": 0.02942799403941999\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41460234680573665,\n\
\ \"acc_stderr\": 0.012582597058908284,\n \"acc_norm\": 0.41460234680573665,\n\
\ \"acc_norm_stderr\": 0.012582597058908284\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5551470588235294,\n \"acc_stderr\": 0.030187532060329387,\n\
\ \"acc_norm\": 0.5551470588235294,\n \"acc_norm_stderr\": 0.030187532060329387\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5359477124183006,\n \"acc_stderr\": 0.02017548876548404,\n \
\ \"acc_norm\": 0.5359477124183006,\n \"acc_norm_stderr\": 0.02017548876548404\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6693877551020408,\n \"acc_stderr\": 0.030116426296540603,\n\
\ \"acc_norm\": 0.6693877551020408,\n \"acc_norm_stderr\": 0.030116426296540603\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7014925373134329,\n\
\ \"acc_stderr\": 0.03235743789355042,\n \"acc_norm\": 0.7014925373134329,\n\
\ \"acc_norm_stderr\": 0.03235743789355042\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.46987951807228917,\n\
\ \"acc_stderr\": 0.03885425420866766,\n \"acc_norm\": 0.46987951807228917,\n\
\ \"acc_norm_stderr\": 0.03885425420866766\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7543859649122807,\n \"acc_stderr\": 0.0330140594698725,\n\
\ \"acc_norm\": 0.7543859649122807,\n \"acc_norm_stderr\": 0.0330140594698725\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.33414932680538556,\n\
\ \"mc1_stderr\": 0.016512530677150538,\n \"mc2\": 0.482777527677442,\n\
\ \"mc2_stderr\": 0.015184988472523642\n }\n}\n```"
repo_url: https://huggingface.co/khoantap/wizard-limarp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|arc:challenge|25_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hellaswag|10_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T15-39-54.493965.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T15-39-54.493965.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-01T15-39-54.493965.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-01T15-39-54.493965.parquet'
- config_name: results
data_files:
- split: 2023_10_01T15_39_54.493965
path:
- results_2023-10-01T15-39-54.493965.parquet
- split: latest
path:
- results_2023-10-01T15-39-54.493965.parquet
---
# Dataset Card for Evaluation run of khoantap/wizard-limarp
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/khoantap/wizard-limarp
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [khoantap/wizard-limarp](https://huggingface.co/khoantap/wizard-limarp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_khoantap__wizard-limarp",
	"harness_truthfulqa_mc_0",
	split="latest")
```
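Because each run is stored under a timestamped split and "latest" simply tracks the most recent one, the split names sort chronologically. As an illustrative sketch (not part of the dataset tooling, the helper name is made up), selecting the newest run from a list of split names is just a lexicographic `max()`:

```python
# Illustrative sketch: how "latest" relates to the timestamped splits.
# Split names encode the run time (e.g. 2023_10_01T15_39_54.493965),
# so lexicographic order matches chronological order.
def latest_split(split_names):
    # Ignore the "latest" alias itself and pick the newest timestamp.
    timestamped = [s for s in split_names if s != "latest"]
    return max(timestamped)

splits = ["2023_09_20T10_00_00.000000", "2023_10_01T15_39_54.493965", "latest"]
print(latest_split(splits))  # -> 2023_10_01T15_39_54.493965
```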
## Latest results
These are the [latest results from run 2023-10-01T15:39:54.493965](https://huggingface.co/datasets/open-llm-leaderboard/details_khoantap__wizard-limarp/blob/main/results_2023-10-01T15-39-54.493965.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you will find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5508016266489828,
"acc_stderr": 0.03448632181800869,
"acc_norm": 0.5547870513407215,
"acc_norm_stderr": 0.03446689219489,
"mc1": 0.33414932680538556,
"mc1_stderr": 0.016512530677150538,
"mc2": 0.482777527677442,
"mc2_stderr": 0.015184988472523642
},
"harness|arc:challenge|25": {
"acc": 0.5452218430034129,
"acc_stderr": 0.014551507060836357,
"acc_norm": 0.5861774744027304,
"acc_norm_stderr": 0.014392730009221005
},
"harness|hellaswag|10": {
"acc": 0.6244771957777335,
"acc_stderr": 0.004832679188788789,
"acc_norm": 0.8186616211909978,
"acc_norm_stderr": 0.003845108476401298
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5037037037037037,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.5037037037037037,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5394736842105263,
"acc_stderr": 0.04056242252249034,
"acc_norm": 0.5394736842105263,
"acc_norm_stderr": 0.04056242252249034
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6075471698113207,
"acc_stderr": 0.030052580579557845,
"acc_norm": 0.6075471698113207,
"acc_norm_stderr": 0.030052580579557845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5625,
"acc_stderr": 0.04148415739394154,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04148415739394154
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5086705202312138,
"acc_stderr": 0.03811890988940412,
"acc_norm": 0.5086705202312138,
"acc_norm_stderr": 0.03811890988940412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.04220773659171451,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.04220773659171451
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4340425531914894,
"acc_stderr": 0.032400380867927465,
"acc_norm": 0.4340425531914894,
"acc_norm_stderr": 0.032400380867927465
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2807017543859649,
"acc_stderr": 0.042270544512322004,
"acc_norm": 0.2807017543859649,
"acc_norm_stderr": 0.042270544512322004
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4689655172413793,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.4689655172413793,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3544973544973545,
"acc_stderr": 0.024636830602842,
"acc_norm": 0.3544973544973545,
"acc_norm_stderr": 0.024636830602842
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377562,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377562
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6387096774193548,
"acc_stderr": 0.027327548447957532,
"acc_norm": 0.6387096774193548,
"acc_norm_stderr": 0.027327548447957532
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.03663974994391244,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.03663974994391244
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.702020202020202,
"acc_stderr": 0.03258630383836556,
"acc_norm": 0.702020202020202,
"acc_norm_stderr": 0.03258630383836556
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7927461139896373,
"acc_stderr": 0.029252823291803638,
"acc_norm": 0.7927461139896373,
"acc_norm_stderr": 0.029252823291803638
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5205128205128206,
"acc_stderr": 0.02532966316348994,
"acc_norm": 0.5205128205128206,
"acc_norm_stderr": 0.02532966316348994
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.02803792996911499,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.02803792996911499
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5840336134453782,
"acc_stderr": 0.032016501007396114,
"acc_norm": 0.5840336134453782,
"acc_norm_stderr": 0.032016501007396114
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.03822746937658753,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.03822746937658753
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.728440366972477,
"acc_stderr": 0.01906909836319143,
"acc_norm": 0.728440366972477,
"acc_norm_stderr": 0.01906909836319143
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.03324708911809117,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.03324708911809117
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.75,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7257383966244726,
"acc_stderr": 0.029041333510598028,
"acc_norm": 0.7257383966244726,
"acc_norm_stderr": 0.029041333510598028
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6502242152466368,
"acc_stderr": 0.03200736719484503,
"acc_norm": 0.6502242152466368,
"acc_norm_stderr": 0.03200736719484503
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6183206106870229,
"acc_stderr": 0.042607351576445594,
"acc_norm": 0.6183206106870229,
"acc_norm_stderr": 0.042607351576445594
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094633,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094633
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6380368098159509,
"acc_stderr": 0.037757007291414416,
"acc_norm": 0.6380368098159509,
"acc_norm_stderr": 0.037757007291414416
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.33035714285714285,
"acc_stderr": 0.04464285714285712,
"acc_norm": 0.33035714285714285,
"acc_norm_stderr": 0.04464285714285712
},
"harness|hendrycksTest-management|5": {
"acc": 0.6699029126213593,
"acc_stderr": 0.04656147110012351,
"acc_norm": 0.6699029126213593,
"acc_norm_stderr": 0.04656147110012351
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.811965811965812,
"acc_stderr": 0.02559819368665224,
"acc_norm": 0.811965811965812,
"acc_norm_stderr": 0.02559819368665224
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7279693486590039,
"acc_stderr": 0.015913367447500503,
"acc_norm": 0.7279693486590039,
"acc_norm_stderr": 0.015913367447500503
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5924855491329479,
"acc_stderr": 0.026454578146931505,
"acc_norm": 0.5924855491329479,
"acc_norm_stderr": 0.026454578146931505
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.30726256983240224,
"acc_stderr": 0.015430158846469609,
"acc_norm": 0.30726256983240224,
"acc_norm_stderr": 0.015430158846469609
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6013071895424836,
"acc_stderr": 0.028036092273891776,
"acc_norm": 0.6013071895424836,
"acc_norm_stderr": 0.028036092273891776
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6045016077170418,
"acc_stderr": 0.02777091853142784,
"acc_norm": 0.6045016077170418,
"acc_norm_stderr": 0.02777091853142784
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5771604938271605,
"acc_stderr": 0.027487472980871595,
"acc_norm": 0.5771604938271605,
"acc_norm_stderr": 0.027487472980871595
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.41843971631205673,
"acc_stderr": 0.02942799403941999,
"acc_norm": 0.41843971631205673,
"acc_norm_stderr": 0.02942799403941999
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.41460234680573665,
"acc_stderr": 0.012582597058908284,
"acc_norm": 0.41460234680573665,
"acc_norm_stderr": 0.012582597058908284
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5551470588235294,
"acc_stderr": 0.030187532060329387,
"acc_norm": 0.5551470588235294,
"acc_norm_stderr": 0.030187532060329387
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5359477124183006,
"acc_stderr": 0.02017548876548404,
"acc_norm": 0.5359477124183006,
"acc_norm_stderr": 0.02017548876548404
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6693877551020408,
"acc_stderr": 0.030116426296540603,
"acc_norm": 0.6693877551020408,
"acc_norm_stderr": 0.030116426296540603
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7014925373134329,
"acc_stderr": 0.03235743789355042,
"acc_norm": 0.7014925373134329,
"acc_norm_stderr": 0.03235743789355042
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.46987951807228917,
"acc_stderr": 0.03885425420866766,
"acc_norm": 0.46987951807228917,
"acc_norm_stderr": 0.03885425420866766
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7543859649122807,
"acc_stderr": 0.0330140594698725,
"acc_norm": 0.7543859649122807,
"acc_norm_stderr": 0.0330140594698725
},
"harness|truthfulqa:mc|0": {
"mc1": 0.33414932680538556,
"mc1_stderr": 0.016512530677150538,
"mc2": 0.482777527677442,
"mc2_stderr": 0.015184988472523642
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
tilyupo/quac_cqa | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 176933317
num_examples: 69109
- name: validation
num_bytes: 16817625
num_examples: 5868
download_size: 26638055
dataset_size: 193750942
---
# Dataset Card for "quac_cqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
akazad/github-commits | ---
dataset_info:
features:
- name: hash
dtype: string
- name: msg
dtype: string
- name: author
dtype: string
- name: email
dtype: string
- name: date
dtype: int64
splits:
- name: train
num_bytes: 42673325
num_examples: 155401
download_size: 24367297
dataset_size: 42673325
---
# Dataset Card for "github-commits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-billsum-default-e23aac-2376574533 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: google/bigbird-pegasus-large-bigpatent
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-bigpatent
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sam-mosaic](https://huggingface.co/sam-mosaic) for evaluating this model. |
mmdjiji/bert-chinese-idioms | ---
license: gpl-3.0
---
For details, see [github:mmdjiji/bert-chinese-idioms](https://github.com/mmdjiji/bert-chinese-idioms).
[preprocess.js](preprocess.js) is a Node.js script that generates the data for training the language model. |
habedi/stack-exchange-dataset | ---
license: cc
task_categories:
- text-classification
- question-answering
language:
- en
size_categories:
- 10K<n<100K
pretty_name: Stack Exchange -- Question Dataset
---
This dataset consists of three CSV files, namely 'cs.csv', 'ds.csv', and 'p.csv'.
Each CSV file contains the questions asked on a Stack Exchange (SE) question-answering community, from the community's creation until May 2021.
- 'cs.csv' --> [Computer Science SE](https://cs.stackexchange.com/)
- 'ds.csv' --> [Data Science SE](https://datascience.stackexchange.com/)
- 'p.csv' --> [Political Science SE](https://politics.stackexchange.com/)
Each CSV file has the following columns:
- `id`: the question id
- `title`: the title of the question
- `body`: the body or text of the question
- `tags`: the list of tags assigned to the question
- `label`: a label indicating whether the question is resolved or not (0: not resolved; 1: resolved)
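As a quick sketch, the documented schema can be inspected and the share of resolved questions computed with pandas. The rows below are illustrative placeholders; the real data lives in the CSV files of this repository:

```python
import pandas as pd

# Illustrative rows mirroring the documented schema; replace with
# pd.read_csv('cs.csv') (or 'ds.csv' / 'p.csv') after downloading the files.
rows = [
    {"id": 1, "title": "Why is P vs NP hard?", "body": "...", "tags": "['complexity']", "label": 1},
    {"id": 2, "title": "Tuning a random forest", "body": "...", "tags": "['ml']", "label": 0},
    {"id": 3, "title": "How do coalitions form?", "body": "...", "tags": "['voting']", "label": 1},
]
df = pd.DataFrame(rows)

# `label` is 1 for resolved questions, so its mean is the resolved share.
resolved_ratio = df["label"].mean()
print(f"{len(df)} questions, {resolved_ratio:.0%} resolved")
```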
This dataset has been used in the following studies:
- [A deep learning-based approach for identifying unresolved questions on Stack Exchange Q&A communities through graph-based communication modelling](https://doi.org/10.1007/s41060-023-00454-0)
- [Survival analysis for user disengagement prediction: question-and-answering communities’ case](https://doi.org/10.1007/s13278-022-00914-8) |
huggingartists/the-sugarcubes | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/the-sugarcubes"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.077715 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/da10eeb7730741736a4f7ac4cc998c4e.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/the-sugarcubes">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Sugarcubes</div>
<a href="https://genius.com/artists/the-sugarcubes">
<div style="text-align: center; font-size: 14px;">@the-sugarcubes</div>
</a>
</div>
### Dataset Summary
A lyrics dataset parsed from Genius, designed for generating lyrics with HuggingArtists.
The model is available [here](https://huggingface.co/huggingartists/the-sugarcubes).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/the-sugarcubes")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|    52 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/the-sugarcubes")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
Cohere/beir-embed-english-v3 | ---
configs:
- config_name: arguana-corpus
data_files:
- split: train
path: arguana/corpus/*
- config_name: arguana-queries
data_files:
- split: test
path: arguana/queries/test.parquet
- config_name: arguana-qrels
data_files:
- split: test
path: arguana/qrels/test.parquet
- config_name: bioasq-corpus
data_files:
- split: train
path: bioasq/corpus/*
- config_name: bioasq-queries
data_files:
- split: train
path: bioasq/queries/train.parquet
- split: test
path: bioasq/queries/test.parquet
- config_name: bioasq-qrels
data_files:
- split: train
path: bioasq/qrels/train.parquet
- split: test
path: bioasq/qrels/test.parquet
- config_name: climate-fever-corpus
data_files:
- split: train
path: climate-fever/corpus/*
- config_name: climate-fever-queries
data_files:
- split: test
path: climate-fever/queries/test.parquet
- config_name: climate-fever-qrels
data_files:
- split: test
path: climate-fever/qrels/test.parquet
- config_name: cqadupstack-android-corpus
data_files:
- split: train
path: cqadupstack-android/corpus/*
- config_name: cqadupstack-android-queries
data_files:
- split: test
path: cqadupstack-android/queries/test.parquet
- config_name: cqadupstack-android-qrels
data_files:
- split: test
path: cqadupstack-android/qrels/test.parquet
- config_name: cqadupstack-english-corpus
data_files:
- split: train
path: cqadupstack-english/corpus/*
- config_name: cqadupstack-english-queries
data_files:
- split: test
path: cqadupstack-english/queries/test.parquet
- config_name: cqadupstack-english-qrels
data_files:
- split: test
path: cqadupstack-english/qrels/test.parquet
- config_name: cqadupstack-gaming-corpus
data_files:
- split: train
path: cqadupstack-gaming/corpus/*
- config_name: cqadupstack-gaming-queries
data_files:
- split: test
path: cqadupstack-gaming/queries/test.parquet
- config_name: cqadupstack-gaming-qrels
data_files:
- split: test
path: cqadupstack-gaming/qrels/test.parquet
- config_name: cqadupstack-gis-corpus
data_files:
- split: train
path: cqadupstack-gis/corpus/*
- config_name: cqadupstack-gis-queries
data_files:
- split: test
path: cqadupstack-gis/queries/test.parquet
- config_name: cqadupstack-gis-qrels
data_files:
- split: test
path: cqadupstack-gis/qrels/test.parquet
- config_name: cqadupstack-mathematica-corpus
data_files:
- split: train
path: cqadupstack-mathematica/corpus/*
- config_name: cqadupstack-mathematica-queries
data_files:
- split: test
path: cqadupstack-mathematica/queries/test.parquet
- config_name: cqadupstack-mathematica-qrels
data_files:
- split: test
path: cqadupstack-mathematica/qrels/test.parquet
- config_name: cqadupstack-physics-corpus
data_files:
- split: train
path: cqadupstack-physics/corpus/*
- config_name: cqadupstack-physics-queries
data_files:
- split: test
path: cqadupstack-physics/queries/test.parquet
- config_name: cqadupstack-physics-qrels
data_files:
- split: test
path: cqadupstack-physics/qrels/test.parquet
- config_name: cqadupstack-programmers-corpus
data_files:
- split: train
path: cqadupstack-programmers/corpus/*
- config_name: cqadupstack-programmers-queries
data_files:
- split: test
path: cqadupstack-programmers/queries/test.parquet
- config_name: cqadupstack-programmers-qrels
data_files:
- split: test
path: cqadupstack-programmers/qrels/test.parquet
- config_name: cqadupstack-stats-corpus
data_files:
- split: train
path: cqadupstack-stats/corpus/*
- config_name: cqadupstack-stats-queries
data_files:
- split: test
path: cqadupstack-stats/queries/test.parquet
- config_name: cqadupstack-stats-qrels
data_files:
- split: test
path: cqadupstack-stats/qrels/test.parquet
- config_name: cqadupstack-text-corpus
data_files:
- split: train
path: cqadupstack-text/corpus/*
- config_name: cqadupstack-text-queries
data_files:
- split: test
path: cqadupstack-text/queries/test.parquet
- config_name: cqadupstack-text-qrels
data_files:
- split: test
path: cqadupstack-text/qrels/test.parquet
- config_name: cqadupstack-unix-corpus
data_files:
- split: train
path: cqadupstack-unix/corpus/*
- config_name: cqadupstack-unix-queries
data_files:
- split: test
path: cqadupstack-unix/queries/test.parquet
- config_name: cqadupstack-unix-qrels
data_files:
- split: test
path: cqadupstack-unix/qrels/test.parquet
- config_name: cqadupstack-webmasters-corpus
data_files:
- split: train
path: cqadupstack-webmasters/corpus/*
- config_name: cqadupstack-webmasters-queries
data_files:
- split: test
path: cqadupstack-webmasters/queries/test.parquet
- config_name: cqadupstack-webmasters-qrels
data_files:
- split: test
path: cqadupstack-webmasters/qrels/test.parquet
- config_name: cqadupstack-wordpress-corpus
data_files:
- split: train
path: cqadupstack-wordpress/corpus/*
- config_name: cqadupstack-wordpress-queries
data_files:
- split: test
path: cqadupstack-wordpress/queries/test.parquet
- config_name: cqadupstack-wordpress-qrels
data_files:
- split: test
path: cqadupstack-wordpress/qrels/test.parquet
- config_name: fever-corpus
data_files:
- split: train
path: fever/corpus/*
- config_name: fever-queries
data_files:
- split: train
path: fever/queries/train.parquet
- split: dev
path: fever/queries/dev.parquet
- split: test
path: fever/queries/test.parquet
- config_name: fever-qrels
data_files:
- split: train
path: fever/qrels/train.parquet
- split: dev
path: fever/qrels/dev.parquet
- split: test
path: fever/qrels/test.parquet
- config_name: fiqa-corpus
data_files:
- split: train
path: fiqa/corpus/*
- config_name: fiqa-queries
data_files:
- split: train
path: fiqa/queries/train.parquet
- split: dev
path: fiqa/queries/dev.parquet
- split: all
path: fiqa/queries/all.parquet
- split: test
path: fiqa/queries/test.parquet
- config_name: fiqa-qrels
data_files:
- split: train
path: fiqa/qrels/train.parquet
- split: dev
path: fiqa/qrels/dev.parquet
- split: all
path: fiqa/qrels/all.parquet
- split: test
path: fiqa/qrels/test.parquet
- config_name: hotpotqa-corpus
data_files:
- split: train
path: hotpotqa/corpus/*
- config_name: hotpotqa-queries
data_files:
- split: train
path: hotpotqa/queries/train.parquet
- split: dev
path: hotpotqa/queries/dev.parquet
- split: test
path: hotpotqa/queries/test.parquet
- config_name: hotpotqa-qrels
data_files:
- split: train
path: hotpotqa/qrels/train.parquet
- split: dev
path: hotpotqa/qrels/dev.parquet
- split: test
path: hotpotqa/qrels/test.parquet
- config_name: msmarco-corpus
data_files:
- split: train
path: msmarco/corpus/*
- config_name: msmarco-queries
data_files:
- split: train
path: msmarco/queries/train.parquet
- split: dev
path: msmarco/queries/dev.parquet
- config_name: msmarco-qrels
data_files:
- split: train
path: msmarco/qrels/train.parquet
- split: dev
path: msmarco/qrels/dev.parquet
- config_name: nfcorpus-corpus
data_files:
- split: train
path: nfcorpus/corpus/*
- config_name: nfcorpus-queries
data_files:
- split: train
path: nfcorpus/queries/train.parquet
- split: dev
path: nfcorpus/queries/dev.parquet
- split: test
path: nfcorpus/queries/test.parquet
- config_name: nfcorpus-qrels
data_files:
- split: train
path: nfcorpus/qrels/train.parquet
- split: dev
path: nfcorpus/qrels/dev.parquet
- split: test
path: nfcorpus/qrels/test.parquet
- config_name: nq-corpus
data_files:
- split: train
path: nq/corpus/*
- config_name: nq-queries
data_files:
- split: test
path: nq/queries/test.parquet
- config_name: nq-qrels
data_files:
- split: test
path: nq/qrels/test.parquet
- config_name: quora-corpus
data_files:
- split: train
path: quora/corpus/*
- config_name: quora-queries
data_files:
- split: dev
path: quora/queries/dev.parquet
- split: test
path: quora/queries/test.parquet
- config_name: quora-qrels
data_files:
- split: dev
path: quora/qrels/dev.parquet
- split: test
path: quora/qrels/test.parquet
- config_name: robust04-corpus
data_files:
- split: train
path: robust04/corpus/*
- config_name: robust04-queries
data_files:
- split: test
path: robust04/queries/test.parquet
- config_name: robust04-qrels
data_files:
- split: test
path: robust04/qrels/test.parquet
- config_name: scidocs-corpus
data_files:
- split: train
path: scidocs/corpus/*
- config_name: scidocs-queries
data_files:
- split: test
path: scidocs/queries/test.parquet
- config_name: scidocs-qrels
data_files:
- split: test
path: scidocs/qrels/test.parquet
- config_name: scifact-corpus
data_files:
- split: train
path: scifact/corpus/*
- config_name: scifact-queries
data_files:
- split: train
path: scifact/queries/train.parquet
- split: test
path: scifact/queries/test.parquet
- config_name: scifact-qrels
data_files:
- split: train
path: scifact/qrels/train.parquet
- split: test
path: scifact/qrels/test.parquet
- config_name: signal1m-corpus
data_files:
- split: train
path: signal1m/corpus/*
- config_name: signal1m-queries
data_files:
- split: test
path: signal1m/queries/test.parquet
- config_name: signal1m-qrels
data_files:
- split: test
path: signal1m/qrels/test.parquet
- config_name: trec-covid-corpus
data_files:
- split: train
path: trec-covid/corpus/*
- config_name: trec-covid-queries
data_files:
- split: test
path: trec-covid/queries/test.parquet
- config_name: trec-covid-qrels
data_files:
- split: test
path: trec-covid/qrels/test.parquet
- config_name: trec-news-corpus
data_files:
- split: train
path: trec-news/corpus/*
- config_name: trec-news-queries
data_files:
- split: test
path: trec-news/queries/test.parquet
- config_name: trec-news-qrels
data_files:
- split: test
path: trec-news/qrels/test.parquet
- config_name: webis-touche2020-corpus
data_files:
- split: train
path: webis-touche2020/corpus/*
- config_name: webis-touche2020-queries
data_files:
- split: test
path: webis-touche2020/queries/test.parquet
- config_name: webis-touche2020-qrels
data_files:
- split: test
path: webis-touche2020/qrels/test.parquet
---
# BEIR embeddings with Cohere embed-english-v3.0 model
This dataset contains all query and document embeddings for [BEIR](https://github.com/beir-cellar/beir), embedded with the [Cohere embed-english-v3.0](https://huggingface.co/Cohere/Cohere-embed-english-v3.0) embedding model.
## Overview of datasets
This repository hosts all 18 datasets from BEIR, including query and document embeddings. The following table gives an overview of the available datasets;
see the next section for how to load the individual datasets.
| Dataset | nDCG@10 | #Documents |
| --- | --- | --- |
| arguana | 53.98 | 8,674 |
| bioasq | 45.66 | 14,914,603 |
| climate-fever | 25.90 | 5,416,593 |
| cqadupstack-android | 50.01 | 22,998 |
| cqadupstack-english | 49.09 | 40,221 |
| cqadupstack-gaming | 60.50 | 45,301 |
| cqadupstack-gis | 39.17 | 37,637 |
| cqadupstack-mathematica | 30.38 | 16,705 |
| cqadupstack-physics | 43.82 | 38,316 |
| cqadupstack-programmers | 43.67 | 32,176 |
| cqadupstack-stats | 35.23 | 42,269 |
| cqadupstack-text | 30.84 | 68,184 |
| cqadupstack-unix | 40.59 | 47,382 |
| cqadupstack-webmasters | 40.68 | 17,405 |
| cqadupstack-wordpress | 34.26 | 48,605 |
| fever | 89.00 | 5,416,568 |
| fiqa | 42.14 | 57,638 |
| hotpotqa | 70.72 | 5,233,329 |
| msmarco | 42.86 | 8,841,823 |
| nfcorpus | 38.63 | 3,633 |
| nq | 61.62 | 2,681,468 |
| quora | 88.72 | 522,931 |
| robust04 | 54.06 | 528,155 |
| scidocs | 20.34 | 25,657 |
| scifact | 71.81 | 5,183 |
| signal1m | 26.32 | 2,866,316 |
| trec-covid | 81.78 | 171,332 |
| trec-news | 50.42 | 594,977 |
| webis-touche2020 | 32.64 | 382,545 |
Notes:
- arguana: The task of arguana is to find, for a given argument (e.g. `Being vegetarian helps the environment ...`), an argument that refutes it (e.g. `Vegetarian doesn't have an impact on the environment`). Naturally, embedding models work by finding the most similar texts, so for the given argument the model would first find similar arguments that support the claim that being vegetarian helps the environment, and those would be treated as non-relevant. With embedding model prompting, the model can be steered to find arguments that refute the query instead. This improves the nDCG@10 score from 53.98 to 61.5.
- climate-fever: The task is to find evidence that supports or refutes a claim. As with arguana, in the default mode the model will primarily find evidence supporting the claim. With embedding model prompting, we can tell the model to find both supporting and refuting evidence for a claim. This improves the nDCG@10 score to 38.4.
- Quora: As the corpus consists of questions, they have been encoded with the `input_type='search_query'` in order to find similar/duplicate questions.
- cqadupstack: This dataset consists of several sub-datasets, whose nDCG@10 scores are averaged in BEIR.
- bioasq/robust04/trec-news/signal1m: For these datasets we provide only the IDs and the embeddings, not the title/text fields. See the [BEIR repository](https://github.com/beir-cellar/beir) for how to obtain the respective text corpora. You can still evaluate search quality on these datasets.
## Loading the dataset
### Loading the document embeddings
The `corpus` split contains all document embeddings of the corpus.
You can either load the dataset like this:
```python
from datasets import load_dataset
dataset_name = "hotpotqa"
docs = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
dataset_name = "hotpotqa"
docs = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train", streaming=True)
for doc in docs:
doc_id = doc['_id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
Note that, depending on the dataset, the corpus split can be quite large.
### Loading the query embeddings
The `queries` split contains all query embeddings. There might be up to three splits: `train`, `dev`, and `test`, depending on which splits are available in BEIR. Evaluation is performed on the `test` split.
You can load the dataset like this:
```python
from datasets import load_dataset
dataset_name = "hotpotqa"
queries = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-queries", split="test")
for query in queries:
query_id = query['_id']
text = query['text']
emb = query['emb']
```
### Loading the qrels
The `qrels` split contains the query relevance annotation, i.e., it contains the relevance score for (query, document) pairs.
You can load the dataset like this:
```python
from datasets import load_dataset
dataset_name = "hotpotqa"
qrels = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-qrels", split="test")
for qrel in qrels:
query_id = qrel['query_id']
corpus_id = qrel['corpus_id']
score = qrel['score']
```
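As a minimal sketch (with illustrative rows), these qrels rows can be folded into the nested `{query_id: {corpus_id: score}}` dict that BEIR's evaluator expects, keeping only positive judgments:

```python
# Illustrative qrels rows; the real ones come from the "<dataset>-qrels" configs.
rows = [
    {"query_id": "q1", "corpus_id": "d1", "score": 1},
    {"query_id": "q1", "corpus_id": "d2", "score": 0},
    {"query_id": "q2", "corpus_id": "d3", "score": 2},
]

qrels = {}
for row in rows:
    if row["score"] > 0:  # keep only positive relevance judgments
        qrels.setdefault(row["query_id"], {})[row["corpus_id"]] = row["score"]

print(qrels)  # {'q1': {'d1': 1}, 'q2': {'d3': 2}}
```

This is the same shape the evaluation script further below builds before calling `EvaluateRetrieval.evaluate`.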
## Search
The following example shows how the dataset can be used to build a semantic search application.
Get your API key from [cohere.com](https://cohere.com) and start using this dataset.
```python
#Run: pip install cohere datasets torch
from datasets import load_dataset
import torch
import cohere
dataset_name = "hotpotqa"
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'What is an abstract' #Your query
response = co.embed(texts=[query], model='embed-english-v3.0', input_type='search_query')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Running evaluations
This dataset allows you to reproduce the [BEIR](https://github.com/beir-cellar/beir) performance results and to compute nDCG@10, Recall@10, and Accuracy@3.
You must have `beir`, `faiss`, `numpy`, and `datasets` installed. The following script loads all files, runs search, and computes the search quality metrics.
```python
import numpy as np
import faiss
from beir.retrieval.evaluation import EvaluateRetrieval
import time
from datasets import load_dataset

def faiss_search(index, queries_emb, k=[10, 100]):
    start_time = time.time()
    faiss_scores, faiss_doc_ids = index.search(queries_emb, max(k))
    print(f"Search took {(time.time()-start_time):.2f} sec")

    query2id = {idx: qid for idx, qid in enumerate(query_ids)}
    doc2id = {idx: cid for idx, cid in enumerate(docs_ids)}

    faiss_results = {}
    for idx in range(0, len(faiss_scores)):
        qid = query2id[idx]
        doc_scores = {doc2id[doc_id]: score.item() for doc_id, score in zip(faiss_doc_ids[idx], faiss_scores[idx])}
        faiss_results[qid] = doc_scores

    ndcg, map_score, recall, precision = EvaluateRetrieval.evaluate(qrels, faiss_results, k)
    acc = EvaluateRetrieval.evaluate_custom(qrels, faiss_results, [3, 5, 10], metric="acc")
    print(ndcg)
    print(recall)
    print(acc)

dataset_name = "<<DATASET_NAME>>"
dataset_split = "test"
num_dim = 1024

# Load qrels
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-qrels", split=dataset_split)
qrels = {}
for row in df:
    qid = row['query_id']
    cid = row['corpus_id']
    if row['score'] > 0:
        if qid not in qrels:
            qrels[qid] = {}
        qrels[qid][cid] = row['score']

# Load queries
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-queries", split=dataset_split)
query_ids = df['_id']
query_embs = np.asarray(df['emb'])
print("Query embeddings:", query_embs.shape)

# Load corpus
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train")
docs_ids = df['_id']

# Build index
print("Build index. This might take some time")
index = faiss.IndexFlatIP(num_dim)
index.add(np.asarray(df.to_pandas()['emb'].tolist()))

# Run and evaluate search
print("Search on index")
faiss_search(index, query_embs)
```
## Notes
- This dataset was created with `datasets==2.15.0`. Make sure to use this or a newer version of the datasets library.
|
jeanevesss/esteicy-teste.v01 | ---
license: cc
---
|
Falah/anime_arabic_style_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 623328012
num_examples: 1000000
download_size: 112334528
dataset_size: 623328012
---
# Dataset Card for "anime_arabic_style_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LRAI/task-normalization-chip2020 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: entities
sequence: string
splits:
- name: train
num_bytes: 623418
num_examples: 8000
- name: test
num_bytes: 412454
num_examples: 10000
download_size: 601155
dataset_size: 1035872
---
# Dataset Card for "task-normalization-chip2020"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sarahpann/gsmk_sbs | ---
dataset_info:
features:
- name: question
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 4170271
num_examples: 7100
- name: validation
num_bytes: 223453
num_examples: 373
download_size: 2415621
dataset_size: 4393724
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
H-Liu1997/BEAT2 | ---
license: apache-2.0
---
|
linhqyy/result_with_w2v2_spkn_ft_2e | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
- name: w2v2_baseline_transcription
dtype: string
- name: w2v2_baseline_norm
dtype: string
splits:
- name: train
num_bytes: 174371742.027
num_examples: 1299
download_size: 164200565
dataset_size: 174371742.027
---
# Dataset Card for "result_with_w2v2_spkn_ft_2e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PeterBrendan/AdImageNet | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: dimensions
dtype: string
splits:
- name: train
num_bytes: 684595217.53
num_examples: 9003
download_size: 682372973
dataset_size: 684595217.53
license: mit
language:
- en
pretty_name: AdImageNet - Programmatic Ad Creatives
---
# Dataset Summary
The AdImageNet dataset contains 9,003 samples of online programmatic ad creatives along with their ad sizes and extracted creative text. Just as ImageNet revolutionized computer vision, AdImageNet aims to serve as a transformative resource for the field of advertising creatives. The dataset includes various ad sizes, such as (300, 250), (728, 90), (970, 250), (300, 600), (160, 600), (970, 90), (336, 280), and (320, 50). This dataset was curated from a larger collection of programmatic creative images hosted by [Project300x250.com](https://www.project300x250.com). It is intended to support the development and evaluation of AI models for tasks related to ad creative generation and understanding.
# Supported Tasks
This dataset is suitable for a range of tasks, including text generation, language modeling, and text augmentation. Researchers and developers can use this dataset to train and fine-tune AI models for generating creative ad copy. Inspired by ImageNet, AdImageNet opens doors to exploring alternatives to proprietary advertising platforms like Google and Meta. By promoting open solutions in the advertising domain, this dataset supports the growth of independent advertising technologies.
# Languages
The dataset primarily consists of English language text.
# Dataset Structure
## Data Fields
The dataset contains the following fields:
- `file_name`: The name of the image file.
- `text`: The extracted text from the programmatic ad creative.
- `dimensions`: The dimensions (ad size) of the creative.
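As a small illustration of working with these fields, the `dimensions` string can be parsed into a `(width, height)` pair of integers (a hypothetical helper, not shipped with the dataset):

```python
import re

def parse_dimensions(dim_str):
    """Parse a dimensions string like '(300, 250)' into (width, height)."""
    width, height = map(int, re.findall(r"\d+", dim_str))
    return width, height

print(parse_dimensions("(728, 90)"))  # (728, 90)
```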
## Data Splits
The data is provided as a single whole dataset and is not split into separate subsets.
# Dataset Creation
## Curation Rationale
AdImageNet was meticulously curated to provide a valuable resource for researchers and developers in the field of advertising creatives. Drawing inspiration from ImageNet's impact on computer vision, AdImageNet aims to revolutionize the advertising domain by offering a diverse collection of advertising creatives. The dataset encourages the development of open-source alternatives to dominant advertising platforms like Google and Meta. By fostering open solutions, AdImageNet promotes creativity and innovation in advertising.
## Source Data
The data is derived from a comprehensive collection of programmatic creative images hosted by [Project300x250.com](https://www.project300x250.com). The creative text was extracted from each image using Google's Vision API.
# Dataset Use
## Use Cases
AdImageNet can serve a variety of purposes, including language understanding, natural language processing, machine learning model training, and performance evaluation. Researchers and practitioners can use this dataset to fine-tune AI models that generate unique ad copy based on programmatic ad text. These models offer a starting point for developing effective marketing content and encouraging creativity in advertising.
## Usage Caveats
As this dataset represents a sampled subset, it is advisable to check regularly for updates and improvements. The full dataset contains ~18K creative images. Researchers can also reach out to the dataset author for access to the complete dataset available at [Project300x250.com](https://www.project300x250.com). |
AdapterOcean/augmentatio-standardized_cluster_7 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 71839225
num_examples: 7171
download_size: 20017335
dataset_size: 71839225
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "augmentatio-standardized_cluster_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jayem-11/mozilla_commonvoice_hackathon_preprocessed_train_batch_2 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: input_length
dtype: int64
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
- name: labels_length
dtype: int64
splits:
- name: train
num_bytes: 15584501798.875
num_examples: 13689
download_size: 4765376085
dataset_size: 15584501798.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mozilla_commonvoice_hackathon_preprocessed_train_batch_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lifan-Z/tox-antitox-proteins | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- biology
- medical
pretty_name: tox-antitox-proteins
size_categories:
- n<1K
---
This dataset is used for fine-tuning ProtGPT2. The features are ['attention_mask', 'input_ids'], with no 'labels'.
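For context, a causal language-modeling collator essentially creates the missing labels by copying `input_ids`, with padding positions masked to -100 so they are ignored by the loss. A minimal sketch of that idea (an illustration, not the actual `DataCollatorForLanguageModeling` code):

```python
PAD_ID = 0  # assumed pad token id for this sketch

def collate(batch):
    # Pad every example to the longest sequence in the batch
    max_len = max(len(ex["input_ids"]) for ex in batch)
    out = {"input_ids": [], "attention_mask": [], "labels": []}
    for ex in batch:
        pad = max_len - len(ex["input_ids"])
        ids = ex["input_ids"] + [PAD_ID] * pad
        mask = ex["attention_mask"] + [0] * pad
        # Labels are a copy of input_ids, with padding masked to -100
        labels = [t if m == 1 else -100 for t, m in zip(ids, mask)]
        out["input_ids"].append(ids)
        out["attention_mask"].append(mask)
        out["labels"].append(labels)
    return out

batch = collate([
    {"input_ids": [5, 6, 7], "attention_mask": [1, 1, 1]},
    {"input_ids": [8, 9], "attention_mask": [1, 1]},
])
print(batch["labels"])  # [[5, 6, 7], [8, 9, -100]]
```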
After applying DataCollatorForLanguageModeling and a DataLoader, the features will be ['attention_mask', 'input_ids', 'labels']. |
datajuicer/the-pile-pubmed-central-refined-by-data-juicer | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- data-juicer
- pretraining
size_categories:
- 1M<n<10M
---
# The Pile -- PubMed Central (refined by Data-Juicer)
A refined version of the PubMed Central dataset in The Pile, produced by [Data-Juicer](https://github.com/alibaba/data-juicer). Some "bad" samples were removed from the original dataset to make it higher quality.
This dataset is typically used to pretrain a large language model.
**Notice**: Here is a small subset for previewing. The whole dataset is available [here](https://dail-wlcb.oss-cn-wulanchabu.aliyuncs.com/LLM_data/our_refined_datasets/pretraining/the-pile-pubmed-central-refine-result.jsonl) (About 83G).
## Dataset Information
- Number of samples: 2,694,860 (Keep ~86.96% from the original dataset)
## Refining Recipe
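The recipe below chains mappers and filters; each filter keeps a sample only if its statistic falls inside the configured bounds. As a rough illustration of the idea behind `words_num_filter` (a simplified sketch, not the actual Data-Juicer implementation):

```python
def words_num_filter(sample, min_num=20, max_num=23305):
    # Keep the sample only if its whitespace-token count is within bounds
    n_words = len(sample["text"].split())
    return min_num <= n_words <= max_num

samples = [{"text": "too short"}, {"text": "word " * 100}]
kept = [s for s in samples if words_num_filter(s)]
print(len(kept))  # 1
```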
```yaml
# global parameters
project_name: 'Data-Juicer-recipes-pubmed-central'
dataset_path: '/path/to/your/dataset'  # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'

np: 50  # number of subprocesses to process your dataset
open_tracer: true

# process schedule
# a list of several process operators with their arguments
process:
  - clean_email_mapper:
  - clean_links_mapper:
  - fix_unicode_mapper:
  - punctuation_normalization_mapper:
  - whitespace_normalization_mapper:

  - alphanumeric_filter:  # 89217
      tokenization: false
      min_ratio: 0.2787  # 3sigma
  - average_line_length_filter:  # for code
      max_len: 1200  # < 3sigma (1478) -- 7410
  - character_repetition_filter:
      rep_len: 10
      max_ratio: 0.3741  # 3sigma -- 65849
  - flagged_words_filter:
      lang: en
      tokenization: true
      max_ratio: 0.00195  # 3sigma -- 8305
  - language_id_score_filter:  # remove language filter
      min_score: 0.5  # 272359
  - maximum_line_length_filter:  # for code
      max_len: 7328  # remove 23808 samples
  - perplexity_filter:
      lang: en
      max_ppl: 8000  # remove 173883 samples
  - special_characters_filter:
      max_ratio: 0.842  # remove 87661 samples
  - text_length_filter:
      max_len: 136028  # 3sigma -- 15118
  - words_num_filter:
      lang: en
      tokenization: true
      min_num: 20  # remove 176537 samples
      max_num: 23305  # remove 15016 samples
  - word_repetition_filter:
      lang: en
      tokenization: true
      rep_len: 10
      max_ratio: 0.5981  # 3sigma -- 93843

  - document_simhash_deduplicator:
      tokenization: space
      window_size: 6
      lowercase: true
      ignore_pattern: '\p{P}'
      num_blocks: 6
      hamming_distance: 4
``` |
Back-up/stock-data | ---
dataset_info:
features:
- name: time
dtype: date32
- name: open
dtype: int64
- name: high
dtype: int64
- name: low
dtype: int64
- name: close
dtype: int64
- name: volume
dtype: int64
- name: ticker
dtype: string
splits:
- name: train
num_bytes: 339711
num_examples: 6661
download_size: 169179
dataset_size: 339711
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EarthnDusk/comfypractice-nodes | ---
license: creativeml-openrail-m
---
|
junisky/junisky_test | ---
license: other
task_categories:
- text-classification
- token-classification
- translation
tags:
- code
- chemistry
- junisky-tag
- music
language:
- aa
- ko
- xx
size_categories:
- jjj
pretty_name: Junisky's test data
--- |
AdapterOcean/gorilla_16k_standardized_cluster_0_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3019664
num_examples: 5246
download_size: 0
dataset_size: 3019664
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "gorilla_16k_standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Amram/pdf_files | ---
license: openrail
---
|
ramixpe/ramixpe | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 19183
num_examples: 70
download_size: 8097
dataset_size: 19183
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Amani123/donutdataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 77291761.0
num_examples: 96
download_size: 76288174
dataset_size: 77291761.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "donutdataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |