| datasetId | card |
|---|---|
ai2lumos/lumos_web_agent_ground_iterative | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- language-agent
- web-agent
- web-browsing
- reasoning
- grounding
size_categories:
- 1K<n<10K
---
# 🪄 Agent Lumos: Unified and Modular Training for Open-Source Language Agents
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="https://arxiv.org/abs/2311.05657">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
🤗<a href="https://huggingface.co/spaces/ai2lumos/lumos_data_demo">[Demo]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
- 🧩 **Lumos** consists of planning, grounding, and execution modules built on LLAMA-2-7B/13B and off-the-shelf APIs.
- 🤗 **Lumos** utilizes a unified data format that encompasses multiple task types, thereby enabling the developed agent framework to conveniently support a range of interactive tasks.
* 🌍 **Diverse Training Data**:
- 🌍 **Lumos** is trained with ~56K diverse, high-quality subgoal/action annotations converted from ground-truth reasoning steps in existing benchmarks using GPT-4.
- ⚒️ **Lumos** data can be instrumental for future research in developing open-source agents for complex interactive tasks.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** matches or even beats **GPT-series** agents on the web task Mind2Web and the complex QA task HotpotQA, and **larger open agents** on math and multimodal tasks.
- 🚀 **Lumos** exceeds contemporaneous agents that have been **fine-tuned** with in-domain HotpotQA, Mind2Web and ScienceQA annotations, such as **FireAct**, **AgentLM**, and **AutoAct**.
- 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **integrated** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on unseen tasks, WebShop and InterCode_SQL.
## Data Overview
`lumos_web_agent_ground_iterative` is the data for training the **grounding** module on the **web agent** task in the **Lumos-Iterative (Lumos-I)** formulation.
The source of the training annotations is shown below:
| Task | Number |
|---|---|
|Mind2Web|1009|
## Models Trained with the Data
`lumos_web_agent_ground_iterative` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_web_agent_ground_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_web_agent_ground_iterative) |
|`lumos_web_agent_ground_iterative-13B`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_web_agent_ground_iterative-13B) |
|`lumos_unified_ground_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_ground_iterative) |
|`lumos_unified_ground_iterative-13B`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_ground_iterative-13B) |
## Citation
If you find this work relevant to your research, please feel free to cite it!
```
@article{yin2023lumos,
title={Agent Lumos: Unified and Modular Training for Open-Source Language Agents},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
journal={arXiv preprint arXiv:2311.05657},
year={2023}
}
``` |
Back-up/review-crawl-data-v1 | ---
dataset_info:
features:
- name: id
dtype: string
- name: titles
dtype: string
- name: url
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 672799681
num_examples: 73226
download_size: 135848963
dataset_size: 672799681
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ishajo/autotrain-data-beproj_meeting_summarization_usingt5 | ---
language:
- en
task_categories:
- summarization
---
# AutoTrain Dataset for project: beproj_meeting_summarization_usingt5
## Dataset Description
This dataset has been automatically processed by AutoTrain for project beproj_meeting_summarization_usingt5.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": "16e6a86e9189b5566c19bc7fc48d923139da9bd2",
"text": "(CNN)A TV series based on the 1999 sci-fi film \"Galaxy Quest\" is in the works at Paramount Television. The DreamWorks film centered on the cast of a canceled space TV show who are accidentally sent to a spaceship and must save an alien nation. TV Land's 'Younger' renewed for second season . The film's scribe Robert Gordon is expected to write the TV version and executive produce with the film's director Dean Parisot, producer Mark Johnson and Johnson's producing partner Melissa Bernstein. 'The Voice' coaches CeeLo Green, Gwen Stefani and Usher to return . The film starred Tim Allen, Sigourney Weaver, Alan Rickman, Tony Shalhoub, Sam Rockwell, Daryl Mitchell and Enrico Colantoni. PBS to conduct \"Internal Review\" over Ben Affleck's request to hide slave-owner ancestry . \"Galaxy Quest\" is the latest movie to be adapted for the small screen. This pilot season, ABC has \"Uncle Buck,\" CBS has \"Rush Hour\" and Fox has \"Minority Report.\" Paramount Television specifically has turned several of the studio's hit films into TV series. \"School of Rock\" will debut on Nickelodeon later this year, and USA recently ordered a pilot for \"Shooter,\" based on the Mark Wahlberg film. \u00a92015 The Hollywood Reporter. All rights reserved.",
"target": "\"Galaxy Quest\" TV series in the works .\nShow would be based on the cult classic 1999 sci-fi comedy ."
},
{
"feat_id": "3815d19af18ff22be6ad6095722d7367bb7271af",
"text": "A paramedic who pretended he was gay to get close to women before sexually assaulting them has been struck off the medical register. Christopher Bridger, 25, from Stevenage, Hertfordshire, attacked three women after separate drinking sessions and was jailed for 12 years after being convicted of rape and four other abuse charges last year. The HCPC Conduct and Competence Committee today removed him from the register after hearing his crimes and describing them as 'a serious breach of trust'. Christopher Bridger, 25, who was jailed for 12 years after he sexually assaulted three women, has been struck off the medical register . A jury at Guildford Crown Court, Surrey, found him guilty of raping a fellow student while he was studying to be a paramedic at St George's University Hospital in London in 2008. He had accompanied her back to her halls following a Freshers' Week fancy dress party and began kissing and cuddling her, despite being told to stop. He then raped her but astonishingly broke down in tears afterwards and said: 'I just want to like girls.' The woman told the jury she ended up comforting Bridger, despite knowing he was in the wrong. His other victims were co-workers at South East Coast Ambulance Service NHS Trust, where he started working in 2010. A lesbian colleague told the court she was molested by Bridger after a staff Christmas party while her girlfriend was in the same hotel bed. The HCPC Conduct and Competence Committee found his crimes were a 'serious breach of trust' The women, aged in their 20s - who cannot be named for legal reasons - were forced to relive their ordeals after the ambulance worker accused them of lying during a trial in July last year. His colleague explained how Bridger came up to her hotel room after she got extremely intoxicated at the party in December 2011. 
He climbed into bed between his victim and her partner and the woman awoke to find him sexually assaulting her and pleasuring himself as her girlfriend lay asleep next to them. She kept quiet, fearing her partner wouldn't understand what had happened, but the day after on his birthday, he sheepishly sent the woman a number of text messages apologising for his behaviour. One text said: 'It was one night of stupidity for which I will be eternally sorry.' Another said: 'You don't have to forgive me, I'm just telling you the truth. I'm ashamed of myself.' His final victim was also a colleague from the South East Coast Ambulance Service, who said she was sexually assaulted after she allowed him to stay at her house after a dinner in October 2012. Bridger was suspended from work after the incidents were reported to South East Coast Ambulance Services bosses in 2012. He was jailed for 12 years and ordered to sign the Sex Offenders' Register for life but failed to attend today's medical register hearing. Striking him off, chair of the HCPC panel, Nicola Bastin said: 'The panel has heard that the offences were committed against three vulnerable young women who were known to the registrant as friends and colleagues including a student paramedic. This represented a serious breach of trust. 'The panel has also heard that the women were vulnerable due to the effects of alcohol and that one of the offences was committed when the woman was asleep. 'The panel has considered this case very carefully and cannot find any redeeming features on the part of the registrant. A jury at Guildford Crown Court, Surrey, found him guilty of rape and four other sex abuse charges . 'The panel takes the view that this case is serious, it does indeed involve abuse of trust, sexual abuse of a serious nature and, furthermore, there is no evidence of insight on the part of the registrant.' 
The HCPC panel chairman Brian Wroe added: 'The registrant entered a plea of not guilty to each of the charges and was found guilty following a 13 day trial. 'This showed Christopher Bridger lacks the insight into the circumstances which resulted in the convictions and does not take responsibility for his actions.' When he was sentenced in September, Mr Recorder Mark Milliken-Smith told him: 'These were wicked, mean and utterly cowardly offences which have and will have serious consequences on these young women and those around them for a very long time.'",
"target": "Christopher Bridger, 25, attacked three women after drinking sessions .\nHe was convicted of rape and four other abuse charges at court last year .\nAmbulance worker told women he was gay before assaulting them in bed .\nHCPC Conduct and Competence Committee removed him from register .\nPanel described crimes against three women as 'a serious breach of trust'"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2400 |
| valid | 600 |
|
habanoz/airoboros-3.1-no-mathjson | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 76681303.64476947
num_examples: 42129
download_size: 47376788
dataset_size: 76681303.64476947
---
# Dataset Card for "airoboros-3.1-no-mathjson"
This is a copy of "jondurbin/airoboros-3.1" excluding mathjson instances. |
UchihaMadara/validate-dataset-model1 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sentiments
sequence: int64
splits:
- name: train
num_bytes: 98465
num_examples: 800
download_size: 44572
dataset_size: 98465
---
# Dataset Card for "validate-dataset-model1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kheopss/large_dataset_from_prompt | ---
dataset_info:
features:
- name: json_input
dtype: string
- name: titre
dtype: string
- name: prompt0
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 72999584
num_examples: 9800
download_size: 23396010
dataset_size: 72999584
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vincentiussgk/pneumonia_TA_split_classlabel | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: file_path
dtype: string
- name: label
dtype: int64
- name: image
dtype: image
splits:
- name: test
num_bytes: 78428574.0
num_examples: 225
- name: train
num_bytes: 339946620.0
num_examples: 900
download_size: 417503898
dataset_size: 418375194.0
---
# Dataset Card for "pneumonia_TA_split_classlabel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yhavinga/imdb_dutch | ---
pretty_name: IMDB
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- nl
- en
license:
- other
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: imdb-movie-reviews
train-eval-index:
- config: plain_text
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
dataset_info:
features:
- name: text
dtype: string
- name: text_en
dtype: string
- name: label
dtype:
class_label:
names:
0: neg
1: pos
config_name: plain_text
splits:
- name: train
num_bytes: 69589646
num_examples: 24992
- name: test
num_bytes: 67958995
num_examples: 24992
- name: unsupervised
num_bytes: 139649169
num_examples: 49984
download_size: 108170940
dataset_size: 277197810
---
# Dataset Card for "imdb_dutch"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Large Movie Review Dataset translated to Dutch.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets.
We provide a set of 24,992 highly polar movie reviews for training, and 24,992 for testing. There is additional unlabeled data for use as well.
### Translation to Dutch
The dataset was translated with [yhavinga/ul2-large-en-nl](https://huggingface.co/yhavinga/ul2-large-en-nl).
The translation code is available in the src directory.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
This dataset contains Dutch and English data.
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 108 MiB
- **Size of the generated dataset:** 277 MiB
An example of 'train' looks as follows.
```
{
"label": 0,
"text": "Holy shit. Dit was de slechtste film die ik in lange tijd heb gezien.",
"text_en": "Holy crap. This was the worst film I have seen in a long time."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature.
- `text_en`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0), `pos` (1).
### Data Splits
| name |train|unsupervised|test |
|----------|----:|-----------:|----:|
|plain_text|24992| 49984|24992|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
Thanks to [@ghazi-f](https://github.com/ghazi-f), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding
the English `imdb` dataset.
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
airnicco8/umls_sent_trans | ---
license: gpl-3.0
---
|
mokyu2106/iroiro_data | ---
license: unknown
---
■■ LECO & DEFF storage ■■
This repository mainly contains LECO files for use with XL.
Owing to the creator's circumstances, the ponyV6 versions are by far the most numerous (with ebara in second place).
Note: for brief usage instructions, see the txt files in the subfolders. |
liuyanchen1015/MULTI_VALUE_stsb_proximal_distal_demonstratives | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 11005
num_examples: 50
- name: test
num_bytes: 6828
num_examples: 40
- name: train
num_bytes: 23018
num_examples: 98
download_size: 37881
dataset_size: 40851
---
# Dataset Card for "MULTI_VALUE_stsb_proximal_distal_demonstratives"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lollitor/FSPROTEINMARKED | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: ID
dtype: string
- name: INPUT
dtype: string
splits:
- name: train
num_bytes: 17749800
num_examples: 16245
download_size: 262288
dataset_size: 17749800
---
# Dataset Card for "FSPROTEINMARKED"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
malhajar/SlimOrca-tr | ---
dataset_info:
features:
- name: conversations
dtype: string
- name: conversations-turkish
dtype: string
splits:
- name: train
num_bytes: 2017451170
num_examples: 517518
download_size: 1025268484
dataset_size: 2017451170
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SlimOrca-tr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kashif/App_Flow | ---
license: apache-2.0
task_categories:
- time-series-forecasting
---
# App Flow
This dataset consists of hourly maximum traffic flow for 128 systems deployed on 16 logical data centers, resulting in 1083 different time series in total.
The length of each series is more than 4 months. Each time series is divided into two segments for training and testing with a ratio of 32:1.
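The 32:1 split can be sketched as follows. This is a minimal illustration assuming a simple contiguous split; `split_series` and the synthetic series are stand-ins, not the authors' actual preprocessing code.

```python
def split_series(series, ratio=(32, 1)):
    """Split one time series into contiguous train/test segments
    with the given ratio (train:test), e.g. 32:1."""
    train_part, test_part = ratio
    cut = len(series) * train_part // (train_part + test_part)
    return series[:cut], series[cut:]

# A small synthetic hourly series for illustration
series = list(range(3300))
train, test = split_series(series)
print(len(train), len(test))  # 3200 100
```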
This dataset was collected at Ant Group and does not contain any Personal Identifiable Information and is desensitized and encrypted. |
shawmoon/ekattor_alpaca2 | ---
license: apache-2.0
---
|
jlbaker361/anime_faces_40k | ---
dataset_info:
features:
- name: image
dtype: image
- name: split
dtype: string
- name: src
dtype: string
- name: style
dtype: string
splits:
- name: train
num_bytes: 2163317587.0
num_examples: 40000
download_size: 2152344721
dataset_size: 2163317587.0
---
# Dataset Card for "anime_faces_40k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bdpoff/webllm-json-editor | ---
license: apache-2.0
---
|
ErasmoMestreDasVozez/audio | ---
license: openrail
---
|
Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface_sampled | ---
license: apache-2.0
---
|
MongoDB/airbnb_embeddings | ---
license: apache-2.0
task_categories:
- question-answering
- text-retrieval
- text-to-image
language:
- en
tags:
- vector search
- multimodal
- retrieval augmented generation
size_categories:
- 1K<n<10K
---
## Overview
This dataset consists of AirBnB listings with property descriptions, reviews, and other metadata.
It also contains text embeddings of the property descriptions as well as image embeddings of the listing image. The text embeddings were created using OpenAI's **text-embedding-3-small** model and the image embeddings using OpenAI's [**clip-vit-base-patch32**](https://huggingface.co/openai/clip-vit-base-patch32) model available on Hugging Face.
The text embeddings have 1536 dimensions, while the image embeddings have 512 dimensions.
## Dataset Structure
Here is a full list of fields contained in the dataset. Some noteworthy fields have been highlighted:
- _id: Unique identifier for the listing
- listing_url: URL for the listing on AirBnB
- **name**: Title or name of the listing
- **summary**: Short overview of listing
- **space**: Short description of the space, amenities etc.
- **description**: Full listing description
- neighborhood_overview: Description of surrounding area
- notes: Special instructions or notes
- transit: Nearby public transportation options
- access: How to access the property. Door codes etc.
- interaction: Host's preferred interaction medium
- house_rules: Rules guests must follow
- **property_type**: Type of property
- room_type: Listing's room category
- bed_type: Type of bed provided
- minimum_nights: Minimum stay required
- maximum_nights: Maximum stay allowed
- cancellation_policy: Terms for cancelling booking
- first_review: Date of first review
- last_review: Date of latest review
- **accommodates**: Number of guests accommodated
- **bedrooms**: Number of bedrooms available
- **beds**: Number of beds available
- number_of_reviews: Total reviews received
- bathrooms: Number of bathrooms available
- **amenities**: List of amenities offered
- **price**: Nightly price for listing
- security_deposit: Required security deposit amount
- cleaning_fee: Additional cleaning fee charged
- extra_people: Fee for additional guests
- guests_included: Number of guests included in the base price
- **images**: Links to listing images
- host: Information about the host
- **address**: Physical address of listing
- **availability**: Availability dates for listing
- **review_scores**: Aggregate review scores
- reviews: Individual guest reviews
- weekly_price: Discounted price for week
- monthly_price: Discounted price for month
- **text_embeddings**: Embeddings of the property description in the `space` field
- **image_embeddings**: Embeddings of the `picture_url` in the `images` field
## Usage
This dataset can be useful for:
- Building Multimodal Search applications. Embed text queries using the CLIP model, and retrieve relevant images using the image embeddings provided.
- Building Hybrid Search applications. Use the embeddings provided for vector search and the metadata fields for pre-filtering and/or full-text search.
- Building RAG applications
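As a minimal illustration of the vector-search idea, a brute-force scan over embeddings can be sketched as below. The toy 3-dimensional vectors and listing names are made up for the example; in practice you would use the 1536-dimensional `text_embeddings` or 512-dimensional `image_embeddings` and a proper vector index rather than a linear scan.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, documents, k=2):
    """Rank documents by cosine similarity of their embedding to the query."""
    scored = [(cosine_similarity(query, doc["embedding"]), doc["name"]) for doc in documents]
    scored.sort(reverse=True)
    return [name for _, name in scored[:k]]

# Toy 3-dimensional embeddings standing in for the real 1536-d vectors
listings = [
    {"name": "Cozy loft", "embedding": [0.9, 0.1, 0.0]},
    {"name": "Beach house", "embedding": [0.0, 1.0, 0.1]},
    {"name": "City studio", "embedding": [0.8, 0.2, 0.1]},
]
print(top_k([1.0, 0.0, 0.0], listings))  # ['Cozy loft', 'City studio']
```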
## Ingest Data
To experiment with this dataset using MongoDB Atlas, first [create a MongoDB Atlas account](https://www.mongodb.com/cloud/atlas/register?utm_campaign=devrel&utm_source=community&utm_medium=organic_social&utm_content=Hugging%20Face%20Dataset&utm_term=apoorva.joshi).
You can then use the following script to load this dataset into your MongoDB Atlas cluster:
```python
import os
from pymongo import MongoClient
from datasets import load_dataset
from bson import json_util

# MongoDB Atlas URI and client setup
uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)

# Change to the appropriate database and collection names
db_name = 'your_database_name'  # Change this to your actual database name
collection_name = 'airbnb_embeddings'  # Change this to your actual collection name
collection = client[db_name][collection_name]

# Load the "airbnb_embeddings" dataset from Hugging Face
dataset = load_dataset("MongoDB/airbnb_embeddings")

insert_data = []

# Iterate through the dataset and prepare the documents for insertion,
# ingesting 1000 records into the database at a time
for item in dataset['train']:
    # Convert the dataset item to MongoDB document format
    doc_item = json_util.loads(json_util.dumps(item))
    insert_data.append(doc_item)

    # Insert in batches of 1000 documents
    if len(insert_data) == 1000:
        collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

# Insert any remaining documents
if len(insert_data) > 0:
    collection.insert_many(insert_data)
    print("{} records ingested".format(len(insert_data)))

print("All records ingested successfully!")
``` |
jkorsvik/nowiki_abstract_urls_20230120 | ---
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: abstract
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 126004455
num_examples: 605457
download_size: 66525868
dataset_size: 126004455
---
# Dataset Card for "nowiki_abstract_urls_20230120"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Thermostatic/UltraAlpaca | ---
license: mit
---
|
achinthani/argilla-test | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for argilla-test
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("achinthani/argilla-test")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("achinthani/argilla-test")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| sentiment | Sentiment | label_selection | True | N/A | ['positive', 'neutral', 'negative'] |
| mixed-emotion | Mixed-emotion | multi_label_selection | True | N/A | ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'] |
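As an illustration of how these two question types constrain responses, the snippet below validates a candidate answer against the allowed values from the table. This is a plain-Python sketch with the question definitions hard-coded for the example, not part of the Argilla API:

```python
# Hypothetical validator mirroring the two questions defined above.
QUESTIONS = {
    "sentiment": {
        "type": "label_selection",
        "labels": ["positive", "neutral", "negative"],
    },
    "mixed-emotion": {
        "type": "multi_label_selection",
        "labels": ["joy", "anger", "sadness", "fear", "surprise", "love"],
    },
}

def validate_response(question: str, value) -> bool:
    """Check a response against a question's type and allowed labels."""
    spec = QUESTIONS[question]
    if spec["type"] == "label_selection":
        # Exactly one label from the allowed set.
        return isinstance(value, str) and value in spec["labels"]
    if spec["type"] == "multi_label_selection":
        # One or more labels, all from the allowed set, no duplicates.
        return (
            isinstance(value, list)
            and len(value) > 0
            and len(set(value)) == len(value)
            and all(v in spec["labels"] for v in value)
        )
    return False
```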
The **suggestions** are human- or machine-generated recommendations for each question, provided to assist the annotator during the annotation process. Each suggestion is linked to an existing question and is named by appending "-suggestion" and "-suggestion-metadata" to the question name, containing the suggested value(s) and their metadata, respectively. The possible values are therefore the same as in the table above.
The **metadata** is a dictionary that can be used to provide additional information about each dataset record, for example extra context for the annotators, a link to the original source, or details such as the author or the date. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are optional as well: a plain string used to provide instructions to the annotators. Find them in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"text": "i didnt feel humiliated"
},
"metadata": {},
"responses": [
{
"status": "submitted",
"user_id": "1566e368-1256-40f4-9dbf-a022ba5d117c",
"values": {
"mixed-emotion": {
"value": [
"anger"
]
},
"sentiment": {
"value": "positive"
}
}
}
],
"suggestions": [],
"vectors": {}
}
```
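Since a record is plain JSON, the submitted annotations can be collected without any Argilla-specific tooling. The sketch below works over a trimmed copy of the example record above:

```python
# A trimmed copy of the example record above, as a plain Python dict.
record = {
    "external_id": None,
    "fields": {"text": "i didnt feel humiliated"},
    "metadata": {},
    "responses": [
        {
            "status": "submitted",
            "user_id": "1566e368-1256-40f4-9dbf-a022ba5d117c",
            "values": {
                "mixed-emotion": {"value": ["anger"]},
                "sentiment": {"value": "positive"},
            },
        }
    ],
    "suggestions": [],
}

def submitted_values(record):
    """Map each question to the list of values from submitted responses only."""
    out = {}
    for response in record.get("responses", []):
        if response.get("status") != "submitted":
            continue  # ignore draft or discarded responses
        for question, payload in response["values"].items():
            out.setdefault(question, []).append(payload["value"])
    return out

print(submitted_values(record))
# {'mixed-emotion': [['anger']], 'sentiment': ['positive']}
```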
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"metadata": "{}",
"mixed-emotion": [
{
"status": "submitted",
"user_id": "1566e368-1256-40f4-9dbf-a022ba5d117c",
"value": [
"anger"
]
}
],
"mixed-emotion-suggestion": null,
"mixed-emotion-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"sentiment": [
{
"status": "submitted",
"user_id": "1566e368-1256-40f4-9dbf-a022ba5d117c",
"value": "positive"
}
],
"sentiment-suggestion": null,
"sentiment-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"text": "i didnt feel humiliated"
}
```
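The correspondence between the two layouts can be sketched as a small conversion function. This is a simplification for illustration only (it fills empty suggestions with the null stubs shown above and ignores vectors); the real export is handled by Argilla itself:

```python
import json

def to_datasets_row(record: dict, questions: list) -> dict:
    """Flatten an Argilla-style record into the HF `datasets` layout shown above."""
    row = {
        "external_id": record.get("external_id"),
        # metadata is serialized to a JSON string in the flat layout
        "metadata": json.dumps(record.get("metadata", {})),
        # each field becomes a top-level column
        **record.get("fields", {}),
    }
    for q in questions:
        # One entry per response, carrying status, user_id and the value.
        row[q] = [
            {"status": r["status"], "user_id": r["user_id"], "value": r["values"][q]["value"]}
            for r in record.get("responses", [])
            if q in r.get("values", {})
        ]
        # Missing suggestions flatten to null plus an empty metadata stub.
        row[f"{q}-suggestion"] = None
        row[f"{q}-suggestion-metadata"] = {"agent": None, "score": None, "type": None}
    return row
```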
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **sentiment** is of type `label_selection` with the following allowed values ['positive', 'neutral', 'negative'].
* **mixed-emotion** is of type `multi_label_selection` with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'].
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **sentiment-suggestion** is of type `label_selection` with the following allowed values ['positive', 'neutral', 'negative'].
* (optional) **mixed-emotion-suggestion** is of type `multi_label_selection` with the following allowed values ['joy', 'anger', 'sadness', 'fear', 'surprise', 'love'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record, for example extra context for the annotators, a link to the original source, or details such as the author or the date. The metadata can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
---
pretty_name: Evaluation run of Kukedlc/NeuralGanesha-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Kukedlc/NeuralGanesha-7b](https://huggingface.co/Kukedlc/NeuralGanesha-7b) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Kukedlc__NeuralGanesha-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-29T19:01:57.597931](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralGanesha-7b/blob/main/results_2024-03-29T19-01-57.597931.json) (note
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6506645519041115,\n\
\ \"acc_stderr\": 0.03221755959909156,\n \"acc_norm\": 0.6499101448191003,\n\
\ \"acc_norm_stderr\": 0.032892904909528356,\n \"mc1\": 0.5973072215422277,\n\
\ \"mc1_stderr\": 0.01716883093518721,\n \"mc2\": 0.7414366005927558,\n\
\ \"mc2_stderr\": 0.014446620698720188\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7150170648464164,\n \"acc_stderr\": 0.013191348179838793,\n\
\ \"acc_norm\": 0.7397610921501706,\n \"acc_norm_stderr\": 0.012821930225112573\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7134037044413464,\n\
\ \"acc_stderr\": 0.004512471612415587,\n \"acc_norm\": 0.8884684325831508,\n\
\ \"acc_norm_stderr\": 0.0031414591751392712\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.03738520676119669,\n\
\ \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.03738520676119669\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249386,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249386\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.630057803468208,\n\
\ \"acc_stderr\": 0.0368122963339432,\n \"acc_norm\": 0.630057803468208,\n\
\ \"acc_norm_stderr\": 0.0368122963339432\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4021164021164021,\n \"acc_stderr\": 0.025253032554997695,\n \"\
acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.025253032554997695\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7870967741935484,\n \"acc_stderr\": 0.02328766512726855,\n \"\
acc_norm\": 0.7870967741935484,\n \"acc_norm_stderr\": 0.02328766512726855\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5221674876847291,\n \"acc_stderr\": 0.035145285621750066,\n \"\
acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.035145285621750066\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644237,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644237\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n\
\ \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3333333333333333,\n \"acc_stderr\": 0.02874204090394848,\n \
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.02874204090394848\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \
\ \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.39072847682119205,\n \"acc_stderr\": 0.03983798306659806,\n \"\
acc_norm\": 0.39072847682119205,\n \"acc_norm_stderr\": 0.03983798306659806\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"\
acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"\
acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8382352941176471,\n \"acc_stderr\": 0.025845017986926917,\n \"\
acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.025845017986926917\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8016877637130801,\n \"acc_stderr\": 0.02595502084162113,\n \
\ \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.02595502084162113\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n\
\ \"acc_stderr\": 0.046695106638751906,\n \"acc_norm\": 0.4107142857142857,\n\
\ \"acc_norm_stderr\": 0.046695106638751906\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406974,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406974\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8199233716475096,\n\
\ \"acc_stderr\": 0.013740797258579825,\n \"acc_norm\": 0.8199233716475096,\n\
\ \"acc_norm_stderr\": 0.013740797258579825\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.02394851290546836,\n\
\ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.02394851290546836\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4324022346368715,\n\
\ \"acc_stderr\": 0.01656897123354861,\n \"acc_norm\": 0.4324022346368715,\n\
\ \"acc_norm_stderr\": 0.01656897123354861\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.025917806117147158,\n\
\ \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.025917806117147158\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n\
\ \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n\
\ \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135114,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135114\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47392438070404175,\n\
\ \"acc_stderr\": 0.012752858346533126,\n \"acc_norm\": 0.47392438070404175,\n\
\ \"acc_norm_stderr\": 0.012752858346533126\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6691176470588235,\n \"acc_stderr\": 0.02858270975389845,\n\
\ \"acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.02858270975389845\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.673202614379085,\n \"acc_stderr\": 0.018975427920507208,\n \
\ \"acc_norm\": 0.673202614379085,\n \"acc_norm_stderr\": 0.018975427920507208\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n\
\ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5973072215422277,\n\
\ \"mc1_stderr\": 0.01716883093518721,\n \"mc2\": 0.7414366005927558,\n\
\ \"mc2_stderr\": 0.014446620698720188\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.840568271507498,\n \"acc_stderr\": 0.010288617479454764\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7050796057619408,\n \
\ \"acc_stderr\": 0.01256069801095477\n }\n}\n```"
repo_url: https://huggingface.co/Kukedlc/NeuralGanesha-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|arc:challenge|25_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|gsm8k|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hellaswag|10_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T19-01-57.597931.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-29T19-01-57.597931.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- '**/details_harness|winogrande|5_2024-03-29T19-01-57.597931.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-29T19-01-57.597931.parquet'
- config_name: results
data_files:
- split: 2024_03_29T19_01_57.597931
path:
- results_2024-03-29T19-01-57.597931.parquet
- split: latest
path:
- results_2024-03-29T19-01-57.597931.parquet
---
# Dataset Card for Evaluation run of Kukedlc/NeuralGanesha-7b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Kukedlc/NeuralGanesha-7b](https://huggingface.co/Kukedlc/NeuralGanesha-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Each configuration exposes a timestamped split plus a "latest" alias
# (see the data_files section above); there is no "train" split.
data = load_dataset("open-llm-leaderboard/details_Kukedlc__NeuralGanesha-7b",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-29T19:01:57.597931](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralGanesha-7b/blob/main/results_2024-03-29T19-01-57.597931.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task has its own timestamped and "latest" splits):
```python
{
"all": {
"acc": 0.6506645519041115,
"acc_stderr": 0.03221755959909156,
"acc_norm": 0.6499101448191003,
"acc_norm_stderr": 0.032892904909528356,
"mc1": 0.5973072215422277,
"mc1_stderr": 0.01716883093518721,
"mc2": 0.7414366005927558,
"mc2_stderr": 0.014446620698720188
},
"harness|arc:challenge|25": {
"acc": 0.7150170648464164,
"acc_stderr": 0.013191348179838793,
"acc_norm": 0.7397610921501706,
"acc_norm_stderr": 0.012821930225112573
},
"harness|hellaswag|10": {
"acc": 0.7134037044413464,
"acc_stderr": 0.004512471612415587,
"acc_norm": 0.8884684325831508,
"acc_norm_stderr": 0.0031414591751392712
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.03738520676119669,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.03738520676119669
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249386,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249386
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.630057803468208,
"acc_stderr": 0.0368122963339432,
"acc_norm": 0.630057803468208,
"acc_norm_stderr": 0.0368122963339432
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4021164021164021,
"acc_stderr": 0.025253032554997695,
"acc_norm": 0.4021164021164021,
"acc_norm_stderr": 0.025253032554997695
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7870967741935484,
"acc_stderr": 0.02328766512726855,
"acc_norm": 0.7870967741935484,
"acc_norm_stderr": 0.02328766512726855
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.035145285621750066,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.035145285621750066
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.021995311963644237,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.021995311963644237
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.02874204090394848,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.02874204090394848
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.39072847682119205,
"acc_stderr": 0.03983798306659806,
"acc_norm": 0.39072847682119205,
"acc_norm_stderr": 0.03983798306659806
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.02595502084162113,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.02595502084162113
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.03408997886857529,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.03408997886857529
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4107142857142857,
"acc_stderr": 0.046695106638751906,
"acc_norm": 0.4107142857142857,
"acc_norm_stderr": 0.046695106638751906
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406974,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406974
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8199233716475096,
"acc_stderr": 0.013740797258579825,
"acc_norm": 0.8199233716475096,
"acc_norm_stderr": 0.013740797258579825
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.02394851290546836,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.02394851290546836
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4324022346368715,
"acc_stderr": 0.01656897123354861,
"acc_norm": 0.4324022346368715,
"acc_norm_stderr": 0.01656897123354861
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7124183006535948,
"acc_stderr": 0.025917806117147158,
"acc_norm": 0.7124183006535948,
"acc_norm_stderr": 0.025917806117147158
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.024477222856135114,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.024477222856135114
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47392438070404175,
"acc_stderr": 0.012752858346533126,
"acc_norm": 0.47392438070404175,
"acc_norm_stderr": 0.012752858346533126
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6691176470588235,
"acc_stderr": 0.02858270975389845,
"acc_norm": 0.6691176470588235,
"acc_norm_stderr": 0.02858270975389845
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.673202614379085,
"acc_stderr": 0.018975427920507208,
"acc_norm": 0.673202614379085,
"acc_norm_stderr": 0.018975427920507208
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128448,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128448
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5973072215422277,
"mc1_stderr": 0.01716883093518721,
"mc2": 0.7414366005927558,
"mc2_stderr": 0.014446620698720188
},
"harness|winogrande|5": {
"acc": 0.840568271507498,
"acc_stderr": 0.010288617479454764
},
"harness|gsm8k|5": {
"acc": 0.7050796057619408,
"acc_stderr": 0.01256069801095477
}
}
```
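As a minimal sketch of how the per-task blocks above can be aggregated, the snippet below hand-copies a small subset of the JSON (the `results` dict and the `headline` helper are illustrative names, not part of this dataset's tooling) and averages the headline metric of each task, preferring `acc_norm` over `acc` where both are reported:

```python
# Illustrative only: a hand-copied subset of the results JSON above.
results = {
    "harness|arc:challenge|25": {"acc": 0.7150170648464164, "acc_norm": 0.7397610921501706},
    "harness|hellaswag|10": {"acc": 0.7134037044413464, "acc_norm": 0.8884684325831508},
    "harness|winogrande|5": {"acc": 0.840568271507498},
    "harness|gsm8k|5": {"acc": 0.7050796057619408},
}

def headline(metrics: dict) -> float:
    """Return acc_norm when present, falling back to acc."""
    return metrics.get("acc_norm", metrics["acc"])

scores = {task: headline(m) for task, m in results.items()}
average = sum(scores.values()) / len(scores)
print(f"mean headline score over {len(scores)} tasks: {average:.4f}")
```

The same dict-access pattern applies to the full `results_*.json` file once downloaded; only the set of task keys grows.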
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ShariThomas/dataset | ---
license: mit
---
|
unduck/2022cfpbmortgage | ---
license: unknown
---
|
open-llm-leaderboard/details_eldogbbhed__Peagle-9b | ---
pretty_name: Evaluation run of eldogbbhed/Peagle-9b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [eldogbbhed/Peagle-9b](https://huggingface.co/eldogbbhed/Peagle-9b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_eldogbbhed__Peagle-9b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-11T04:56:46.031904](https://huggingface.co/datasets/open-llm-leaderboard/details_eldogbbhed__Peagle-9b/blob/main/results_2024-03-11T04-56-46.031904.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6480227913815262,\n\
\ \"acc_stderr\": 0.032309008293723906,\n \"acc_norm\": 0.6486844835270161,\n\
\ \"acc_norm_stderr\": 0.03296580206090119,\n \"mc1\": 0.5532435740514076,\n\
\ \"mc1_stderr\": 0.017403977522557144,\n \"mc2\": 0.7015765443534774,\n\
\ \"mc2_stderr\": 0.014919144015455953\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6843003412969283,\n \"acc_stderr\": 0.01358257109581529,\n\
\ \"acc_norm\": 0.7150170648464164,\n \"acc_norm_stderr\": 0.013191348179838795\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6965743875721968,\n\
\ \"acc_stderr\": 0.004587978625582481,\n \"acc_norm\": 0.8734315873332006,\n\
\ \"acc_norm_stderr\": 0.003318093579702919\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n\
\ \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.028049186315695255,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.028049186315695255\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.032400380867927465,\n\
\ \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.032400380867927465\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482758,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482758\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3915343915343915,\n \"acc_stderr\": 0.025138091388851102,\n \"\
acc_norm\": 0.3915343915343915,\n \"acc_norm_stderr\": 0.025138091388851102\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7935483870967742,\n\
\ \"acc_stderr\": 0.023025899617188716,\n \"acc_norm\": 0.7935483870967742,\n\
\ \"acc_norm_stderr\": 0.023025899617188716\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.0315841532404771,\n\
\ \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.0315841532404771\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026705,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026705\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328972,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328972\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.023854795680971118,\n\
\ \"acc_norm\": 0.6692307692307692,\n \"acc_norm_stderr\": 0.023854795680971118\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524572,\n \
\ \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524572\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6428571428571429,\n \"acc_stderr\": 0.031124619309328177,\n\
\ \"acc_norm\": 0.6428571428571429,\n \"acc_norm_stderr\": 0.031124619309328177\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3973509933774834,\n \"acc_stderr\": 0.03995524007681681,\n \"\
acc_norm\": 0.3973509933774834,\n \"acc_norm_stderr\": 0.03995524007681681\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8330275229357799,\n \"acc_stderr\": 0.015990154885073368,\n \"\
acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.015990154885073368\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8284313725490197,\n \"acc_stderr\": 0.026460569561240644,\n \"\
acc_norm\": 0.8284313725490197,\n \"acc_norm_stderr\": 0.026460569561240644\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \
\ \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7557251908396947,\n \"acc_stderr\": 0.037683359597287434,\n\
\ \"acc_norm\": 0.7557251908396947,\n \"acc_norm_stderr\": 0.037683359597287434\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516302,\n \"\
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516302\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n\
\ \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8084291187739464,\n\
\ \"acc_stderr\": 0.014072859310451949,\n \"acc_norm\": 0.8084291187739464,\n\
\ \"acc_norm_stderr\": 0.014072859310451949\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.024027745155265023,\n\
\ \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.024027745155265023\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4659217877094972,\n\
\ \"acc_stderr\": 0.016683615837486867,\n \"acc_norm\": 0.4659217877094972,\n\
\ \"acc_norm_stderr\": 0.016683615837486867\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7091503267973857,\n \"acc_stderr\": 0.02600480036395213,\n\
\ \"acc_norm\": 0.7091503267973857,\n \"acc_norm_stderr\": 0.02600480036395213\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\
\ \"acc_stderr\": 0.026003301117885142,\n \"acc_norm\": 0.7009646302250804,\n\
\ \"acc_norm_stderr\": 0.026003301117885142\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7283950617283951,\n \"acc_stderr\": 0.024748624490537368,\n\
\ \"acc_norm\": 0.7283950617283951,\n \"acc_norm_stderr\": 0.024748624490537368\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46740547588005216,\n\
\ \"acc_stderr\": 0.012743072942653349,\n \"acc_norm\": 0.46740547588005216,\n\
\ \"acc_norm_stderr\": 0.012743072942653349\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6580882352941176,\n \"acc_stderr\": 0.028814722422254187,\n\
\ \"acc_norm\": 0.6580882352941176,\n \"acc_norm_stderr\": 0.028814722422254187\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6699346405228758,\n \"acc_stderr\": 0.019023726160724553,\n \
\ \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.019023726160724553\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\
\ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\
\ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291293,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291293\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.03379976689896308,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.03379976689896308\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685517,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685517\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5532435740514076,\n\
\ \"mc1_stderr\": 0.017403977522557144,\n \"mc2\": 0.7015765443534774,\n\
\ \"mc2_stderr\": 0.014919144015455953\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8034727703235991,\n \"acc_stderr\": 0.011168120593569558\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6611068991660348,\n \
\ \"acc_stderr\": 0.013037955768562513\n }\n}\n```"
repo_url: https://huggingface.co/eldogbbhed/Peagle-9b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|arc:challenge|25_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|gsm8k|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hellaswag|10_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-11T04-56-46.031904.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-11T04-56-46.031904.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- '**/details_harness|winogrande|5_2024-03-11T04-56-46.031904.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-11T04-56-46.031904.parquet'
- config_name: results
data_files:
- split: 2024_03_11T04_56_46.031904
path:
- results_2024-03-11T04-56-46.031904.parquet
- split: latest
path:
- results_2024-03-11T04-56-46.031904.parquet
---
# Dataset Card for Evaluation run of eldogbbhed/Peagle-9b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [eldogbbhed/Peagle-9b](https://huggingface.co/eldogbbhed/Peagle-9b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_eldogbbhed__Peagle-9b",
"harness_winogrande_5",
	split="latest")
```
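The timestamped split names listed in the YAML configuration above appear to be derived from the run timestamp by replacing `-` and `:` with `_` (this helper is an inference from the split names shown, not an official API):

```python
def run_timestamp_to_split(ts: str) -> str:
    """Convert a run timestamp such as '2024-03-11T04:56:46.031904'
    into the corresponding split name used in this dataset's configs."""
    return ts.replace("-", "_").replace(":", "_")

# e.g. passing the run timestamp of this card yields the split name
# '2024_03_11T04_56_46.031904' seen in the config entries above.
split_name = run_timestamp_to_split("2024-03-11T04:56:46.031904")
```

This can be handy for pinning a specific run instead of relying on the moving "latest" split.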
## Latest results
These are the [latest results from run 2024-03-11T04:56:46.031904](https://huggingface.co/datasets/open-llm-leaderboard/details_eldogbbhed__Peagle-9b/blob/main/results_2024-03-11T04-56-46.031904.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6480227913815262,
"acc_stderr": 0.032309008293723906,
"acc_norm": 0.6486844835270161,
"acc_norm_stderr": 0.03296580206090119,
"mc1": 0.5532435740514076,
"mc1_stderr": 0.017403977522557144,
"mc2": 0.7015765443534774,
"mc2_stderr": 0.014919144015455953
},
"harness|arc:challenge|25": {
"acc": 0.6843003412969283,
"acc_stderr": 0.01358257109581529,
"acc_norm": 0.7150170648464164,
"acc_norm_stderr": 0.013191348179838795
},
"harness|hellaswag|10": {
"acc": 0.6965743875721968,
"acc_stderr": 0.004587978625582481,
"acc_norm": 0.8734315873332006,
"acc_norm_stderr": 0.003318093579702919
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.028049186315695255,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.028049186315695255
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.032400380867927465,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.032400380867927465
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482758,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482758
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3915343915343915,
"acc_stderr": 0.025138091388851102,
"acc_norm": 0.3915343915343915,
"acc_norm_stderr": 0.025138091388851102
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.023025899617188716,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.023025899617188716
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.0315841532404771,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.0315841532404771
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.02886977846026705,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.02886977846026705
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328972,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328972
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6692307692307692,
"acc_stderr": 0.023854795680971118,
"acc_norm": 0.6692307692307692,
"acc_norm_stderr": 0.023854795680971118
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524572,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524572
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6428571428571429,
"acc_stderr": 0.031124619309328177,
"acc_norm": 0.6428571428571429,
"acc_norm_stderr": 0.031124619309328177
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3973509933774834,
"acc_stderr": 0.03995524007681681,
"acc_norm": 0.3973509933774834,
"acc_norm_stderr": 0.03995524007681681
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.015990154885073368,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.015990154885073368
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8284313725490197,
"acc_stderr": 0.026460569561240644,
"acc_norm": 0.8284313725490197,
"acc_norm_stderr": 0.026460569561240644
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7557251908396947,
"acc_stderr": 0.037683359597287434,
"acc_norm": 0.7557251908396947,
"acc_norm_stderr": 0.037683359597287434
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.03941897526516302,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.03941897526516302
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8084291187739464,
"acc_stderr": 0.014072859310451949,
"acc_norm": 0.8084291187739464,
"acc_norm_stderr": 0.014072859310451949
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.024027745155265023,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.024027745155265023
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4659217877094972,
"acc_stderr": 0.016683615837486867,
"acc_norm": 0.4659217877094972,
"acc_norm_stderr": 0.016683615837486867
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7091503267973857,
"acc_stderr": 0.02600480036395213,
"acc_norm": 0.7091503267973857,
"acc_norm_stderr": 0.02600480036395213
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.026003301117885142,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.026003301117885142
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7283950617283951,
"acc_stderr": 0.024748624490537368,
"acc_norm": 0.7283950617283951,
"acc_norm_stderr": 0.024748624490537368
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46740547588005216,
"acc_stderr": 0.012743072942653349,
"acc_norm": 0.46740547588005216,
"acc_norm_stderr": 0.012743072942653349
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6580882352941176,
"acc_stderr": 0.028814722422254187,
"acc_norm": 0.6580882352941176,
"acc_norm_stderr": 0.028814722422254187
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6699346405228758,
"acc_stderr": 0.019023726160724553,
"acc_norm": 0.6699346405228758,
"acc_norm_stderr": 0.019023726160724553
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.028795185574291293,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.028795185574291293
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.03379976689896308,
"acc_norm": 0.87,
"acc_norm_stderr": 0.03379976689896308
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685517,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685517
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5532435740514076,
"mc1_stderr": 0.017403977522557144,
"mc2": 0.7015765443534774,
"mc2_stderr": 0.014919144015455953
},
"harness|winogrande|5": {
"acc": 0.8034727703235991,
"acc_stderr": 0.011168120593569558
},
"harness|gsm8k|5": {
"acc": 0.6611068991660348,
"acc_stderr": 0.013037955768562513
}
}
```
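As an illustration of how the per-task metrics above can be post-processed, the sketch below ranks a few of the MMLU subtasks by `acc_norm`. The values are copied from the JSON above; the subset and helper code are just for illustration, not part of the dataset itself:

```python
# A small excerpt of the per-task results shown above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc_norm": 0.32},
    "harness|hendrycksTest-world_religions|5": {"acc_norm": 0.8362573099415205},
    "harness|hendrycksTest-high_school_government_and_politics|5": {"acc_norm": 0.9067357512953368},
}

# Rank subtasks by normalized accuracy, best first.
ranked = sorted(results.items(), key=lambda kv: kv[1]["acc_norm"], reverse=True)
for name, metrics in ranked:
    task = name.split("|")[1]  # e.g. "hendrycksTest-world_religions"
    print(f"{task}: {metrics['acc_norm']:.3f}")
```

The same pattern works on the full `results` configuration once loaded with `load_dataset`, e.g. to spot the strongest and weakest subject areas for this model.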
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
open-llm-leaderboard/details_LewisDeBenoisIV__BillyTheKid1803
---
pretty_name: Evaluation run of LewisDeBenoisIV/BillyTheKid1803
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [LewisDeBenoisIV/BillyTheKid1803](https://huggingface.co/LewisDeBenoisIV/BillyTheKid1803)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_LewisDeBenoisIV__BillyTheKid1803\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-21T17:30:27.301289](https://huggingface.co/datasets/open-llm-leaderboard/details_LewisDeBenoisIV__BillyTheKid1803/blob/main/results_2024-03-21T17-30-27.301289.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\"\
\ split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6557693812415362,\n\
\ \"acc_stderr\": 0.03195209999760263,\n \"acc_norm\": 0.6557197300011551,\n\
\ \"acc_norm_stderr\": 0.032610115600294025,\n \"mc1\": 0.5716034271725826,\n\
\ \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7215911155171624,\n\
\ \"mc2_stderr\": 0.014571763260866224\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6834470989761092,\n \"acc_stderr\": 0.013592431519068079,\n\
\ \"acc_norm\": 0.7184300341296929,\n \"acc_norm_stderr\": 0.013143376735009022\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7043417645887273,\n\
\ \"acc_stderr\": 0.004554054537692003,\n \"acc_norm\": 0.8809002190798646,\n\
\ \"acc_norm_stderr\": 0.0032324391398815527\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6666666666666666,\n\
\ \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.6666666666666666,\n\
\ \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7986111111111112,\n\
\ \"acc_stderr\": 0.03353647469713839,\n \"acc_norm\": 0.7986111111111112,\n\
\ \"acc_norm_stderr\": 0.03353647469713839\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"\
acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n\
\ \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n\
\ \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n\
\ \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894443,\n \"\
acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894443\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356853,\n \"\
acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356853\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"\
acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328973,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328973\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n\
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028593,\n \
\ \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028593\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886793,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886793\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8568807339449541,\n \"acc_stderr\": 0.01501446249716859,\n \"\
acc_norm\": 0.8568807339449541,\n \"acc_norm_stderr\": 0.01501446249716859\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5185185185185185,\n \"acc_stderr\": 0.034076320938540516,\n \"\
acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.034076320938540516\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8529411764705882,\n \"acc_stderr\": 0.024857478080250447,\n \"\
acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.024857478080250447\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233494,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233494\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.03498149385462472,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.03498149385462472\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8237547892720306,\n\
\ \"acc_stderr\": 0.013625556907993462,\n \"acc_norm\": 0.8237547892720306,\n\
\ \"acc_norm_stderr\": 0.013625556907993462\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.023445826276545546,\n\
\ \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.023445826276545546\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.45027932960893857,\n\
\ \"acc_stderr\": 0.016639615236845807,\n \"acc_norm\": 0.45027932960893857,\n\
\ \"acc_norm_stderr\": 0.016639615236845807\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.025261691219729484,\n\
\ \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.025261691219729484\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\
\ \"acc_stderr\": 0.02567025924218893,\n \"acc_norm\": 0.7138263665594855,\n\
\ \"acc_norm_stderr\": 0.02567025924218893\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n \
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\"\
: 0.5,\n \"acc_stderr\": 0.029827499313594685,\n \"acc_norm\": 0.5,\n\
\ \"acc_norm_stderr\": 0.029827499313594685\n },\n \"harness|hendrycksTest-professional_law|5\"\
: {\n \"acc\": 0.4758800521512386,\n \"acc_stderr\": 0.012755368722863935,\n\
\ \"acc_norm\": 0.4758800521512386,\n \"acc_norm_stderr\": 0.012755368722863935\n\
\ },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\"\
: 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \"acc_norm\": 0.6875,\n\
\ \"acc_norm_stderr\": 0.02815637344037142\n },\n \"harness|hendrycksTest-professional_psychology|5\"\
: {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706207,\n\
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706207\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142777,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142777\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5716034271725826,\n\
\ \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7215911155171624,\n\
\ \"mc2_stderr\": 0.014571763260866224\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8232044198895028,\n \"acc_stderr\": 0.010721923287918742\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7028051554207733,\n \
\ \"acc_stderr\": 0.012588685966624187\n }\n}\n```"
repo_url: https://huggingface.co/LewisDeBenoisIV/BillyTheKid1803
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|arc:challenge|25_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|gsm8k|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hellaswag|10_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T17-30-27.301289.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T17-30-27.301289.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- '**/details_harness|winogrande|5_2024-03-21T17-30-27.301289.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-21T17-30-27.301289.parquet'
- config_name: results
data_files:
- split: 2024_03_21T17_30_27.301289
path:
- results_2024-03-21T17-30-27.301289.parquet
- split: latest
path:
- results_2024-03-21T17-30-27.301289.parquet
---
# Dataset Card for Evaluation run of LewisDeBenoisIV/BillyTheKid1803
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [LewisDeBenoisIV/BillyTheKid1803](https://huggingface.co/LewisDeBenoisIV/BillyTheKid1803) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_LewisDeBenoisIV__BillyTheKid1803",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-21T17:30:27.301289](https://huggingface.co/datasets/open-llm-leaderboard/details_LewisDeBenoisIV__BillyTheKid1803/blob/main/results_2024-03-21T17-30-27.301289.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each task in the timestamped and "latest" splits of its configuration):
```python
{
"all": {
"acc": 0.6557693812415362,
"acc_stderr": 0.03195209999760263,
"acc_norm": 0.6557197300011551,
"acc_norm_stderr": 0.032610115600294025,
"mc1": 0.5716034271725826,
"mc1_stderr": 0.017323088597314747,
"mc2": 0.7215911155171624,
"mc2_stderr": 0.014571763260866224
},
"harness|arc:challenge|25": {
"acc": 0.6834470989761092,
"acc_stderr": 0.013592431519068079,
"acc_norm": 0.7184300341296929,
"acc_norm_stderr": 0.013143376735009022
},
"harness|hellaswag|10": {
"acc": 0.7043417645887273,
"acc_stderr": 0.004554054537692003,
"acc_norm": 0.8809002190798646,
"acc_norm_stderr": 0.0032324391398815527
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569525,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569525
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7986111111111112,
"acc_stderr": 0.03353647469713839,
"acc_norm": 0.7986111111111112,
"acc_norm_stderr": 0.03353647469713839
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.0356760379963917,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.0356760379963917
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.02533120243894443,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.02533120243894443
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402534,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402534
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028593,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886793,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886793
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8568807339449541,
"acc_stderr": 0.01501446249716859,
"acc_norm": 0.8568807339449541,
"acc_norm_stderr": 0.01501446249716859
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.034076320938540516,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.034076320938540516
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.024857478080250447,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.024857478080250447
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233494,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233494
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.03498149385462472,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.03498149385462472
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993462,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993462
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.023445826276545546,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.023445826276545546
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.45027932960893857,
"acc_stderr": 0.016639615236845807,
"acc_norm": 0.45027932960893857,
"acc_norm_stderr": 0.016639615236845807
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.025261691219729484,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.025261691219729484
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.02567025924218893,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.02567025924218893
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5,
"acc_stderr": 0.029827499313594685,
"acc_norm": 0.5,
"acc_norm_stderr": 0.029827499313594685
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4758800521512386,
"acc_stderr": 0.012755368722863935,
"acc_norm": 0.4758800521512386,
"acc_norm_stderr": 0.012755368722863935
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.018824219512706207,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.018824219512706207
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142777,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142777
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616914,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5716034271725826,
"mc1_stderr": 0.017323088597314747,
"mc2": 0.7215911155171624,
"mc2_stderr": 0.014571763260866224
},
"harness|winogrande|5": {
"acc": 0.8232044198895028,
"acc_stderr": 0.010721923287918742
},
"harness|gsm8k|5": {
"acc": 0.7028051554207733,
"acc_stderr": 0.012588685966624187
}
}
```
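To illustrate how these per-task metrics combine into a single headline number, the sketch below averages the six benchmark scores from the results above (ARC-Challenge and HellaSwag via `acc_norm`, the aggregate `acc` for the MMLU-style tasks, TruthfulQA via `mc2`, Winogrande and GSM8K via `acc`). This is an approximation: the exact aggregation used by the Open LLM Leaderboard may differ (in particular, the `"all"` block's `acc` averages over every accuracy-reporting task, not only the MMLU subtasks).

```python
# Headline metrics copied from the results JSON above.
scores = {
    "arc_challenge": 0.7184300341296929,  # acc_norm
    "hellaswag": 0.8809002190798646,      # acc_norm
    "mmlu": 0.6557693812415362,           # aggregate "all" acc (approximation)
    "truthfulqa_mc2": 0.7215911155171624, # mc2
    "winogrande": 0.8232044198895028,     # acc
    "gsm8k": 0.7028051554207733,          # acc
}

# Simple mean of the six benchmark scores.
average = sum(scores.values()) / len(scores)
print(f"leaderboard-style average: {average:.4f}")
```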
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CyberHarem/mukago_demonslayer | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mukago (Kimetsu no Yaiba)
This is the dataset of mukago (Kimetsu no Yaiba), containing 29 images and their tags.
Images were crawled from multiple sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
|
frankminors123/chinese-shepherd-critic-dataset | ---
license: apache-2.0
task_categories:
- text-generation
- question-answering
language:
- zh
size_categories:
- 1M<n<10M
---
The dataset comes from the work introduced in "Shepherd: A Critic for Language Model Generation". We translated it into Simplified Chinese using Google Translate and performed appropriate manual checks. We hope to do more valuable work in the Chinese NLP field, and we also hope that capable researchers can further verify the sentences against Chinese grammar or rewrite them where needed.
huggingartists/tom-waits | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/tom-waits"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.818237 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/505d2d5d1d43304dca446fd2e788a0f8.750x750x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/tom-waits">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Waits</div>
<a href="https://genius.com/artists/tom-waits">
<div style="text-align: center; font-size: 14px;">@tom-waits</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/tom-waits).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/tom-waits")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|681| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/tom-waits")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
CyberHarem/ark_royal_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ark_royal (Kantai Collection)
This is the dataset of ark_royal (Kantai Collection), containing 396 images and their tags.
The core tags of this character are `red_hair, short_hair, bob_cut, bangs, blue_eyes, hairband, blunt_bangs, breasts, ribbon, red_ribbon`, which are pruned in this dataset.
Images were crawled from several sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 396 | 390.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ark_royal_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 396 | 255.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ark_royal_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 874 | 516.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ark_royal_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 396 | 360.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ark_royal_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 874 | 684.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ark_royal_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ark_royal_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, bow_(weapon), cleavage_cutout, fingerless_gloves, long_sleeves, solo, tiara, white_pantyhose, white_shorts, corset, holding_weapon, quiver, small_breasts, arrow_(projectile), brown_gloves, overskirt, rigging, looking_at_viewer, red_bow, jacket, red_flower, cowboy_shot, machinery |
| 1 | 5 |  |  |  |  |  | 1girl, belt, bow_(weapon), cleavage_cutout, corset, cowboy_shot, long_sleeves, looking_at_viewer, quiver, rigging, solo, tiara, white_pantyhose, white_shorts, arrow_(projectile), fingerless_gloves, overskirt, red_rose, small_breasts, machinery, holding_weapon, outdoors, sky |
| 2 | 6 |  |  |  |  |  | 1girl, cleavage_cutout, corset, long_sleeves, looking_at_viewer, solo, tiara, white_pantyhose, white_shorts, belt, fingerless_gloves, red_rose, simple_background, white_background, brown_gloves, smile, character_name, cowboy_shot, jacket, medium_breasts |
| 3 | 16 |  |  |  |  |  | 1girl, cleavage_cutout, fingerless_gloves, long_sleeves, looking_at_viewer, solo, tiara, upper_body, brown_gloves, corset, red_rose, simple_background, white_background, smile, small_breasts |
| 4 | 7 |  |  |  |  |  | 1girl, cleavage_cutout, long_sleeves, looking_at_viewer, red_rose, solo, tiara, upper_body, simple_background, corset, medium_breasts, smile, twitter_username, white_background |
| 5 | 8 |  |  |  |  |  | 1girl, detached_collar, looking_at_viewer, playboy_bunny, rabbit_ears, simple_background, solo, cowboy_shot, fake_animal_ears, white_background, white_leotard, wrist_cuffs, medium_breasts, strapless_leotard, black_pantyhose, dated, red_bowtie, tiara, hand_on_hip, rabbit_tail, red_rose, twitter_username, white_pantyhose |
| 6 | 6 |  |  |  |  |  | blush, tiara, hetero, mosaic_censoring, penis, solo_focus, 1boy, 1girl, all_fours, ass, sweat, anus, bed_sheet, completely_nude, doggystyle, fellatio, looking_at_viewer, pov, pussy, sex_from_behind, vaginal |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bow_(weapon) | cleavage_cutout | fingerless_gloves | long_sleeves | solo | tiara | white_pantyhose | white_shorts | corset | holding_weapon | quiver | small_breasts | arrow_(projectile) | brown_gloves | overskirt | rigging | looking_at_viewer | red_bow | jacket | red_flower | cowboy_shot | machinery | belt | red_rose | outdoors | sky | simple_background | white_background | smile | character_name | medium_breasts | upper_body | twitter_username | detached_collar | playboy_bunny | rabbit_ears | fake_animal_ears | white_leotard | wrist_cuffs | strapless_leotard | black_pantyhose | dated | red_bowtie | hand_on_hip | rabbit_tail | blush | hetero | mosaic_censoring | penis | solo_focus | 1boy | all_fours | ass | sweat | anus | bed_sheet | completely_nude | doggystyle | fellatio | pov | pussy | sex_from_behind | vaginal |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:------------------|:--------------------|:---------------|:-------|:--------|:------------------|:---------------|:---------|:-----------------|:---------|:----------------|:---------------------|:---------------|:------------|:----------|:--------------------|:----------|:---------|:-------------|:--------------|:------------|:-------|:-----------|:-----------|:------|:--------------------|:-------------------|:--------|:-----------------|:-----------------|:-------------|:-------------------|:------------------|:----------------|:--------------|:-------------------|:----------------|:--------------|:--------------------|:------------------|:--------|:-------------|:--------------|:--------------|:--------|:---------|:-------------------|:--------|:-------------|:-------|:------------|:------|:--------|:-------|:------------|:------------------|:-------------|:-----------|:------|:--------|:------------------|:----------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | | | | | X | | | X | | X | | X | | X | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 16 |  |  |  |  |  | X | | X | X | X | X | X | | | X | | | X | | X | | | X | | | | | | | X | | | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | X | | X | X | X | | | X | | | | | | | | X | | | | | | | X | | | X | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 |  |  |  |  |  | X | | | | | X | X | X | | | | | | | | | | X | | | | X | | | X | | | X | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
zolak/twitter_dataset_80_1713164081 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 463038
num_examples: 1183
download_size: 240984
dataset_size: 463038
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sushvij/generativeaisample2 | ---
license: openrail
task_categories:
- zero-shot-classification
language:
- en
--- |
kerinin/hackernews-stories | ---
dataset_info:
features:
- name: labels
dtype: float32
- name: Title
dtype: string
- name: Text
dtype: string
splits:
- name: train
num_bytes: 2855338468.84556
num_examples: 313317
- name: validation
num_bytes: 713841452.15444
num_examples: 78330
download_size: 2306170034
dataset_size: 3569179921.0
---
# Dataset Card for "hackernews-stories"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ShubhamChoksi/IMDB_Movies | ---
license: mit
---
|
sanjay920/1million-gpt-4-llama | ---
language:
- en
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: 'null'
- name: text
dtype: string
splits:
- name: train
num_bytes: 3630771342
num_examples: 994896
download_size: 1980759415
dataset_size: 3630771342
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
[Open-Orca/1million-gpt-4](https://huggingface.co/datasets/Open-Orca/1million-gpt-4) converted to ShareGPT and Llama chat format.
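For reference, a minimal sketch of what a ShareGPT-to-Llama-chat conversion might look like. The exact system-prompt handling and template used to build this dataset's `text` column are assumptions; inspect a few rows for the authoritative format.

```python
# Sketch only: one plausible ShareGPT -> Llama-2 chat rendering.
# The delimiters below follow the common Llama-2 chat template; whether this
# dataset used exactly this template is an assumption.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def sharegpt_to_llama(conversations, system=None):
    """conversations: list of {"from": "human"|"gpt", "value": str} turns."""
    text = ""
    for i, turn in enumerate(conversations):
        if turn["from"] == "human":
            prompt = turn["value"]
            if i == 0 and system:
                # Fold an optional system prompt into the first user turn.
                prompt = B_SYS + system + E_SYS + prompt
            text += f"<s>{B_INST} {prompt} {E_INST}"
        else:  # "gpt" turn
            text += f" {turn['value']} </s>"
    return text

convo = [{"from": "human", "value": "Hi"}, {"from": "gpt", "value": "Hello!"}]
print(sharegpt_to_llama(convo))  # <s>[INST] Hi [/INST] Hello! </s>
```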
|
FINNUMBER/FINCH_TRAIN_ALL_900_per100_NEWFORMAT | ---
dataset_info:
features:
- name: task
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3755138
num_examples: 900
download_size: 2064021
dataset_size: 3755138
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jtatman/medical_instruction_format | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: system
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 115532598
num_examples: 47122
download_size: 53812365
dataset_size: 115532598
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "medical_instruction_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
one-sec-cv12/chunk_233 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 21135266352.875
num_examples: 220049
download_size: 19423908394
dataset_size: 21135266352.875
---
# Dataset Card for "chunk_233"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
enoahjr/twitter_dataset_1713217149 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 419998
num_examples: 1186
download_size: 225592
dataset_size: 419998
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tsuinzues/spiderps4 | ---
license: openrail
---
|
Jean-Baptiste/financial_news_sentiment | ---
language:
- en
dataset_info:
splits:
- name: test
num_examples: 267
- name: train
num_examples: 1512
annotations_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: financial_news_sentiment
size_categories:
- 1K<n<10K
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
---
# Dataset Card for "financial_news_sentiment"
Manually validated sentiment for ~2000 Canadian news articles.
The dataset also includes a `topic` column which contains one of the following values:
* acquisition
* other
* quaterly financial release
* appointment to new position
* dividend
* corporate update
* drillings results
* conference
* share repurchase program
* grant of stocks
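For illustration, a minimal sketch of how these topics could be assigned with a zero-shot classifier. The candidate labels are copied verbatim from the list above; the model actually used by the authors is unknown, so any model choice here is an assumption.

```python
# Candidate topics copied verbatim from this card.
CANDIDATE_TOPICS = [
    "acquisition", "other", "quaterly financial release",
    "appointment to new position", "dividend", "corporate update",
    "drillings results", "conference", "share repurchase program",
    "grant of stocks",
]

def classify_topic(text, classifier):
    """Assign the top-scoring candidate topic to `text`.

    `classifier` is any callable with the interface of a Hugging Face
    zero-shot-classification pipeline: called as classifier(text, labels),
    it returns a dict whose "labels" list is sorted by descending score.
    """
    result = classifier(text, CANDIDATE_TOPICS)
    return result["labels"][0]

# With a real model (requires `pip install transformers`; the model name is
# an assumption, not necessarily what the dataset authors used):
#   from transformers import pipeline
#   clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
#   classify_topic("Acme Corp. declared a quarterly dividend of $0.25.", clf)
```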
The topic column was generated automatically using a zero-shot classification model and **was not** reviewed manually. |
mahdighaemi/IBIT_TINY | ---
license: apache-2.0
---
|
longhoang06/text-recognition | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6858787617.0
num_examples: 100000
download_size: 6858941356
dataset_size: 6858787617.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "text-recognition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/med_alpaca_standardized_cluster_69 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 72452566
num_examples: 7251
download_size: 21740260
dataset_size: 72452566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_69"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lansinuote/simple_facenet | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 702629475.6255275
num_examples: 17334
- name: test
num_bytes: 8106951.374472453
num_examples: 200
download_size: 710565269
dataset_size: 710736427.0
---
# Dataset Card for "simple_facenet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OmegaGamage/mwp-instruct-tune-dataset | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: mwp
dtype: string
splits:
- name: train
num_bytes: 1986871
num_examples: 8278
download_size: 807667
dataset_size: 1986871
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mwp-instruct-tune-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-samsum-0c672345-10275362 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: google/pegasus-xsum
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: train
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-xsum
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ikadebi](https://huggingface.co/ikadebi) for evaluating this model. |
damlab/HIV_PI | ---
license: mit
---
# Dataset Description
## Dataset Summary
This dataset was derived from the Stanford HIV Genotype-Phenotype database and contains 1,733 HIV protease sequences. Approximately half of the sequences are resistant to at least one antiretroviral therapeutic (ART).
Supported Tasks and Leaderboards: None
Languages: English
## Dataset Structure
### Data Instances
The sequence column contains the amino acid sequence of the HIV protease protein for each record. The ID field indicates the GenBank reference ID for future cross-referencing. There are 1,733 protease sequences in total.
Data Fields: ID, sequence, fold, FPV, IDV, NFV, SQV
Data Splits: None
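As a quick illustration of working with these fields, here is a minimal sketch that flags whether a record is resistant to at least one of the four protease inhibitors. The 0/1 label encoding and the example values below are assumptions for illustration only; verify them against the actual data.

```python
# Field names follow the "Data Fields" list above; the 0/1 resistance
# encoding and the example record are assumptions, not taken from the data.
PI_DRUGS = ("FPV", "IDV", "NFV", "SQV")

def any_resistance(record):
    """True if the sequence is labeled resistant to at least one PI drug."""
    return any(record[drug] == 1 for drug in PI_DRUGS)

example = {
    "ID": "AB000001",           # hypothetical GenBank ID
    "sequence": "PQITLWQRPLV",  # truncated protease sequence, illustrative
    "fold": 0,
    "FPV": 0, "IDV": 0, "NFV": 0, "SQV": 1,
}
print(any_resistance(example))  # prints True
```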
## Dataset Creation
Curation Rationale: This dataset was curated to train a model (HIV-BERT-PI) designed to predict whether an HIV protease sequence would result in resistance to certain antiretroviral (ART) drugs.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 12/21/2021.
## Considerations for Using the Data
Social Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV.
Protease inhibitors are a class of drugs to which HIV is known to develop resistance via mutations.
Thus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset offers a significant resource for computational analysis of protease resistance mutations.
Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of subtype B sequences from North America and Europe with only minor contributions of Subtype C, A, and D.
Currently, no effort has been made to balance performance across these classes.
As such, one should consider refinement with additional sequences to perform well on non-B sequences.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA
|
mbzuai-oryx/Clima500 | ---
license: cc-by-nc-sa-4.0
---
# Clima500 Dataset
GitHub Repo : https://github.com/mbzuai-oryx/ClimateGPT
Please cite our work :
```
@inproceedings{mullappilly2023arabic,
title={Arabic Mini-ClimateGPT: A Climate Change and Sustainability Tailored Arabic LLM},
author={Mullappilly, Sahal and Shaker, Abdelrahman and Thawakar, Omkar and Cholakkal, Hisham and Anwer, Rao and Khan, Salman and Khan, Fahad},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
pages={14126--14136},
year={2023}
}
``` |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-169e67-1524755111 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: mathemakitten/opt-125m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: mathemakitten/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
Rocinante/bbq_cleaned | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: data_source
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 50292
num_examples: 150
download_size: 28559
dataset_size: 50292
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bbq_cleaned"
The source data was obtained from https://huggingface.co/datasets/lighteval/bbq_helm/ and then manually selected.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
virtualvoidsteve/code_correction_dataset | ---
dataset_info:
features:
- name: corrupted
dtype: string
- name: corrected
dtype: string
splits:
- name: train
num_bytes: 95006
num_examples: 114
download_size: 30646
dataset_size: 95006
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bigbio/anat_em |
---
language:
- en
bigbio_language:
- English
license: cc-by-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_3p0
pretty_name: AnatEM
homepage: http://nactem.ac.uk/anatomytagger/#AnatEM
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for AnatEM
## Dataset Description
- **Homepage:** http://nactem.ac.uk/anatomytagger/#AnatEM
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The extended Anatomical Entity Mention corpus (AnatEM) consists of 1212 documents (approx. 250,000 words) manually annotated to identify over 13,000 mentions of anatomical entities. Each annotation is assigned one of 12 granularity-based types such as Cellular component, Tissue and Organ, defined with reference to the Common Anatomy Reference Ontology.
## Citation Information
```
@article{pyysalo2014anatomical,
title={Anatomical entity mention recognition at literature scale},
author={Pyysalo, Sampo and Ananiadou, Sophia},
journal={Bioinformatics},
volume={30},
number={6},
pages={868--875},
year={2014},
publisher={Oxford University Press}
}
```
|
aimusicgen/pop909_clean_data | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 476572138
num_examples: 153033
download_size: 41480213
dataset_size: 476572138
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
OliBomby/ORS13402 | ---
license: mit
---
|
polinaeterna/smol | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 28
num_examples: 2
- name: train
num_bytes: 44
num_examples: 2
download_size: 1776
dataset_size: 72
---
# Dataset Card for "smol"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bhabh2001/Llama2_test | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9058
num_examples: 60
download_size: 4157
dataset_size: 9058
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
affahrizain/mixtral-Bactrian-indo | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 53253123
num_examples: 67017
download_size: 28397287
dataset_size: 53253123
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chathuranga-jayanath/selfapr-manipulation-bug-error-context-method-10000 | ---
dataset_info:
features:
- name: fix
dtype: string
- name: ctx
dtype: string
splits:
- name: train
num_bytes: 5017924
num_examples: 8000
- name: validation
num_bytes: 614517
num_examples: 1000
- name: test
num_bytes: 608165
num_examples: 1000
download_size: 2850672
dataset_size: 6240606
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
pawkanarek/n1007_2024-03-23-10-44-15 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 2580356.827209533
num_examples: 755
- name: test
num_bytes: 861258.1727904667
num_examples: 252
download_size: 1867648
dataset_size: 3441615.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Ujan/github_classification | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: names
dtype: string
- name: readmes
dtype: string
- name: topics
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 51303107.05622984
num_examples: 10414
- name: validation
num_bytes: 6414119.971885082
num_examples: 1302
- name: test
num_bytes: 6414119.971885082
num_examples: 1302
download_size: 29047991
dataset_size: 64131347.00000001
---
# Dataset Card for "github_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Worldwars/caka1 | ---
license: artistic-2.0
---
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_64 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1179587260.0
num_examples: 231655
download_size: 1197039919
dataset_size: 1179587260.0
---
# Dataset Card for "chunk_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zaanind/llamanmt | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 17230781
num_examples: 80684
download_size: 4778305
dataset_size: 17230781
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llamanmt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ekazuki/text_to_french_parliament_group_written_questions | ---
dataset_info:
features:
- name: text
dtype: string
- name: group
dtype: string
splits:
- name: train
num_bytes: 93969142.4
num_examples: 85328
- name: test
num_bytes: 23492285.6
num_examples: 21332
download_size: 65864902
dataset_size: 117461428.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
jxie/celeba-hq | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': female
'1': male
splits:
- name: train
num_bytes: 2581721367.0
num_examples: 28000
- name: validation
num_bytes: 181391512.0
num_examples: 2000
download_size: 2762725456
dataset_size: 2763112879.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
AdapterOcean/oasst_top1_standardized_cluster_2_std | ---
dataset_info:
features:
- name: message_type
dtype: string
- name: message
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4763657
num_examples: 6772
download_size: 2440426
dataset_size: 4763657
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oasst_top1_standardized_cluster_2_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/tachibana_hibiki_senkizesshousymphogear | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Tachibana Hibiki
This is the dataset of Tachibana Hibiki, containing 300 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 300 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 691 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 300 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 300 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 300 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 300 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 300 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 691 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 691 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 691 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
cpans/idcard_name | ---
license: apache-2.0
datasets:
- cpans/idcard_name
metrics:
- accuracy
pipeline_tag: image-to-text
tags:
- code
---
ID card OCR recognition, document extraction and rectification, CAPTCHA automation. <a href="https://github.com/CCCpan/Gebaini"> Get the model </a>
To obtain free models for identity card (ID) OCR (Optical Character Recognition) recognition, you can explore various open-source platforms and repositories such as GitHub, Model Zoo, or specific frameworks' model hubs like TensorFlow Hub or PyTorch Hub. ID OCR recognition models are designed to extract text from identity cards, including personal details like name, ID number, date of birth, and other relevant information. These models are trained on diverse datasets to accurately recognize and extract text from various ID card formats and designs.
<a href="https://github.com/CCCpan/Gebaini"> Click here for free access </a>

|
Kaue123456/BrutusAndreLuizChapeu | ---
license: openrail
---
|
gofilipa/gender_congress_117-118 | ---
license: apache-2.0
---
|
Maciel/ShareGPT_Dialogue | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
### Dataset Summary
This dataset is derived from ShareGPT data, reorganized into multi-turn user-system dialogues. [Original data source](https://github.com/hiyouga/LLaMA-Efficient-Tuning/blob/main/data/sharegpt_zh_27k.json)
### Dataset Structure
The dataset contains two fields: dialogue and turn. dialogue is the multi-turn conversation between the user and the system, with type List(Dict); the list order is the conversation order, and within each round, human is the user's question and assistant is the system's reply. turn is the total number of dialogue rounds, with type Int.
### Data Instances
```
{
"dialogue":
[
{
"human": "编写一个用户故事,说明一个用户如何登录应用程序,然后在第二段中编写其用户验收测试。",
"assistant": "作为用户,我希望能够登录到应用程序,以便我可以访问我的个性化内容和设置。\n为确保登录功能正常工作,可以进行以下验收测试:\n1. 验证通过单击应用程序主页上的登录按钮可以访问登录页面。\n2. 确保登录表单接受有效的电子邮件地址和密码组合。\n3. 确认在成功登录后应用程序将用户重定向到其个性化主页。\n4. 测试应用程序在输入无效电子邮件地址或密码时显示错误消息。\n5. 通过单击其主页上的注销按钮验证用户能够注销应用程序。",
},
{
"human": "编写一篇用户故事,解释管理员如何管理将数据注入到应用程序中;第二段可以写用户验收测试。",
"assistant": "作为管理员,我希望能够管理数据导入到应用程序中,以便我能确保应用程序具有最新和准确的信息。\n为确保数据导入管理功能如预期工作,可以执行以下验收测试:\n1.验证管理员能够通过单击管理仪表板中相应的链接访问数据导入管理页面。\n2.确保管理员能够以支持的文件格式(如CSV、JSON)上传新的数据集。\n3.确认应用程序能够成功地解析和导入上传的文件中的数据。\n4.测试当上传不受支持的文件格式时,应用程序是否会显示错误消息。\n5.验证管理员能够在应用程序中查看导入的数据并进行任何必要的更新或更改。\n6.确认管理员所做的更改已保存并反映在应用程序的数据中。",
}
],
"turn": 2
}
```
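A minimal sketch of consuming one record in this format (the English sample text below is invented for brevity; real records are in Chinese):

```python
# One record: `dialogue` is a list of {"human", "assistant"} dicts in
# conversation order; `turn` is the total number of rounds.
record = {
    "dialogue": [
        {"human": "Write a user story for logging in.",
         "assistant": "As a user, I want to log in so that ..."},
        {"human": "Now write its acceptance tests.",
         "assistant": "1. Verify the login page is reachable. ..."},
    ],
    "turn": 2,
}

# `turn` always matches the length of `dialogue`.
assert record["turn"] == len(record["dialogue"])
for i, round_ in enumerate(record["dialogue"], start=1):
    print(f"[round {i}] human: {round_['human']}")
    print(f"[round {i}] assistant: {round_['assistant']}")
```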
|
AndyLiu0104/Soldering-Data-Tiny-1016-solder-pad | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 16637648.25
num_examples: 9606
download_size: 10646978
dataset_size: 16637648.25
---
# Dataset Card for "Soldering-Data-Tiny-1016-solder-pad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
IFSTalfredoswald/MBTI | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
vigneshgs7/seg_open_try10 | ---
dataset_info:
features:
- name: image
sequence:
sequence:
sequence: uint8
- name: mask
sequence:
sequence:
sequence: uint8
splits:
- name: train
num_bytes: 1878041176
num_examples: 11
download_size: 260771489
dataset_size: 1878041176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "seg_open_try10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hezarai/common-voice-13-fa | ---
task_categories:
- automatic-speech-recognition
language:
- fa
tags:
- hezar
pretty_name: CommonVoice 13 (Persian)
size_categories:
- 10K<n<100K
---
The Persian portion of the original CommonVoice 13 dataset at https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0
#### Load
```python
# Using HF Datasets
from datasets import load_dataset
dataset = load_dataset("hezarai/common-voice-13-fa", split="train")
# Using Hezar
from hezar.data import Dataset
dataset = Dataset.load("hezarai/common-voice-13-fa", split="train")
``` |
DynamicSuperbPrivate/ReverberationDetectionMediumRoom_VoxcelebRirsNoises | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 3088840873.0
num_examples: 24000
- name: validation
num_bytes: 671565921.0
num_examples: 5218
- name: test
num_bytes: 1254583639.0
num_examples: 9748
download_size: 5001518099
dataset_size: 5014990433.0
---
# Dataset Card for "ReverberationDetectionmediumroom_VoxcelebRirsNoises"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RahulRaman/counting-object-sd-dataset5-clean5 | ---
dataset_info:
features:
- name: input_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 47253801.0
num_examples: 496
download_size: 23715117
dataset_size: 47253801.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ibragim-bad/lambada_openai | ---
dataset_info:
features:
- name: text
dtype: string
- name: target
dtype: string
- name: prompt
dtype: string
splits:
- name: test
num_bytes: 3426549
num_examples: 5153
download_size: 2334974
dataset_size: 3426549
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
sbzl/Ss | ---
license: mit
language:
- en
--- |
LambdaTests/VQAv2_sample_validation_benchmarks_partition_1 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 63
num_examples: 2
download_size: 0
dataset_size: 63
---
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
erikaxenia/id_card | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 75251549.0
num_examples: 276
- name: valid
num_bytes: 7840082.0
num_examples: 38
- name: test
num_bytes: 4404357.0
num_examples: 50
download_size: 0
dataset_size: 87495988.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
# Dataset Card for "id_card"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HyperionHF/tom | ---
license: mit
dataset_info:
features:
- name: agent
dtype: string
- name: agent_preferences
sequence: string
- name: chosen_object
dtype: string
- name: true_chosen_object
dtype: string
- name: chosen_object_is_noisy
dtype: bool
- name: other_objects
sequence: string
- name: location_chosen_object
dtype: int64
- name: location_true_chosen_object
dtype: int64
- name: preference_idx
dtype: int64
- name: preference_idx_true
dtype: int64
- name: example_idx
dtype: int64
- name: example_text
dtype: string
- name: target_idx
dtype: int64
- name: logical_representation
struct:
- name: input
sequence: string
- name: target
dtype: string
- name: logical_representation_with_indirection
struct:
- name: input
sequence: string
- name: target
dtype: int64
splits:
- name: train
num_bytes: 5701970
num_examples: 13046
- name: dev
num_bytes: 576759
num_examples: 1320
- name: test_own_preference
num_bytes: 63618
num_examples: 144
- name: test_others_preference
num_bytes: 60138
num_examples: 144
- name: test_location
num_bytes: 170682
num_examples: 396
- name: test_distraction
num_bytes: 64125
num_examples: 150
download_size: 702278
dataset_size: 6637292
---
|
ssbuild/alpaca_prosocial-dialog | ---
license: apache-2.0
---
|
TDA58/autotrain-data-20mz-k0nd-nqy1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 687846
num_examples: 800
- name: validation
num_bytes: 160463
num_examples: 200
download_size: 452583
dataset_size: 848309
---
# Dataset Card for "autotrain-data-20mz-k0nd-nqy1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yn01/test_20240123_01 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 97985
num_examples: 614
download_size: 25234
dataset_size: 97985
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kgr123/quality_counter_5632_4_uniq | ---
dataset_info:
features:
- name: context
dtype: string
- name: word
dtype: string
- name: claim
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 555868842
num_examples: 20000
- name: validation
num_bytes: 222509234
num_examples: 8000
- name: test
num_bytes: 56238528
num_examples: 2300
download_size: 26504032
dataset_size: 834616604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
mlsa-iai-msu-lab/ru_sci_bench | ---
license: mit
dataset_info:
features:
- name: paper_id
dtype: int64
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: en
num_bytes: 170801402
num_examples: 182436
- name: ru
num_bytes: 312579538
num_examples: 194071
download_size: 246316956
dataset_size: 483380940
configs:
- config_name: default
data_files:
- split: en
path: data/en-*
- split: ru
path: data/ru-*
language:
- ru
- en
pretty_name: ruSciBench
size_categories:
- 100K<n<1M
---
# ruSciBench text data
Titles and abstracts of research papers in English and Russian for the ruSciBench benchmark.
[Habr post](https://habr.com/ru/articles/781032/)
[Benchmark code](https://github.com/mlsa-iai-msu-lab/ru-sci-bench)
[Google Colab example](https://colab.research.google.com/drive/1S04UOLRYuI2J5qM2QkoWqSbv6GO94YrP?usp=sharing)
### Authors
Benchmark developed by MLSA Lab of Institute for AI, MSU.
### Acknowledgement
The research is part of the project #23-Ш05-21 SES MSU "Development of mathematical methods of machine learning for processing large-volume textual scientific information". We would like to thank [eLibrary](https://elibrary.ru/) for the provided datasets.
### Contacts
Nikolai Gerasimenko (nikgerasimenko@gmail.com), Alexey Vatolin (vatolinalex@gmail.com) |
maple1/demo | ---
license: openrail
---
|
coelhobrbr/bolinha01 | ---
license: openrail
---
|
open-llm-leaderboard/details_ai-business__Luban-13B | ---
pretty_name: Evaluation run of ai-business/Luban-13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ai-business/Luban-13B](https://huggingface.co/ai-business/Luban-13B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ai-business__Luban-13B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T11:15:33.793306](https://huggingface.co/datasets/open-llm-leaderboard/details_ai-business__Luban-13B/blob/main/results_2023-09-17T11-15-33.793306.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.007340604026845637,\n\
\ \"em_stderr\": 0.0008741896875346207,\n \"f1\": 0.10464869966443034,\n\
\ \"f1_stderr\": 0.0019947106278579182,\n \"acc\": 0.431315608856773,\n\
\ \"acc_stderr\": 0.010029949190396351\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.007340604026845637,\n \"em_stderr\": 0.0008741896875346207,\n\
\ \"f1\": 0.10464869966443034,\n \"f1_stderr\": 0.0019947106278579182\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09704321455648218,\n \
\ \"acc_stderr\": 0.008153768274554716\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7655880031570639,\n \"acc_stderr\": 0.011906130106237985\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ai-business/Luban-13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T11_15_33.793306
path:
- '**/details_harness|drop|3_2023-09-17T11-15-33.793306.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T11-15-33.793306.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T11_15_33.793306
path:
- '**/details_harness|gsm8k|5_2023-09-17T11-15-33.793306.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T11-15-33.793306.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T11_15_33.793306
path:
- '**/details_harness|winogrande|5_2023-09-17T11-15-33.793306.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T11-15-33.793306.parquet'
- config_name: results
data_files:
- split: 2023_08_29T11_08_27.769283
path:
- results_2023-08-29T11:08:27.769283.parquet
- split: 2023_09_17T11_15_33.793306
path:
- results_2023-09-17T11-15-33.793306.parquet
- split: latest
path:
- results_2023-09-17T11-15-33.793306.parquet
---
# Dataset Card for Evaluation run of ai-business/Luban-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ai-business/Luban-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [ai-business/Luban-13B](https://huggingface.co/ai-business/Luban-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ai-business__Luban-13B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T11:15:33.793306](https://huggingface.co/datasets/open-llm-leaderboard/details_ai-business__Luban-13B/blob/main/results_2023-09-17T11-15-33.793306.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.007340604026845637,
"em_stderr": 0.0008741896875346207,
"f1": 0.10464869966443034,
"f1_stderr": 0.0019947106278579182,
"acc": 0.431315608856773,
"acc_stderr": 0.010029949190396351
},
"harness|drop|3": {
"em": 0.007340604026845637,
"em_stderr": 0.0008741896875346207,
"f1": 0.10464869966443034,
"f1_stderr": 0.0019947106278579182
},
"harness|gsm8k|5": {
"acc": 0.09704321455648218,
"acc_stderr": 0.008153768274554716
},
"harness|winogrande|5": {
"acc": 0.7655880031570639,
"acc_stderr": 0.011906130106237985
}
}
```
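The per-task keys in this dict follow a `harness|<task>|<num_fewshot>` pattern, so metrics can be pulled out programmatically; a sketch over a subset of the values shown above:

```python
# Metrics copied from the results block above (subset of fields).
results = {
    "all": {"acc": 0.431315608856773},
    "harness|drop|3": {"em": 0.007340604026845637, "f1": 0.10464869966443034},
    "harness|gsm8k|5": {"acc": 0.09704321455648218},
    "harness|winogrande|5": {"acc": 0.7655880031570639},
}

# Keys are "harness|<task>|<num_fewshot>"; "all" holds the aggregates.
per_task = {}
for key, metrics in results.items():
    if key == "all":
        continue
    _suite, task, fewshot = key.split("|")
    per_task[task] = (int(fewshot), metrics)

print(per_task["winogrande"])  # (5, {'acc': 0.7655880031570639})
```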
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
tipani/Shanghai-License-Plate-Auction | ---
language:
- "en"
pretty_name: "Shanghai License Plate Auction 2014-2021"
tags:
- "License Plate"
- "Auction"
- "Timeline"
license: "mit"
task_categories:
- "tabular-regression"
- "time-series-forecasting"
---
# Introduction
Second-by-second price updates from the last 60 seconds of the monthly license plate auction in Shanghai, covering 2014 to 2020 plus a few months of 2021. The per-second data is given as a differential relative to the start price. I managed to correctly predict and score a license plate in each of the three years I worked on the project (2018-2020), but it isn't easy: many other factors affect success on top of prediction accuracy.
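The differential encoding can be sketched as follows (the numbers below are invented, not taken from the actual files): each second stores price minus start price, so absolute prices come back by re-adding the start price.

```python
# Hypothetical numbers: reconstruct absolute prices from differentials.
start_price = 89_500            # invented start price for one month
diffs = [0, 0, 100, 300, 600]   # invented per-second (price - start_price)

prices = [start_price + d for d in diffs]
print(prices)  # [89500, 89500, 89600, 89800, 90100]
```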
# Read More
To learn the details about the auction process and why it is so darn hard, please read my [article series](https://www.linkedin.com/pulse/part-1-applied-ml-timeline-prediction-shanghai-license-tianyi-pan) on LinkedIn. |
adhisetiawan/bdd10k-colormaps | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 856755130.0
num_examples: 7000
- name: validation
num_bytes: 157758418.0
num_examples: 1000
download_size: 980161348
dataset_size: 1014513548.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Sowmya15/Profanity_English | ---
license: apache-2.0
---
|
OVAWARE/Minecrafter | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 64316640
num_examples: 20556
download_size: 8604268
dataset_size: 64316640
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Minecrafter"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hardel-DW/Pixel-ai | ---
task_categories:
- token-classification
- summarization
- image-classification
language:
- en
tags:
- art
- game
- pixelart
- minecraft
pretty_name: Minecraft Dataset
size_categories:
- 1K<n<10K
--- |
SEACrowd/smsa | ---
tags:
- sentiment-analysis
language:
- ind
---
# smsa
SmSA (Purwarianti and Crisdayanti, 2019) is a sentence-level sentiment analysis dataset: a collection of comments and reviews
in Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists
to construct this dataset. There are three possible sentiments in the SmSA dataset: positive, negative, and neutral.
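The three-way label scheme maps naturally to integer class ids; the ordering below is an assumption for illustration only, so check the loaded dataset's `features` for the actual mapping:

```python
# Hypothetical id<->label mapping for the three SmSA sentiment classes.
id2label = {0: "positive", 1: "neutral", 2: "negative"}
label2id = {label: i for i, label in id2label.items()}
print(label2id["neutral"])  # 1
```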
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@INPROCEEDINGS{8904199,
author={Purwarianti, Ayu and Crisdayanti, Ida Ayu Putu Ari},
booktitle={2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
title={Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using Paragraph Vector},
year={2019},
pages={1-5},
doi={10.1109/ICAICTA.2019.8904199}
}
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Wilie, Bryan and Vincentio, Karissa and Winata, Genta Indra and Cahyawijaya, Samuel and Li, Xiaohong and Lim, Zhi Yuan and Soleman, Sidik and Mahendra, Rahmad and Fung, Pascale and Bahar, Syafri and others},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
pages={843--857},
year={2020}
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
Changlong1/repo_id | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversation
list:
- name: from
dtype: string
- name: value
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 6094060
num_examples: 6290
download_size: 3268093
dataset_size: 6094060
---
# Dataset Card for "repo_id"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |