datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
DBQ/Chloe.Product.prices.France | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: France - Chloe - Product-level price list
tags:
- webscraping
- ecommerce
- Chloe
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: string
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 684003
num_examples: 2459
download_size: 155003
dataset_size: 684003
---
# Chloe web scraped data
## About the website
The **e-commerce industry** in EMEA, particularly in **France**, is increasingly competitive and dynamic. This sector experiences remarkable growth with the rise of digital platforms and evolving consumer behaviors. **Chloe**, operating in this market, offers an array of fashion items and luxury goods. The observed dataset provides crucial information about the **Ecommerce product-list page (PLP)** data on Chloe in France. This data reflects online consumer interactions, preferences, and purchasing patterns, providing valuable insight into the overall market position and performance of Chloe in Frances digital retail landscape. It underlines the potent potential of data analytics in shaping the future trajectories of e-commerce businesses.
## Link to **dataset**
[France - Chloe - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Chloe%20Product-prices%20France/r/reccacrLFdY2bqA41)
|
SoroushVT/mscoco-small | ---
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: sentences
struct:
- name: tokens
list: string
- name: raw
dtype: string
- name: imgid
dtype: int32
- name: sentid
dtype: int32
- name: cocoid
dtype: int32
splits:
- name: train
num_bytes: 3203449478.0
num_examples: 20000
- name: validation
num_bytes: 96102697.0
num_examples: 500
- name: test
num_bytes: 337939999.0
num_examples: 2000
download_size: 754907831
dataset_size: 3637492174.0
---
# Dataset Card for "mscoco-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tarudesu/ViOCD | ---
task_categories:
- text-classification
language:
- vi
tags:
- code
pretty_name: Vietnamese Open-Domain Complaint Detection in E-commerce Websites
size_categories:
- 1K<n<10K
---
# Vietnamese Open-Domain Complaint Detection in E-commerce Websites
This is the official repository for the ViOCD dataset from the paper [Vietnamese Open-Domain Complaint Detection in E-commerce Websites](https://arxiv.org/pdf/2103.10069.pdf), which was accepted at the [SoMeT 2021](https://dblp.org/db/conf/somet/somet2021.html).
# Citation Information
The provided dataset is only used for research purposes!
```
@misc{nguyen2021vietnamese,
title={Vietnamese Complaint Detection on E-Commerce Websites},
author={Nhung Thi-Hong Nguyen and Phuong Phan-Dieu Ha and Luan Thanh Nguyen and Kiet Van Nguyen and Ngan Luu-Thuy Nguyen},
year={2021},
eprint={2104.11969},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Abstract
Customer product reviews play a role in improving the quality of products and services for business organizations or their brands. Complaining is an attitude that expresses dissatisfaction with an event or a product not meeting customer expectations. In this paper, we build a Open-domain Complaint Detection dataset (ViOCD), including 5,485 human-annotated reviews on four categories about product reviews on e-commerce sites. After the data collection phase, we proceed to the annotation task and achieve the inter-annotator agreement Am of 87%. Then, we present an extensive methodology for the research purposes and achieve 92.16% by F1-score for identifying complaints. With the results, in the future, we aim to build a system for open-domain complaint detection in E-commerce websites.
## Dataset
The ViOCD dataset is consist of 5,485 reviews on four categories about product reviews on e-commerce sites.
The dataset is divided into three parts as below:
1. Train set: 4.39K reviews
2. Valid set: 548 reviews
3. Test set: 549 reviews
## Contact
Please feel free to contact us by email luannt@uit.edu.vn if you have any further information! |
Bluebomber182/Agatha-Gillman | ---
license: unknown
---
|
SUST-CSE-Speech/SUBAK.KO | ---
language:
- bn
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_path
dtype: string
splits:
- name: test
num_bytes: 2345138893.961
num_examples: 6533
- name: validation
num_bytes: 2374606148.554
num_examples: 6594
- name: train
num_bytes: 23111288170.312
num_examples: 64491
download_size: 31898660522
dataset_size: 27831033212.827
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- split: train
path: data/train-*
tags:
- speech-recognition
- Bangladeshi Bangla
- Bengali
- speech-corpus
---
# Dataset Card for SUBAK.KO
## Table of Contents
- [Dataset Card for SUBAK.KO](#dataset-card-for-SUBAK.KO)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Developed By** Dept. of CSE, SUST, Bangladesh
- **Paper:** [Bangladeshi Bangla speech corpus for automatic speech recognition research](https://www.sciencedirect.com/science/article/abs/pii/S0167639321001370)
- **Point of Contact:** [Prof. Dr. M. Shahidur Rahman, Dept. of CSE, SUST](mailto:rahmanms@sust.edu)
### Dataset Summary
SUBAK.KO (সুবাক্য), a publicly available annotated Bangladeshi standard Bangla speech corpus, is compiled for automatic speech recognition research.
This corpus contains 241 hours of high-quality speech data, including 229 hours of read speech data and 12 hours of broadcast speech data.
The read speech segment is recorded in a noise-proof studio environment from 33 male and 28 female native Bangladeshi Bangla speakers
representing 8 divisions/34 districts of Bangladesh. Furthermore, the read speech segment comprises a total of 1 hour and 30 minutes
of recorded speech provided by two second language (L2) speakers. The broadcast speech segment is collected from YouTube. SUBAK.KO has
been manually annotated under human supervision to ensure gold-standard labels. The [corresponding paper](https://www.sciencedirect.com/science/article/abs/pii/S0167639321001370) reports detailed information about
the development and baseline performance of SUBAK.KO and cross-dataset evaluation in comparison to [LB-ASRTD](https://openslr.org/53/) corpus.
SUBAK.KO is developed by the researchers from the **Department of Computer Science and Engineering (CSE)** at **Shahjalal University of Science and Technology (SUST),
Bangladesh** with financial support from the Higher Education Quality Enhancement Project (AIF Window 4, CP 3888) for “The Development of
Multi-Platform Speech and Language Processing Software for Bangla” of the University Grants Commission (UGC), Bangladesh.
### Example Usage
To load the full SUBAK.KO corpus, use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("SUST-CSE-Speech/SUBAK.KO")
```
To load a specific split of the SUBAK.KO, define the split and set the streaming mode as True in the following way:
```python
from datasets import load_dataset
dataset = load_dataset("SUST-CSE-Speech/SUBAK.KO", split="test", streaming=True)
```
More documentation on streaming can be found [from this link.](https://huggingface.co/docs/datasets/stream#split-dataset)
Alternatively, you can manually download the zipped SUBAK.KO folder from [this HuggingFace directory.](https://huggingface.co/datasets/ahnafsamin/SUBAK.KO/tree/main/Data)
The csv files corresponding to the train, validation and test splits can be found in the same directory.
### Supported Tasks and Leaderboards
This dataset is designed for the automatic speech recognition task. The associated paper provides the baseline results on SUBAK.KO corpus.
### Languages
Bangladeshi standard Bangla
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file and its transcription.
```
{
'audio': {'path': '/home/username/subakko/part5/wav5/e4/TNM22_MESBA_page_257-258_5_5_Labeled_by_Tomal-20.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'transcript': 'তারপর চার মাস তিনি ছিলেন কেন্দ্রীয় গোয়েন্দা সংস্থার তত্বাবধানে এক নিরাপদ জায়গায়',
'path': '/subakko/part5/wav5/e4/TNM22_MESBA_page_257-258_5_5_Labeled_by_Tomal-20.wav'
}
```
### Data Fields
- audio: A dictionary containing the path to the original audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: The orthographic transcription
- file_path: The relative path to the audio file
### Data Splits
SUBAK.KO has been subdivided into three splits for train, validation and test. It is strongly advised to use identical data splits
for research purposes to facilitate benchmarking across various models.
| | Train | Validation | Test |
| ---------------- | ---------|------------|----------|
| Utterances | 64491 | 6594 | 6533 |
| Duration | 200.3 hrs| 20.5 hrs | 20.3 hrs |
## Additional Information
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/deed.en)
### Citation Information
Please cite the following paper if you use the corpus.
```
@article{kibria2022bangladeshi,
title={Bangladeshi Bangla speech corpus for automatic speech recognition research},
author={Kibria, Shafkat and Samin, Ahnaf Mozib and Kobir, M Humayon and Rahman, M Shahidur and Selim, M Reza and Iqbal, M Zafar},
journal={Speech Communication},
volume={136},
pages={84--97},
year={2022},
publisher={Elsevier}
}
```
### Contributions
Thanks to [Ahnaf Mozib Samin](https://huggingface.co/ahnafsamin) for adding this dataset. |
NickyNicky/guardian_environment_news | ---
dataset_info:
features:
- name: Title
dtype: string
- name: Intro Text
dtype: string
- name: Authors
dtype: string
- name: Article Text
dtype: string
- name: Date Published
dtype: string
splits:
- name: train
num_bytes: 156427227
num_examples: 30059
download_size: 95643205
dataset_size: 156427227
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
---
```
https://www.kaggle.com/
``` |
jonathan-roberts1/PatternNet | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': baseball field
'2': basketball court
'3': beach
'4': bridge
'5': cemetery
'6': chaparral
'7': christmas tree farm
'8': closed road
'9': coastal mansion
'10': crosswalk
'11': dense residential
'12': ferry terminal
'13': football field
'14': forest
'15': freeway
'16': golf course
'17': harbor
'18': intersection
'19': mobile home park
'20': nursing home
'21': oil gas field
'22': oil well
'23': overpass
'24': parking lot
'25': parking space
'26': railway
'27': river
'28': runway
'29': runway marking
'30': shipping yard
'31': solar panel
'32': sparse residential
'33': storage tank
'34': swimming pool
'35': tennis court
'36': transformer station
'37': wastewater treatment plant
splits:
- name: train
num_bytes: 821222673.6
num_examples: 30400
download_size: 1422129774
dataset_size: 821222673.6
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "PatternNet"
## Dataset Description
- **Paper** [PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval](https://www.sciencedirect.com/science/article/pii/S0924271618300042)
### Licensing Information
For research purposes.
## Citation Information
[PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval](https://www.sciencedirect.com/science/article/pii/S0924271618300042)
```
@article{zhou2018patternnet,
title = {PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval},
author = {Zhou, Weixun and Newsam, Shawn and Li, Congmin and Shao, Zhenfeng},
year = 2018,
journal = {ISPRS journal of photogrammetry and remote sensing},
publisher = {Elsevier},
volume = 145,
pages = {197--209}
}
``` |
jpdiazpardo/scream_detection_heavy_metal | ---
task_categories:
- audio-classification
language:
- en
dataset_info:
features:
- name: audio
dtype: audio
- name: scream_type
dtype: string
- name: song_name
dtype: string
- name: band_name
dtype: string
- name: album_name
dtype: string
- name: release_year
dtype: int64
- name: video_id
dtype: string
- name: timestamp_start
dtype: float64
- name: timestamp_end
dtype: float64
- name: sample_rate
dtype: int64
splits:
- name: train
num_bytes: 114577942.825
num_examples: 1575
download_size: 119156239
dataset_size: 114577942.825
license: mit
tags:
- music
size_categories:
- 1K<n<10K
pretty_name: Scream classification in heavy metal music
---
# Dataset card for Scream Detection in Heavy Metal Music
This dataset contains the processed dataset used in the paper "Scream Detection in Heavy Metal Music" (Kalbag & Lerch, 2022) from the Georgia Institute of Technology.
This dataset contains annotations of 57 songs, distributed over 34 bands and 47 albums. The vocal events are labelled into 5 classes:
* Clean (or sung vocal)
* Low Fry Scream
* Mid Fry Scream
* High Fry Scream
* Layered Vocals
The label "Layered Vocals" has been applied to cases where there are examples of two or more classes present simultaneously.
**Paper:** [Scream Detection in Heavy Metal Music](https://arxiv.org/pdf/2205.05580.pdf)
Kalbag, V., & Lerch, A. (2022). Scream detection in heavy metal music. arXiv preprint arXiv:2205.05580.
### How to use
Load the dataset from huggingface in your notebook:
```python
!pip install datasets[audio]
import datasets
dataset = datasets.load_dataset("jpdiazpardo/scream_detection_heavy_metal")
```
### Data Fields
* `audio`: the trimmed audio file from the song.
* `scream_type`: the target variable for classification i.e. layered, lowfry, highfry, midfry, clean.
* `song_name`: the name of the song.
* `band_name`: the name of the artist performing the song.
* `album_name`: the name of the album where the song was released.
* `release_year`: the release year of the song.
* `video_id`: the YouTube video id.
* `timestamp_start`: the start time of the snippet from the full audio.
* `tiemstamp_end`: the end time of the snippet from the full audio.
* `sample_rate`: the sampling rate of the audio.
### Youtube playlist: [Scream Detection Dataset](https://www.youtube.com/playlist?list=PLnkRJFUtBDzWOEnVOiWTVxGOWD70LDwtC)
### Source Data
| band_name | album_name | song_name | release_year | duration_seconds | video_id | bit_depth | bitrate | channels | sample_rate | 3class_split | 6class_split |
|-------------------|------------------------------|-------------------------------------|--------------|------------------|-------------|-----------|---------|----------|-------------|--------------|--------------|
| Abbath | Abbath | Ashes Of The Damned | 2016 | 238.097415 | K5pMoSECagE | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| After The Burial | Dig Deep | Lost In The Static | 2016 | 271.2787302 | hUNAX1UYeAE | 16 | 1411200 | 2 | 44100 | train | train |
| Amon Amarth | Surtur Rising | Destroyer of the Universe | 2011 | 224.1886621 | 5aaOqUYG8Tw | 16 | 1411200 | 2 | 44100 | train | train |
| Amon Amarth | Twilight of the Thunder God | Live For The Kill | 2008 | 249.7538322 | Bh_5ofa__pY | 16 | 1411200 | 2 | 44100 | train | train |
| Amon Amarth | Twilight of the Thunder God | Twilight Of The Thunder God | 2008 | 265.5898413 | edBYB1VCV0k | 16 | 1411200 | 2 | 44100 | train | train |
| Be'lakor | Stone's Reach | Venator | 2009 | 517.9559375 | ainbICPRV8Y | 16 | 1536000 | 2 | 48000 | train | train |
| Behemoth | I Loved You at Your Darkest | Ecclesia Diabolica Catholica | 2018 | 324.3363265 | HKWqzjQAv14 | 16 | 1411200 | 2 | 44100 | train | train |
| Behemoth | I Loved You at Your Darkest | Bartzabel | 2018 | 320.9462132 | Dhfy9TPga-c | 16 | 1411200 | 2 | 44100 | train | train |
| Behemoth | The Satanist | Blow Your Trumpets Gabriel | 2013 | 297.9352381 | Czx-OIyrQwQ | 16 | 1411200 | 2 | 44100 | train | train |
| Born of Osiris | Angel or Alien | White Nile | 2021 | 229.0300417 | 4ShzP_M7W-k | 16 | 1536000 | 2 | 48000 | train | train |
| Cannibal Corpse | A Skeletal Domain | High Velocity Impact Spatter | 2014 | 246.9442177 | B3F10hXdmQY | 16 | 1411200 | 2 | 44100 | train | train |
| Children of Bodom | Hexed | Under Grass And Clover | 2019 | 213.0663039 | 1gpfzCxiQ-A | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Children of Bodom | Are You Dead Yet? | Living Dead Beat | 2005 | 318.1365986 | gG3JZ5vGJsk | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Children Of Bodom | Are You Dead Yet? | Are You Dead Yet | 2005 | 236.2630385 | aNJXS9X0yY0 | 16 | 1411200 | 2 | 44100 | train | train |
| Children of Bodom | Hate Crew Deathroll | Sixpounder | 2003 | 213.3449433 | 09KScSe4hIc | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Children Of Bodom | Follow the Reaper | Everytime I Die | 2000 | 241.650068 | 5cEK1OLhUKQ | 16 | 1411200 | 2 | 44100 | train | train |
| Children Of Bodom | Are You Dead Yet? | In Your Face | 2005 | 236.2630385 | 5SgN5lvWZwQ | 16 | 1411200 | 2 | 44100 | train | train |
| Dark Tranquillity | Lost to Apathy | Lost to Apathy | 2004 | 240.8838095 | GZqfH1LQEOQ | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Dark Tranquillity | Atoma | Atoma | 2016 | 262.8266667 | C_voh9WFbsM | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Death | Leprosy | Pull the Plug | 1988 | 266.8204989 | _duhhVa-dk8 | 16 | 1411200 | 2 | 44100 | train | train |
| Death | Individual Thought Patterns | The Philosopher | 1993 | 216.3867708 | 8256VJ4hkJU | 16 | 1536000 | 2 | 48000 | train | train |
| Decapitated | Anticult | Kill The Cult | 2017 | 296.1705215 | kQUTQTNChbE | 16 | 1411200 | 2 | 44100 | train | train |
| Decapitated | Blood Mantra | Blood Mantra | 2014 | 305.9693424 | 8gILuUdY2cU | 16 | 1411200 | 2 | 44100 | train | train |
| Ensiferum | Unsung Heroes | In My Sword I Trust | 2012 | 330.6753741 | -2WqQY_xSSM | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Enslaved | Caravans To The Outer Worlds | Caravans To The Outer Worlds | 2021 | 382.3165533 | ErTgN2zoTkA | 16 | 1411200 | 2 | 44100 | train | train |
| Godless | Swarm | Deathcult | 2018 | 250.9844898 | 1CdtbR9JHCA | 16 | 1411200 | 2 | 44100 | train | train |
| Gojira | Magma | Stranded | 2016 | 272.4397279 | FNdC_3LR2AI | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Gojira | Magma | Silvera | 2016 | 214.0647619 | iVvXB-Vwnco | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Immortal | Northern Chaos Gods | Northern Chaos Gods | 2018 | 265.4273016 | c5uP9PlEDro | 16 | 1411200 | 2 | 44100 | train | train |
| In Flames | Reroute to Remain | Cloud Connected | 2002 | 223.3295238 | B7iIS91fMAc | 16 | 1411200 | 2 | 44100 | train | train |
| Lamb of God | Lamb of God | Memento Mori | 2020 | 345.3503958 | hBj0-dIU8HI | 16 | 1536000 | 2 | 48000 | train | train |
| Lamb of God | Ashes of the Wake | Laid to Rest | 2004 | 234.1732426 | HL9kaJZw8iw | 16 | 1411200 | 2 | 44100 | train | train |
| Lamb of God | Ashes of the Wake | Omerta | 2004 | 287.5559184 | -xYZM04JxnQ | 16 | 1411200 | 2 | 44100 | train | train |
| Lamb of God | Ashes of the Wake | Now You've Got Something to Die For | 2004 | 219.8000907 | 0m5fIHHfJTw | 16 | 1411200 | 2 | 44100 | train | train |
| Lamb of God | Ashes of the Wake | The Faded Line | 2004 | 278.8019955 | JuRRnVqv2Vc | 16 | 1411200 | 2 | 44100 | train | train |
| Ne Obliviscaris | Citadel | Pyrrhic | 2014 | 590.1351667 | dCyxGNbBWAk | 16 | 1536000 | 2 | 48000 | test/valid | test/valid |
| Ne Obliviscaris | Portal of I | And Plague Flowers the Kaleidoscope | 2012 | 692.8533333 | BNyYiTdqzAY | 16 | 1536000 | 2 | 48000 | test/valid | test/valid |
| Nevermore | This Godless Endeavor | Born | 2005 | 255.8374603 | impRqn44OCA | 16 | 1411200 | 2 | 44100 | train | train |
| Of Mice & Men | Restoring Force | Bones Exposed | 2014 | 271.0697506 | IO-JbFtgeX4 | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Of Mice & Men | Timeless | Obsolete | 2021 | 270.0712925 | hxu3KXVy48w | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Opeth | Blackwater Park | Blackwater Park | 2001 | 732.8798333 | j4xCb_OU_lM | 16 | 1536000 | 2 | 48000 | train | train |
| Parkway Drive | Horizons | Carrion | 2007 | 188.0119728 | BR2kSva4NT8 | 16 | 1411200 | 2 | 44100 | train | train |
| Rings of Saturn | Lugal Ki En | Senseless Massacre | 2014 | 214.3333333 | F3A_3c882us | 16 | 1536000 | 2 | 48000 | test/valid | test/valid |
| Slayer | Seasons in the Abyss | War Ensemble | 1990 | 302.2541497 | jqnC54vbUbU | 16 | 1411200 | 2 | 44100 | train | train |
| Slayer | South of Heaven | South Of Heaven | 1988 | 298.5333333 | 74nTzbgDGWM | 16 | 1536000 | 2 | 48000 | train | train |
| Slipknot | All Hope Is Gone | Psychosocial | 2008 | 302.1148299 | 5abamRO41fE | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Suffocation | …Of the Dark Light | Clarity Through Deprivation | 2017 | 244.1810431 | HUUBI7RJtr8 | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Suicide Silence | The Cleansing | No Pity for a Coward | 2007 | 191.9361451 | hwxTEcHnC1o | 16 | 1411200 | 2 | 44100 | train | train |
| Suicide Silence | No Time to Bleed | Disengage | 2009 | 246.9442177 | FukeNR1ydOA | 16 | 1411200 | 2 | 44100 | train | train |
| Suicide Silence | The Black Crown | You Only Live Once | 2011 | 192.6095238 | ds9s-pzGD0M | 16 | 1411200 | 2 | 44100 | train | train |
| Suicide Silence | The Black Crown | Slaves To Substance | 2011 | 230.1329705 | k27N-jRofrM | 16 | 1411200 | 2 | 44100 | train | train |
| Tesseract | Odyssey | Nocturne | 2015 | 271.0233107 | get0cXOsSXg | 16 | 1411200 | 2 | 44100 | train | train |
| Textures | Silhouettes | Storm Warning | 2008 | 346.8829025 | 4600fGWcn9o | 16 | 1411200 | 2 | 44100 | train | train |
| Textures | Silhouettes | Old Days Born Anew | 2008 | 337.3627211 | 731QmPnjqe4 | 16 | 1411200 | 2 | 44100 | train | train |
| Thy Art Is Murder | Hate | Reign Of Darkness | 2012 | 236.1004989 | 47Plg93oJ1M | 16 | 1411200 | 2 | 44100 | train | train |
| Veil of Maya | False Idol | Overthrow | 2017 | 237.2847166 | GLu-E42-RmA | 16 | 1411200 | 2 | 44100 | test/valid | test/valid |
| Wintersun | Time I | Time | 2012 | 704.7720635 | ebSxxr726_8 | 16 | 1411200 | 2 | 44100 | train | train |
#### Initial Data Collection and Normalization
The data was collected from the YouTube playlist above and trimmed using the timestamps provided in the dataset.
The audio files were passed through the [Spleeter](https://joss.theoj.org/papers/10.21105/joss.02154) (Hennequin et al., 2020) source separation algorithm to separate the vocals from the other components.
### Licensing Information
MIT License
Copyright (c) 2022 Vedant Kalbag
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
### Citation Information
```
@article{
title={Scream Detection in Heavy Metal Music},
author={Vedant Kalbag and Alexabder Lerch},
journal={ArXiv},
year={2022},
volume={2205.05580}
}
```
```
@article{
Hennequin2020,
doi = {10.21105/joss.02154},
url = {https://doi.org/10.21105/joss.02154},
year = {2020}, publisher = {The Open Journal},
volume = {5}, number = {50}, pages = {2154},
author = {Romain Hennequin and Anis Khlif and Felix Voituret and Manuel Moussallam},
title = {Spleeter: a fast and efficient music source separation tool with pre-trained models},
journal = {Journal of Open Source Software}
}
``` |
CravenMcin22/BlackEnergy | ---
license: bigscience-openrail-m
task_categories:
- question-answering
tags:
- legal
size_categories:
- n>1T
--- |
open-llm-leaderboard/details_Josephgflowers__Tinyllama-616M-Cinder | ---
pretty_name: Evaluation run of Josephgflowers/Tinyllama-616M-Cinder
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Josephgflowers/Tinyllama-616M-Cinder](https://huggingface.co/Josephgflowers/Tinyllama-616M-Cinder)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Josephgflowers__Tinyllama-616M-Cinder\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-31T05:02:45.586654](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__Tinyllama-616M-Cinder/blob/main/results_2024-03-31T05-02-45.586654.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.25024089617746503,\n\
\ \"acc_stderr\": 0.030587439859005337,\n \"acc_norm\": 0.25084872224031984,\n\
\ \"acc_norm_stderr\": 0.03139528132826798,\n \"mc1\": 0.2386780905752754,\n\
\ \"mc1_stderr\": 0.014922629695456421,\n \"mc2\": 0.43410131367024446,\n\
\ \"mc2_stderr\": 0.015406538697451911\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.2431740614334471,\n \"acc_stderr\": 0.01253655414458709,\n\
\ \"acc_norm\": 0.2645051194539249,\n \"acc_norm_stderr\": 0.012889272949313368\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.317167894841665,\n\
\ \"acc_stderr\": 0.004644223294727728,\n \"acc_norm\": 0.3639713204540928,\n\
\ \"acc_norm_stderr\": 0.004801572028920792\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.042295258468165044,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.042295258468165044\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.28888888888888886,\n\
\ \"acc_stderr\": 0.03915450630414251,\n \"acc_norm\": 0.28888888888888886,\n\
\ \"acc_norm_stderr\": 0.03915450630414251\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.18421052631578946,\n \"acc_stderr\": 0.0315469804508223,\n\
\ \"acc_norm\": 0.18421052631578946,\n \"acc_norm_stderr\": 0.0315469804508223\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2188679245283019,\n \"acc_stderr\": 0.025447863825108614,\n\
\ \"acc_norm\": 0.2188679245283019,\n \"acc_norm_stderr\": 0.025447863825108614\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2152777777777778,\n\
\ \"acc_stderr\": 0.034370793441061344,\n \"acc_norm\": 0.2152777777777778,\n\
\ \"acc_norm_stderr\": 0.034370793441061344\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.040201512610368466,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.040201512610368466\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n\
\ \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.27167630057803466,\n\
\ \"acc_stderr\": 0.03391750322321659,\n \"acc_norm\": 0.27167630057803466,\n\
\ \"acc_norm_stderr\": 0.03391750322321659\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.044405219061793275,\n\
\ \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.044405219061793275\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n\
\ \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.24680851063829787,\n \"acc_stderr\": 0.0281854413012341,\n\
\ \"acc_norm\": 0.24680851063829787,\n \"acc_norm_stderr\": 0.0281854413012341\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2543859649122807,\n\
\ \"acc_stderr\": 0.040969851398436695,\n \"acc_norm\": 0.2543859649122807,\n\
\ \"acc_norm_stderr\": 0.040969851398436695\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.25517241379310346,\n \"acc_stderr\": 0.03632984052707842,\n\
\ \"acc_norm\": 0.25517241379310346,\n \"acc_norm_stderr\": 0.03632984052707842\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.25396825396825395,\n \"acc_stderr\": 0.022418042891113946,\n \"\
acc_norm\": 0.25396825396825395,\n \"acc_norm_stderr\": 0.022418042891113946\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.19047619047619047,\n\
\ \"acc_stderr\": 0.03512207412302052,\n \"acc_norm\": 0.19047619047619047,\n\
\ \"acc_norm_stderr\": 0.03512207412302052\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036624,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036624\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.3064516129032258,\n\
\ \"acc_stderr\": 0.026226485652553883,\n \"acc_norm\": 0.3064516129032258,\n\
\ \"acc_norm_stderr\": 0.026226485652553883\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n\
\ \"acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\"\
: 0.24,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.2606060606060606,\n \"acc_stderr\": 0.034277431758165236,\n\
\ \"acc_norm\": 0.2606060606060606,\n \"acc_norm_stderr\": 0.034277431758165236\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.2828282828282828,\n \"acc_stderr\": 0.03208779558786752,\n \"\
acc_norm\": 0.2828282828282828,\n \"acc_norm_stderr\": 0.03208779558786752\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.30569948186528495,\n \"acc_stderr\": 0.03324837939758159,\n\
\ \"acc_norm\": 0.30569948186528495,\n \"acc_norm_stderr\": 0.03324837939758159\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2692307692307692,\n \"acc_stderr\": 0.02248938979365484,\n \
\ \"acc_norm\": 0.2692307692307692,\n \"acc_norm_stderr\": 0.02248938979365484\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.29259259259259257,\n \"acc_stderr\": 0.027738969632176088,\n \
\ \"acc_norm\": 0.29259259259259257,\n \"acc_norm_stderr\": 0.027738969632176088\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.24369747899159663,\n \"acc_stderr\": 0.027886828078380572,\n\
\ \"acc_norm\": 0.24369747899159663,\n \"acc_norm_stderr\": 0.027886828078380572\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389023,\n \"\
acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389023\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.21284403669724772,\n \"acc_stderr\": 0.017549376389313694,\n \"\
acc_norm\": 0.21284403669724772,\n \"acc_norm_stderr\": 0.017549376389313694\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.375,\n \"acc_stderr\": 0.033016908987210894,\n \"acc_norm\": 0.375,\n\
\ \"acc_norm_stderr\": 0.033016908987210894\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.23039215686274508,\n \"acc_stderr\": 0.029554292605695077,\n\
\ \"acc_norm\": 0.23039215686274508,\n \"acc_norm_stderr\": 0.029554292605695077\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.24472573839662448,\n \"acc_stderr\": 0.027985699387036423,\n \
\ \"acc_norm\": 0.24472573839662448,\n \"acc_norm_stderr\": 0.027985699387036423\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.2242152466367713,\n\
\ \"acc_stderr\": 0.027991534258519527,\n \"acc_norm\": 0.2242152466367713,\n\
\ \"acc_norm_stderr\": 0.027991534258519527\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.20610687022900764,\n \"acc_stderr\": 0.03547771004159462,\n\
\ \"acc_norm\": 0.20610687022900764,\n \"acc_norm_stderr\": 0.03547771004159462\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794088,\n \"\
acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794088\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2037037037037037,\n\
\ \"acc_stderr\": 0.03893542518824848,\n \"acc_norm\": 0.2037037037037037,\n\
\ \"acc_norm_stderr\": 0.03893542518824848\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.22699386503067484,\n \"acc_stderr\": 0.032910995786157686,\n\
\ \"acc_norm\": 0.22699386503067484,\n \"acc_norm_stderr\": 0.032910995786157686\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.24107142857142858,\n\
\ \"acc_stderr\": 0.04059867246952686,\n \"acc_norm\": 0.24107142857142858,\n\
\ \"acc_norm_stderr\": 0.04059867246952686\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.20388349514563106,\n \"acc_stderr\": 0.03989139859531771,\n\
\ \"acc_norm\": 0.20388349514563106,\n \"acc_norm_stderr\": 0.03989139859531771\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.20512820512820512,\n\
\ \"acc_stderr\": 0.02645350805404035,\n \"acc_norm\": 0.20512820512820512,\n\
\ \"acc_norm_stderr\": 0.02645350805404035\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26309067688378035,\n\
\ \"acc_stderr\": 0.01574549716904906,\n \"acc_norm\": 0.26309067688378035,\n\
\ \"acc_norm_stderr\": 0.01574549716904906\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.21098265895953758,\n \"acc_stderr\": 0.021966309947043128,\n\
\ \"acc_norm\": 0.21098265895953758,\n \"acc_norm_stderr\": 0.021966309947043128\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2446927374301676,\n\
\ \"acc_stderr\": 0.014378169884098443,\n \"acc_norm\": 0.2446927374301676,\n\
\ \"acc_norm_stderr\": 0.014378169884098443\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.23202614379084968,\n \"acc_stderr\": 0.024170840879341012,\n\
\ \"acc_norm\": 0.23202614379084968,\n \"acc_norm_stderr\": 0.024170840879341012\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2090032154340836,\n\
\ \"acc_stderr\": 0.023093140398374224,\n \"acc_norm\": 0.2090032154340836,\n\
\ \"acc_norm_stderr\": 0.023093140398374224\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.025407197798890155,\n\
\ \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.025407197798890155\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.23049645390070922,\n \"acc_stderr\": 0.02512373922687241,\n \
\ \"acc_norm\": 0.23049645390070922,\n \"acc_norm_stderr\": 0.02512373922687241\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23533246414602346,\n\
\ \"acc_stderr\": 0.01083443254391221,\n \"acc_norm\": 0.23533246414602346,\n\
\ \"acc_norm_stderr\": 0.01083443254391221\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.029029422815681404,\n\
\ \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.029029422815681404\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.22712418300653595,\n \"acc_stderr\": 0.01694985327921238,\n \
\ \"acc_norm\": 0.22712418300653595,\n \"acc_norm_stderr\": 0.01694985327921238\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.23636363636363636,\n\
\ \"acc_stderr\": 0.040693063197213754,\n \"acc_norm\": 0.23636363636363636,\n\
\ \"acc_norm_stderr\": 0.040693063197213754\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.24897959183673468,\n \"acc_stderr\": 0.027682979522960234,\n\
\ \"acc_norm\": 0.24897959183673468,\n \"acc_norm_stderr\": 0.027682979522960234\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.25870646766169153,\n\
\ \"acc_stderr\": 0.03096590312357303,\n \"acc_norm\": 0.25870646766169153,\n\
\ \"acc_norm_stderr\": 0.03096590312357303\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.21,\n \"acc_stderr\": 0.04093601807403326,\n \
\ \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.04093601807403326\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.19879518072289157,\n\
\ \"acc_stderr\": 0.03106939026078942,\n \"acc_norm\": 0.19879518072289157,\n\
\ \"acc_norm_stderr\": 0.03106939026078942\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.03301405946987251,\n\
\ \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.03301405946987251\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2386780905752754,\n\
\ \"mc1_stderr\": 0.014922629695456421,\n \"mc2\": 0.43410131367024446,\n\
\ \"mc2_stderr\": 0.015406538697451911\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5327545382794001,\n \"acc_stderr\": 0.014022300570434139\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/Josephgflowers/Tinyllama-616M-Cinder
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|arc:challenge|25_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|gsm8k|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hellaswag|10_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T05-02-45.586654.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-31T05-02-45.586654.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- '**/details_harness|winogrande|5_2024-03-31T05-02-45.586654.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-31T05-02-45.586654.parquet'
- config_name: results
data_files:
- split: 2024_03_31T05_02_45.586654
path:
- results_2024-03-31T05-02-45.586654.parquet
- split: latest
path:
- results_2024-03-31T05-02-45.586654.parquet
---
# Dataset Card for Evaluation run of Josephgflowers/Tinyllama-616M-Cinder
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Josephgflowers/Tinyllama-616M-Cinder](https://huggingface.co/Josephgflowers/Tinyllama-616M-Cinder) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Josephgflowers__Tinyllama-616M-Cinder",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-31T05:02:45.586654](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__Tinyllama-616M-Cinder/blob/main/results_2024-03-31T05-02-45.586654.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.25024089617746503,
"acc_stderr": 0.030587439859005337,
"acc_norm": 0.25084872224031984,
"acc_norm_stderr": 0.03139528132826798,
"mc1": 0.2386780905752754,
"mc1_stderr": 0.014922629695456421,
"mc2": 0.43410131367024446,
"mc2_stderr": 0.015406538697451911
},
"harness|arc:challenge|25": {
"acc": 0.2431740614334471,
"acc_stderr": 0.01253655414458709,
"acc_norm": 0.2645051194539249,
"acc_norm_stderr": 0.012889272949313368
},
"harness|hellaswag|10": {
"acc": 0.317167894841665,
"acc_stderr": 0.004644223294727728,
"acc_norm": 0.3639713204540928,
"acc_norm_stderr": 0.004801572028920792
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.23,
"acc_stderr": 0.042295258468165044,
"acc_norm": 0.23,
"acc_norm_stderr": 0.042295258468165044
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.03915450630414251,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.03915450630414251
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.18421052631578946,
"acc_stderr": 0.0315469804508223,
"acc_norm": 0.18421052631578946,
"acc_norm_stderr": 0.0315469804508223
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2188679245283019,
"acc_stderr": 0.025447863825108614,
"acc_norm": 0.2188679245283019,
"acc_norm_stderr": 0.025447863825108614
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2152777777777778,
"acc_stderr": 0.034370793441061344,
"acc_norm": 0.2152777777777778,
"acc_norm_stderr": 0.034370793441061344
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.2,
"acc_stderr": 0.040201512610368466,
"acc_norm": 0.2,
"acc_norm_stderr": 0.040201512610368466
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.27167630057803466,
"acc_stderr": 0.03391750322321659,
"acc_norm": 0.27167630057803466,
"acc_norm_stderr": 0.03391750322321659
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.044405219061793275,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.044405219061793275
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.24680851063829787,
"acc_stderr": 0.0281854413012341,
"acc_norm": 0.24680851063829787,
"acc_norm_stderr": 0.0281854413012341
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2543859649122807,
"acc_stderr": 0.040969851398436695,
"acc_norm": 0.2543859649122807,
"acc_norm_stderr": 0.040969851398436695
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.25517241379310346,
"acc_stderr": 0.03632984052707842,
"acc_norm": 0.25517241379310346,
"acc_norm_stderr": 0.03632984052707842
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.25396825396825395,
"acc_stderr": 0.022418042891113946,
"acc_norm": 0.25396825396825395,
"acc_norm_stderr": 0.022418042891113946
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.19047619047619047,
"acc_stderr": 0.03512207412302052,
"acc_norm": 0.19047619047619047,
"acc_norm_stderr": 0.03512207412302052
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036624,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036624
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3064516129032258,
"acc_stderr": 0.026226485652553883,
"acc_norm": 0.3064516129032258,
"acc_norm_stderr": 0.026226485652553883
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2955665024630542,
"acc_stderr": 0.032104944337514575,
"acc_norm": 0.2955665024630542,
"acc_norm_stderr": 0.032104944337514575
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.2606060606060606,
"acc_stderr": 0.034277431758165236,
"acc_norm": 0.2606060606060606,
"acc_norm_stderr": 0.034277431758165236
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.2828282828282828,
"acc_stderr": 0.03208779558786752,
"acc_norm": 0.2828282828282828,
"acc_norm_stderr": 0.03208779558786752
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.30569948186528495,
"acc_stderr": 0.03324837939758159,
"acc_norm": 0.30569948186528495,
"acc_norm_stderr": 0.03324837939758159
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2692307692307692,
"acc_stderr": 0.02248938979365484,
"acc_norm": 0.2692307692307692,
"acc_norm_stderr": 0.02248938979365484
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.29259259259259257,
"acc_stderr": 0.027738969632176088,
"acc_norm": 0.29259259259259257,
"acc_norm_stderr": 0.027738969632176088
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.24369747899159663,
"acc_stderr": 0.027886828078380572,
"acc_norm": 0.24369747899159663,
"acc_norm_stderr": 0.027886828078380572
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2847682119205298,
"acc_stderr": 0.03684881521389023,
"acc_norm": 0.2847682119205298,
"acc_norm_stderr": 0.03684881521389023
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.21284403669724772,
"acc_stderr": 0.017549376389313694,
"acc_norm": 0.21284403669724772,
"acc_norm_stderr": 0.017549376389313694
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.375,
"acc_stderr": 0.033016908987210894,
"acc_norm": 0.375,
"acc_norm_stderr": 0.033016908987210894
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.23039215686274508,
"acc_stderr": 0.029554292605695077,
"acc_norm": 0.23039215686274508,
"acc_norm_stderr": 0.029554292605695077
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.24472573839662448,
"acc_stderr": 0.027985699387036423,
"acc_norm": 0.24472573839662448,
"acc_norm_stderr": 0.027985699387036423
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.2242152466367713,
"acc_stderr": 0.027991534258519527,
"acc_norm": 0.2242152466367713,
"acc_norm_stderr": 0.027991534258519527
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.20610687022900764,
"acc_stderr": 0.03547771004159462,
"acc_norm": 0.20610687022900764,
"acc_norm_stderr": 0.03547771004159462
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.23140495867768596,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.23140495867768596,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2037037037037037,
"acc_stderr": 0.03893542518824848,
"acc_norm": 0.2037037037037037,
"acc_norm_stderr": 0.03893542518824848
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22699386503067484,
"acc_stderr": 0.032910995786157686,
"acc_norm": 0.22699386503067484,
"acc_norm_stderr": 0.032910995786157686
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.24107142857142858,
"acc_stderr": 0.04059867246952686,
"acc_norm": 0.24107142857142858,
"acc_norm_stderr": 0.04059867246952686
},
"harness|hendrycksTest-management|5": {
"acc": 0.20388349514563106,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.20388349514563106,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.20512820512820512,
"acc_stderr": 0.02645350805404035,
"acc_norm": 0.20512820512820512,
"acc_norm_stderr": 0.02645350805404035
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.26309067688378035,
"acc_stderr": 0.01574549716904906,
"acc_norm": 0.26309067688378035,
"acc_norm_stderr": 0.01574549716904906
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.21098265895953758,
"acc_stderr": 0.021966309947043128,
"acc_norm": 0.21098265895953758,
"acc_norm_stderr": 0.021966309947043128
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2446927374301676,
"acc_stderr": 0.014378169884098443,
"acc_norm": 0.2446927374301676,
"acc_norm_stderr": 0.014378169884098443
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.23202614379084968,
"acc_stderr": 0.024170840879341012,
"acc_norm": 0.23202614379084968,
"acc_norm_stderr": 0.024170840879341012
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2090032154340836,
"acc_stderr": 0.023093140398374224,
"acc_norm": 0.2090032154340836,
"acc_norm_stderr": 0.023093140398374224
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2962962962962963,
"acc_stderr": 0.025407197798890155,
"acc_norm": 0.2962962962962963,
"acc_norm_stderr": 0.025407197798890155
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23049645390070922,
"acc_stderr": 0.02512373922687241,
"acc_norm": 0.23049645390070922,
"acc_norm_stderr": 0.02512373922687241
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.23533246414602346,
"acc_stderr": 0.01083443254391221,
"acc_norm": 0.23533246414602346,
"acc_norm_stderr": 0.01083443254391221
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.029029422815681404,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.029029422815681404
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.22712418300653595,
"acc_stderr": 0.01694985327921238,
"acc_norm": 0.22712418300653595,
"acc_norm_stderr": 0.01694985327921238
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.23636363636363636,
"acc_stderr": 0.040693063197213754,
"acc_norm": 0.23636363636363636,
"acc_norm_stderr": 0.040693063197213754
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.24897959183673468,
"acc_stderr": 0.027682979522960234,
"acc_norm": 0.24897959183673468,
"acc_norm_stderr": 0.027682979522960234
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.25870646766169153,
"acc_stderr": 0.03096590312357303,
"acc_norm": 0.25870646766169153,
"acc_norm_stderr": 0.03096590312357303
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.21,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.21,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-virology|5": {
"acc": 0.19879518072289157,
"acc_stderr": 0.03106939026078942,
"acc_norm": 0.19879518072289157,
"acc_norm_stderr": 0.03106939026078942
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.03301405946987251,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.03301405946987251
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2386780905752754,
"mc1_stderr": 0.014922629695456421,
"mc2": 0.43410131367024446,
"mc2_stderr": 0.015406538697451911
},
"harness|winogrande|5": {
"acc": 0.5327545382794001,
"acc_stderr": 0.014022300570434139
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
MoritzLaurer/cap_sotu_simple | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: string
- name: label_cap2
dtype: int64
- name: label_cap2_text
dtype: string
- name: label_cap4
dtype: int64
- name: year
dtype: int64
- name: president
dtype: string
- name: pres_party
dtype: int64
- name: id_original
dtype: int64
- name: text_original
dtype: string
- name: text_preceding
dtype: string
- name: text_following
dtype: string
- name: doc_id
dtype: int64
- name: idx
dtype: int64
splits:
- name: train
num_bytes: 3700466
num_examples: 6339
download_size: 1940441
dataset_size: 3700466
---
# Dataset Card for "cap_sotu_simple"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jirong/mobile_aloha | ---
license: apache-2.0
---
|
mirzaei2114/stackoverflowVQA-filtered | ---
dataset_info:
features:
- name: Id
dtype: int64
- name: PostTypeId
dtype: int64
- name: AcceptedAnswerId
dtype: int64
- name: Question
dtype: string
- name: Answer
dtype: string
- name: Image
dtype: image
splits:
- name: train
num_bytes: 12815093667.144684
num_examples: 183636
- name: test
num_bytes: 1423970510.001132
num_examples: 20405
download_size: 13692500865
dataset_size: 14239064177.145817
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
license: mit
task_categories:
- visual-question-answering
- question-answering
language:
- en
tags:
- code
pretty_name: StackOverflowVQA-filtered
size_categories:
- 100K<n<1M
--- |
eperim/fine-tune-ds | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 77769098.46242681
num_examples: 86821
- name: validation
num_bytes: 5722017.701170724
num_examples: 5826
- name: evaluation
num_bytes: 196430.40512086247
num_examples: 200
download_size: 17231030
dataset_size: 83687546.5687184
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: evaluation
path: data/evaluation-*
---
|
Falah/line_art_drawing_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1552162
num_examples: 10000
download_size: 216025
dataset_size: 1552162
---
# Dataset Card for "line_art_drawing_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaraAgroAI/CADI-AI | ---
license: cc-by-sa-4.0
task_categories:
- object-detection
language:
- en
tags:
- object detection
- vision
size_categories:
- 1K<n<10K
extra_gated_heading: "Acknowledge license to accept the repository"
extra_gated_button_content: "Acknowledge license"
extra_gated_fields:
I agree to attribute the creator of this repository: checkbox
---
---
## Cashew Disease Identication with Artificial Intelligence (CADI-AI) Dataset
This repository contains a comprehensive dataset of cashew images captured by drones, accompanied by meticulously annotated labels.
Each high-resolution image in the dataset has a resolution of 1600x1300 pixels, providing fine details for analysis and model training.
To facilitate efficient object detection, each image is paired with a corresponding text file in YOLO format.
The YOLO format file contains annotations, including class labels and bounding box coordinates.
### Dataset Labels
```
['abiotic', 'insect', 'disease']
```
### Number of Images
```json
{'train': 3788, 'valid': 710, 'test': 238}
```
### Number of Instances Annotated
```json
{'insect':1618, 'abiotic':13960, 'disease':7032}
```
### Folder structure after unzipping repective folders
```markdown
Data/
└── train/
├── images
├── labels
└── val/
├── images
├── labels
└── test/
├── images
├── labels
```
### Dataset Information
The dataset was created by a team of data scientists from the KaraAgro AI Foundation,
with support from agricultural scientists and officers.
The creation of this dataset was made possible through funding of the
Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) through their projects
[Market-Oriented Value Chains for Jobs & Growth in the ECOWAS Region (MOVE)](https://www.giz.de/en/worldwide/108524.html) and
[FAIR Forward - Artificial Intelligence for All](https://www.bmz-digital.global/en/overview-of-initiatives/fair-forward/), which GIZ implements on
behalf the German Federal Ministry for Economic Cooperation and Development (BMZ).
For detailed information regarding the dataset, we invite you to explore the accompanying datasheet available [here](https://drive.google.com/file/d/1viv-PtZC_j9S_K1mPl4R1lFRKxoFlR_M/view?usp=sharing).
This comprehensive resource offers a deeper understanding of the dataset's composition, variables, data collection methodologies, and other relevant details.
|
imvladikon/leipzig_corpora_collection | ---
language:
- ar
- en
- he
- de
- it
- fr
- pl
- pt
- ru
- uk
task_categories:
- text-generation
- fill-mask
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
config_names:
- links
---
## Leipzig Corpora Collection
The [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download) presents corpora in different languages using the same format and comparable sources. All data are available as plain text files and can be imported into a MySQL database by using the provided import script. They are intended both for scientific use by corpus linguists as well as for applications such as knowledge extraction programs.
The corpora are identical in format and similar in size and content. They contain randomly selected sentences in the language of the corpus and are available in sizes from 10,000 sentences up to 1 million sentences. The sources are either newspaper texts or texts randomly collected from the web. The texts are split into sentences. Non-sentences and foreign language material was removed. Because word co-occurrence information is useful for many applications, these data are precomputed and included as well. For each word, the most significant words appearing as immediate left or right neighbor or appearing anywhere within the same sentence are given. More information about the format and content of these files can be found [here](https://wortschatz.uni-leipzig.de/en/download).
The corpora are automatically collected from carefully selected public sources without considering in detail the content of the contained text. No responsibility is taken for the content of the data. In particular, the views and opinions expressed in specific parts of the data remain exclusively with the authors.
## Dataset Usage
### Links
A "links" subset contains URLs with corresponding language and id (based on `https://corpora.uni-leipzig.de/`)
```python
from datasets import load_dataset
ds = load_dataset("imvladikon/leipzig_corpora_collection", "links")
for row in ds["train"]:
print(row)
```
```
{'id': '0', 'data_id': '0', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/ara_news_2005-2009_10K.tar.gz', 'language': 'Arabic', 'language_short': 'ara', 'year': '2005', 'size': '10K'}
{'id': '1', 'data_id': '1', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/ara_news_2005-2009_30K.tar.gz', 'language': 'Arabic', 'language_short': 'ara', 'year': '2005', 'size': '30K'}
{'id': '2', 'data_id': '2', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/ara_news_2005-2009_100K.tar.gz', 'language': 'Arabic', 'language_short': 'ara', 'year': '2005', 'size': '100K'}
....
```
where is possible to choose specific `data_id` to load a specific dataset, where `data_id` is name of the subset
Links possible to filter according to metdata attributes:
```python
links = load_dataset("imvladikon/leipzig_corpora_collection", "links", split="train")
english_2019 = links.filter(lambda x: x["language"] == "English" and x["year"] == "2019")
for sample in english_2019:
print(sample)
```
```
{'id': '277', 'data_id': 'eng_news_2019_10K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_10K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '10K'}
{'id': '278', 'data_id': 'eng_news_2019_30K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_30K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '30K'}
{'id': '279', 'data_id': 'eng_news_2019_100K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_100K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '100K'}
{'id': '280', 'data_id': 'eng_news_2019_300K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_300K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '300K'}
{'id': '281', 'data_id': 'eng_news_2019_1M', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng_news_2019_1M.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '1M'}
{'id': '541', 'data_id': 'eng-za_web_2019_10K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_10K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '10K'}
{'id': '542', 'data_id': 'eng-za_web_2019_30K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_30K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '30K'}
{'id': '543', 'data_id': 'eng-za_web_2019_100K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_100K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '100K'}
{'id': '544', 'data_id': 'eng-za_web_2019_300K', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_300K.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '300K'}
{'id': '545', 'data_id': 'eng-za_web_2019_1M', 'url': 'https://downloads.wortschatz-leipzig.de/corpora/eng-za_web_2019_1M.tar.gz', 'language': 'English', 'language_short': 'eng', 'year': '2019', 'size': '1M'}
```
### Corpus
after selecting `data_id`, let's say `heb_wikipedia_2021_1M`, we could load it:
```python
dataset_he = load_dataset("imvladikon/leipzig_corpora_collection", "heb_wikipedia_2021_1M", split="train")
for row in dataset_he:
print(row)
```
another example:
```python
dataset_en = load_dataset("imvladikon/leipzig_corpora_collection", "eng-simple_wikipedia_2021_300K", split="train")
print(dataset_en[76576])
```
sample:
```json
{'id': '79214', 'sentence': 'He was a member of the assembly from 1972 to 1977.'}
```
## Citation
If you use one of these corpora in your work, please, to cite [this work](http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf):
```
@inproceedings{goldhahn-etal-2012-building,
title = "Building Large Monolingual Dictionaries at the {L}eipzig Corpora Collection: From 100 to 200 Languages",
author = "Goldhahn, Dirk and
Eckart, Thomas and
Quasthoff, Uwe",
editor = "Calzolari, Nicoletta and
Choukri, Khalid and
Declerck, Thierry and
Do{\u{g}}an, Mehmet U{\u{g}}ur and
Maegaard, Bente and
Mariani, Joseph and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf",
pages = "759--765",
abstract = "The Leipzig Corpora Collection offers free online access to 136 monolingual dictionaries enriched with statistical information. In this paper we describe current advances of the project in collecting and processing text data automatically for a large number of languages. Our main interest lies in languages of low density, where only few text data exists online. The aim of this approach is to create monolingual dictionaries and statistical information for a high number of new languages and to expand the existing dictionaries, opening up new possibilities for linguistic typology and other research. Focus of this paper will be set on the infrastructure for the automatic acquisition of large amounts of monolingual text in many languages from various sources. Preliminary results of the collection of text data will be presented. The mainly language-independent framework for preprocessing, cleaning and creating the corpora and computing the necessary statistics will also be depicted.",
}
``` |
ovior/twitter_dataset_1713186124 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2701387
num_examples: 8070
download_size: 1547679
dataset_size: 2701387
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
linagora/SUMM-RE_sm | ---
dataset_info:
features:
- name: meeting_id
dtype: string
- name: speaker_id
dtype: string
- name: audio_id
dtype: string
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: ipus
list:
- name: end
dtype: float64
- name: start
dtype: float64
- name: words
list:
- name: end
dtype: float64
- name: start
dtype: float64
- name: word
dtype: string
- name: phonemes
list:
- name: end
dtype: float64
- name: phoneme
dtype: string
- name: start
dtype: float64
splits:
- name: train
num_bytes: 4440887851.0
num_examples: 39
download_size: 4416239830
dataset_size: 4440887851.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-4.0
task_categories:
- automatic-speech-recognition
- voice-activity-detection
language:
- fr
tags:
- NLP
- conversational
- automatic speech recognition
- voice activity detection
- inter-pausal units
pretty_name: SUMM-RE small
size_categories:
- 100K<n<1M
---
# Dataset Card for SUMM-RE small
Manually corrected transcripts of French conversations, aligned with the audio signal.
## Dataset Details
### Dataset Description
The SUMM-RE dataset is a corpus of meeting-style conversations in French created for the purpose of the SUMM-RE project (ANR-20-CE23-0017). SUMM-RE small is a subset of the full SUMM-RE corpus for which the transcripts have been manually corrected and aligned with the audio down to phoneme level. It can be used for the evaluation of automatic speech recognition and voice activity detection models.
The SUMM-RE small subset consists of 10 randomly selected conversations. Each conversation lasts roughly 20 minutes and involves 3-4 speakers. Each participant has an individual microphone and associated .wav file leading to 39 audio files in all.
- **Created by:** The corpus was recorded and manually annotated by the Language and Speech Lab (LPL) at the University of Aix-Marseille, France.
- **Funded by:** The National Research Agency of France (ANR) for the SUMM-RE project (ANR-20-CE23-0017).
- **Shared by:** LINAGORA (coordinator of the SUMM-RE project)
- **Language:** French
- **License:** CC BY-SA 4.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** Both gold corrected and automatic transcripts (produced with Whisper) can be found on [Ortolang](https://www.ortolang.fr/market/corpora/summ-re-asru).
- **Paper:** [More Information Needed]
## Uses
### Direct Use
This version of SUMM-RE small is designed for the evaluation of automatic speech recognition models and voice activity detection for conversational, spoken French.
### Out-of-Scope Use
Due to its size, the corpus is not suitable for model training.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
- **meeting_id**, e.g. 001a_PARL, includes:
- experiment number, e.g. 001
- meeting order: a|b|c (there were three meetings per experiment)
- experiment type: E (experiment) | P (pilot experiment)
- scenario/topic: A|B|C|D|E
- meeting type: R (reporting) | D (decision) | P (planning)
- recording location: L (LPL) | H (H2C2 studio) | Z (Zoom) | D (at home)
- **speaker_id**
- **audio_id**: meeting_id + speaker_id
- **audio**: the .wav file for an individual speaker
- **transcript**: the manually corrected transcript (corrected from Whisper transcripts)
- **ipus**: a list of start and end times for manually annotated interpausal units (units of speech from a single speaker that are separated by silences above a certain threshold)
- **words**: a list of start and end times for each word
- **phonemes**: a list of start and end times for each phoneme
## Dataset Creation
### Curation Rationale
The full SUMM-RE corpus, which includes meeting summaries, is designed to train and evaluate models for meeting summarization. SUMM-RE small is an extract of this corpus used to evaluate various stages of the summarization pipeline, starting with automatic transcription of the audio signal.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
The SUMM-RE corpus is an original corpus designed by members of LINAGORA and the University of Aix-Marseille and recorded by the latter.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Corpus design and production:
- University of Aix-Marseille: Océane Granier (corpus conception, recording, annotation), Laurent Prévot (corpus conception, annotatation, supervision), Hiroyoshi Yamasaki (corpus cleaning, alignment and anonymization), Roxanne Bertrand (corpus conception and annotation) with helpful input from Brigitte Bigi and Stéphane Rauzy.
- LINAGORA: Julie Hunter, Kate Thompson and Guokan Shang (corpus conception)
Corpus participants:
- Participants for the in-person conversations were recruited on the University of Aix-Marseille campus.
- Participants for the zoom meetings were recruited through [Prolific](https://www.prolific.com/).
### Annotations
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
Principal annotator: Océane Granier
Additional assistance from: Laurent Prévot, Hiroyoshi Yamasaki and Roxane Bertrand
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
The audio and transcripts have been (semi-automatically) anonymized.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
Hiroyoshi Yamasaki, Jérôme Louradour, Julie Hunter and Laurent Prévot (2023): "Transcribing and aligning conversational speech: A hybrid pipeline applied to French conversations," Workshop on Automatic Speech Recognition and Understanding.
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
|
karimasbar/test | ---
license: mit
---
|
nekotov/camera-set | ---
license: mit
language:
- en
tags:
- camera
- canon
- lense
- charger
pretty_name: 'Dataset of Canon: cameras,lenses, chargers.'
---
### Dataset contains:
```
[
'charger',
'lense',
'camera'
]
```
|
bob80333/doreco_southengland | ---
license: cc-by-4.0
---
# Dataset Card
This dataset is the aligned phoneme subset of the DoReCo South England dataset, split into utterances + phonetic transcriptions based on pause lengths / total length,
with the goal of creating utterances < 30s for fine-tuning speech recognition models on phoneme recognition, not one phoneme at a time, but rather for entire utterances.
It is already randomly pre-split into train/dev/test sets, with 80% in train, 10% in dev, and the final 10% in test.
---
Link to original dataset website: https://doreco.huma-num.fr/languages/sout3282
Original dataset citation:
```
@incollection{doreco-sout3282,
address = {Berlin \& Lyon},
author = {Schiborr, Nils Norman},
booktitle = {Language Documentation Reference Corpus (DoReCo) 1.2},
editor = {Seifart, Frank and Paschen, Ludger and Stave, Matthew},
publisher = {Leibniz-Zentrum Allgemeine Sprachwissenschaft \& laboratoire Dynamique Du Langage (UMR5596, CNRS \& Université Lyon 2)},
title = {English (Southern England) DoReCo dataset},
url = {https://doreco.huma-num.fr/languages/sout3282},
doi = {10.34847/nkl.9c271u5g},
urldate = {16/01/2024},
year = {2022}
}
``` |
lewtun/test-model-outputs | ---
dataset_info:
features:
- name: id
dtype: int64
- name: source
dtype: string
- name: prompt
dtype: string
- name: outputs
list:
- name: model
dtype: string
- name: outputs
sequence: string
splits:
- name: train
num_bytes: 982
num_examples: 1
download_size: 5435
dataset_size: 982
---
# Dataset Card for "test-model-outputs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_vistagi__Mixtral-8x7b-v0.1-sft | ---
pretty_name: Evaluation run of vistagi/Mixtral-8x7b-v0.1-sft
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [vistagi/Mixtral-8x7b-v0.1-sft](https://huggingface.co/vistagi/Mixtral-8x7b-v0.1-sft)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_vistagi__Mixtral-8x7b-v0.1-sft\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-18T06:30:00.873036](https://huggingface.co/datasets/open-llm-leaderboard/details_vistagi__Mixtral-8x7b-v0.1-sft/blob/main/results_2024-02-18T06-30-00.873036.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7134552339034452,\n\
\ \"acc_stderr\": 0.030055997546363594,\n \"acc_norm\": 0.7181597948300631,\n\
\ \"acc_norm_stderr\": 0.030631631253278484,\n \"mc1\": 0.31456548347613217,\n\
\ \"mc1_stderr\": 0.016255241993179185,\n \"mc2\": 0.4674384125733044,\n\
\ \"mc2_stderr\": 0.01414272854245227\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6322525597269625,\n \"acc_stderr\": 0.01409099561816849,\n\
\ \"acc_norm\": 0.6655290102389079,\n \"acc_norm_stderr\": 0.013787460322441374\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6694881497709619,\n\
\ \"acc_stderr\": 0.004694360968929403,\n \"acc_norm\": 0.8639713204540929,\n\
\ \"acc_norm_stderr\": 0.0034211839093201673\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6962962962962963,\n\
\ \"acc_stderr\": 0.03972552884785137,\n \"acc_norm\": 0.6962962962962963,\n\
\ \"acc_norm_stderr\": 0.03972552884785137\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8223684210526315,\n \"acc_stderr\": 0.031103182383123387,\n\
\ \"acc_norm\": 0.8223684210526315,\n \"acc_norm_stderr\": 0.031103182383123387\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7886792452830189,\n \"acc_stderr\": 0.025125766484827845,\n\
\ \"acc_norm\": 0.7886792452830189,\n \"acc_norm_stderr\": 0.025125766484827845\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8472222222222222,\n\
\ \"acc_stderr\": 0.030085743248565666,\n \"acc_norm\": 0.8472222222222222,\n\
\ \"acc_norm_stderr\": 0.030085743248565666\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n\
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7052023121387283,\n\
\ \"acc_stderr\": 0.03476599607516478,\n \"acc_norm\": 0.7052023121387283,\n\
\ \"acc_norm_stderr\": 0.03476599607516478\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n\
\ \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.82,\n \"acc_stderr\": 0.03861229196653695,\n \"acc_norm\": 0.82,\n\
\ \"acc_norm_stderr\": 0.03861229196653695\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6808510638297872,\n \"acc_stderr\": 0.03047297336338004,\n\
\ \"acc_norm\": 0.6808510638297872,\n \"acc_norm_stderr\": 0.03047297336338004\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6491228070175439,\n\
\ \"acc_stderr\": 0.04489539350270698,\n \"acc_norm\": 0.6491228070175439,\n\
\ \"acc_norm_stderr\": 0.04489539350270698\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6551724137931034,\n \"acc_stderr\": 0.03960933549451208,\n\
\ \"acc_norm\": 0.6551724137931034,\n \"acc_norm_stderr\": 0.03960933549451208\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.47619047619047616,\n \"acc_stderr\": 0.025722097064388525,\n \"\
acc_norm\": 0.47619047619047616,\n \"acc_norm_stderr\": 0.025722097064388525\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5634920634920635,\n\
\ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.5634920634920635,\n\
\ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8354838709677419,\n \"acc_stderr\": 0.021090847745939313,\n \"\
acc_norm\": 0.8354838709677419,\n \"acc_norm_stderr\": 0.021090847745939313\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.6354679802955665,\n \"acc_stderr\": 0.0338640574606209,\n \"acc_norm\"\
: 0.6354679802955665,\n \"acc_norm_stderr\": 0.0338640574606209\n },\n\
\ \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\"\
: 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n\
\ \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"\
acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9378238341968912,\n \"acc_stderr\": 0.017426974154240524,\n\
\ \"acc_norm\": 0.9378238341968912,\n \"acc_norm_stderr\": 0.017426974154240524\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7128205128205128,\n \"acc_stderr\": 0.022939925418530613,\n\
\ \"acc_norm\": 0.7128205128205128,\n \"acc_norm_stderr\": 0.022939925418530613\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.37037037037037035,\n \"acc_stderr\": 0.02944316932303154,\n \
\ \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.02944316932303154\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7899159663865546,\n \"acc_stderr\": 0.026461398717471874,\n\
\ \"acc_norm\": 0.7899159663865546,\n \"acc_norm_stderr\": 0.026461398717471874\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248437,\n \"\
acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248437\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8844036697247707,\n \"acc_stderr\": 0.01370874953417264,\n \"\
acc_norm\": 0.8844036697247707,\n \"acc_norm_stderr\": 0.01370874953417264\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6342592592592593,\n \"acc_stderr\": 0.03284738857647206,\n \"\
acc_norm\": 0.6342592592592593,\n \"acc_norm_stderr\": 0.03284738857647206\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8578431372549019,\n \"acc_stderr\": 0.024509803921568624,\n \"\
acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.024509803921568624\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8860759493670886,\n \"acc_stderr\": 0.020681745135884562,\n \
\ \"acc_norm\": 0.8860759493670886,\n \"acc_norm_stderr\": 0.020681745135884562\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7757847533632287,\n\
\ \"acc_stderr\": 0.027991534258519517,\n \"acc_norm\": 0.7757847533632287,\n\
\ \"acc_norm_stderr\": 0.027991534258519517\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.816793893129771,\n \"acc_stderr\": 0.03392770926494732,\n\
\ \"acc_norm\": 0.816793893129771,\n \"acc_norm_stderr\": 0.03392770926494732\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8760330578512396,\n \"acc_stderr\": 0.03008309871603521,\n \"\
acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.03008309871603521\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n\
\ \"acc_stderr\": 0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n\
\ \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5625,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.5625,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.033932957297610096,\n\
\ \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.033932957297610096\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9145299145299145,\n\
\ \"acc_stderr\": 0.018315891685625852,\n \"acc_norm\": 0.9145299145299145,\n\
\ \"acc_norm_stderr\": 0.018315891685625852\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \
\ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8735632183908046,\n\
\ \"acc_stderr\": 0.011884488905895555,\n \"acc_norm\": 0.8735632183908046,\n\
\ \"acc_norm_stderr\": 0.011884488905895555\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8034682080924855,\n \"acc_stderr\": 0.021393961404363844,\n\
\ \"acc_norm\": 0.8034682080924855,\n \"acc_norm_stderr\": 0.021393961404363844\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.40558659217877097,\n\
\ \"acc_stderr\": 0.016421670506339175,\n \"acc_norm\": 0.40558659217877097,\n\
\ \"acc_norm_stderr\": 0.016421670506339175\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8202614379084967,\n \"acc_stderr\": 0.02198603218206415,\n\
\ \"acc_norm\": 0.8202614379084967,\n \"acc_norm_stderr\": 0.02198603218206415\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7845659163987139,\n\
\ \"acc_stderr\": 0.023350225475471442,\n \"acc_norm\": 0.7845659163987139,\n\
\ \"acc_norm_stderr\": 0.023350225475471442\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8487654320987654,\n \"acc_stderr\": 0.019935086092149883,\n\
\ \"acc_norm\": 0.8487654320987654,\n \"acc_norm_stderr\": 0.019935086092149883\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5283687943262412,\n \"acc_stderr\": 0.02977945095730305,\n \
\ \"acc_norm\": 0.5283687943262412,\n \"acc_norm_stderr\": 0.02977945095730305\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5325945241199479,\n\
\ \"acc_stderr\": 0.012743072942653368,\n \"acc_norm\": 0.5325945241199479,\n\
\ \"acc_norm_stderr\": 0.012743072942653368\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8014705882352942,\n \"acc_stderr\": 0.024231013370541087,\n\
\ \"acc_norm\": 0.8014705882352942,\n \"acc_norm_stderr\": 0.024231013370541087\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7826797385620915,\n \"acc_stderr\": 0.016684820929148587,\n \
\ \"acc_norm\": 0.7826797385620915,\n \"acc_norm_stderr\": 0.016684820929148587\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7918367346938775,\n \"acc_stderr\": 0.025991117672813292,\n\
\ \"acc_norm\": 0.7918367346938775,\n \"acc_norm_stderr\": 0.025991117672813292\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8805970149253731,\n\
\ \"acc_stderr\": 0.02292879327721974,\n \"acc_norm\": 0.8805970149253731,\n\
\ \"acc_norm_stderr\": 0.02292879327721974\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.93,\n \"acc_stderr\": 0.0256432399976243,\n \
\ \"acc_norm\": 0.93,\n \"acc_norm_stderr\": 0.0256432399976243\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.02517298435015575,\n\
\ \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.02517298435015575\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31456548347613217,\n\
\ \"mc1_stderr\": 0.016255241993179185,\n \"mc2\": 0.4674384125733044,\n\
\ \"mc2_stderr\": 0.01414272854245227\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8153117600631413,\n \"acc_stderr\": 0.010905978112156885\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5617892342683851,\n \
\ \"acc_stderr\": 0.013666915917255069\n }\n}\n```"
repo_url: https://huggingface.co/vistagi/Mixtral-8x7b-v0.1-sft
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|arc:challenge|25_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|arc:challenge|25_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|gsm8k|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|gsm8k|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hellaswag|10_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hellaswag|10_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T06-21-11.005197.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T06-30-00.873036.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-18T06-30-00.873036.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- '**/details_harness|winogrande|5_2024-02-18T06-21-11.005197.parquet'
- split: 2024_02_18T06_30_00.873036
path:
- '**/details_harness|winogrande|5_2024-02-18T06-30-00.873036.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-18T06-30-00.873036.parquet'
- config_name: results
data_files:
- split: 2024_02_18T06_21_11.005197
path:
- results_2024-02-18T06-21-11.005197.parquet
- split: 2024_02_18T06_30_00.873036
path:
- results_2024-02-18T06-30-00.873036.parquet
- split: latest
path:
- results_2024-02-18T06-30-00.873036.parquet
---
# Dataset Card for Evaluation run of vistagi/Mixtral-8x7b-v0.1-sft
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [vistagi/Mixtral-8x7b-v0.1-sft](https://huggingface.co/vistagi/Mixtral-8x7b-v0.1-sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_vistagi__Mixtral-8x7b-v0.1-sft",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-18T06:30:00.873036](https://huggingface.co/datasets/open-llm-leaderboard/details_vistagi__Mixtral-8x7b-v0.1-sft/blob/main/results_2024-02-18T06-30-00.873036.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7134552339034452,
"acc_stderr": 0.030055997546363594,
"acc_norm": 0.7181597948300631,
"acc_norm_stderr": 0.030631631253278484,
"mc1": 0.31456548347613217,
"mc1_stderr": 0.016255241993179185,
"mc2": 0.4674384125733044,
"mc2_stderr": 0.01414272854245227
},
"harness|arc:challenge|25": {
"acc": 0.6322525597269625,
"acc_stderr": 0.01409099561816849,
"acc_norm": 0.6655290102389079,
"acc_norm_stderr": 0.013787460322441374
},
"harness|hellaswag|10": {
"acc": 0.6694881497709619,
"acc_stderr": 0.004694360968929403,
"acc_norm": 0.8639713204540929,
"acc_norm_stderr": 0.0034211839093201673
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6962962962962963,
"acc_stderr": 0.03972552884785137,
"acc_norm": 0.6962962962962963,
"acc_norm_stderr": 0.03972552884785137
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8223684210526315,
"acc_stderr": 0.031103182383123387,
"acc_norm": 0.8223684210526315,
"acc_norm_stderr": 0.031103182383123387
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7886792452830189,
"acc_stderr": 0.025125766484827845,
"acc_norm": 0.7886792452830189,
"acc_norm_stderr": 0.025125766484827845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8472222222222222,
"acc_stderr": 0.030085743248565666,
"acc_norm": 0.8472222222222222,
"acc_norm_stderr": 0.030085743248565666
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7052023121387283,
"acc_stderr": 0.03476599607516478,
"acc_norm": 0.7052023121387283,
"acc_norm_stderr": 0.03476599607516478
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.45098039215686275,
"acc_stderr": 0.049512182523962625,
"acc_norm": 0.45098039215686275,
"acc_norm_stderr": 0.049512182523962625
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653695,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653695
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6808510638297872,
"acc_stderr": 0.03047297336338004,
"acc_norm": 0.6808510638297872,
"acc_norm_stderr": 0.03047297336338004
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6491228070175439,
"acc_stderr": 0.04489539350270698,
"acc_norm": 0.6491228070175439,
"acc_norm_stderr": 0.04489539350270698
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6551724137931034,
"acc_stderr": 0.03960933549451208,
"acc_norm": 0.6551724137931034,
"acc_norm_stderr": 0.03960933549451208
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.025722097064388525,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.025722097064388525
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5634920634920635,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.5634920634920635,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8354838709677419,
"acc_stderr": 0.021090847745939313,
"acc_norm": 0.8354838709677419,
"acc_norm_stderr": 0.021090847745939313
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6354679802955665,
"acc_stderr": 0.0338640574606209,
"acc_norm": 0.6354679802955665,
"acc_norm_stderr": 0.0338640574606209
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8242424242424242,
"acc_stderr": 0.02972094300622445,
"acc_norm": 0.8242424242424242,
"acc_norm_stderr": 0.02972094300622445
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.024063156416822516,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.024063156416822516
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9378238341968912,
"acc_stderr": 0.017426974154240524,
"acc_norm": 0.9378238341968912,
"acc_norm_stderr": 0.017426974154240524
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7128205128205128,
"acc_stderr": 0.022939925418530613,
"acc_norm": 0.7128205128205128,
"acc_norm_stderr": 0.022939925418530613
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.02944316932303154,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.02944316932303154
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7899159663865546,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.7899159663865546,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4900662251655629,
"acc_stderr": 0.04081677107248437,
"acc_norm": 0.4900662251655629,
"acc_norm_stderr": 0.04081677107248437
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8844036697247707,
"acc_stderr": 0.01370874953417264,
"acc_norm": 0.8844036697247707,
"acc_norm_stderr": 0.01370874953417264
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6342592592592593,
"acc_stderr": 0.03284738857647206,
"acc_norm": 0.6342592592592593,
"acc_norm_stderr": 0.03284738857647206
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8578431372549019,
"acc_stderr": 0.024509803921568624,
"acc_norm": 0.8578431372549019,
"acc_norm_stderr": 0.024509803921568624
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8860759493670886,
"acc_stderr": 0.020681745135884562,
"acc_norm": 0.8860759493670886,
"acc_norm_stderr": 0.020681745135884562
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7757847533632287,
"acc_stderr": 0.027991534258519517,
"acc_norm": 0.7757847533632287,
"acc_norm_stderr": 0.027991534258519517
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.816793893129771,
"acc_stderr": 0.03392770926494732,
"acc_norm": 0.816793893129771,
"acc_norm_stderr": 0.03392770926494732
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.03008309871603521,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.03008309871603521
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.036809181416738807,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.036809181416738807
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5625,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.8640776699029126,
"acc_stderr": 0.033932957297610096,
"acc_norm": 0.8640776699029126,
"acc_norm_stderr": 0.033932957297610096
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9145299145299145,
"acc_stderr": 0.018315891685625852,
"acc_norm": 0.9145299145299145,
"acc_norm_stderr": 0.018315891685625852
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8735632183908046,
"acc_stderr": 0.011884488905895555,
"acc_norm": 0.8735632183908046,
"acc_norm_stderr": 0.011884488905895555
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8034682080924855,
"acc_stderr": 0.021393961404363844,
"acc_norm": 0.8034682080924855,
"acc_norm_stderr": 0.021393961404363844
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.40558659217877097,
"acc_stderr": 0.016421670506339175,
"acc_norm": 0.40558659217877097,
"acc_norm_stderr": 0.016421670506339175
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8202614379084967,
"acc_stderr": 0.02198603218206415,
"acc_norm": 0.8202614379084967,
"acc_norm_stderr": 0.02198603218206415
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7845659163987139,
"acc_stderr": 0.023350225475471442,
"acc_norm": 0.7845659163987139,
"acc_norm_stderr": 0.023350225475471442
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8487654320987654,
"acc_stderr": 0.019935086092149883,
"acc_norm": 0.8487654320987654,
"acc_norm_stderr": 0.019935086092149883
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5283687943262412,
"acc_stderr": 0.02977945095730305,
"acc_norm": 0.5283687943262412,
"acc_norm_stderr": 0.02977945095730305
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5325945241199479,
"acc_stderr": 0.012743072942653368,
"acc_norm": 0.5325945241199479,
"acc_norm_stderr": 0.012743072942653368
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8014705882352942,
"acc_stderr": 0.024231013370541087,
"acc_norm": 0.8014705882352942,
"acc_norm_stderr": 0.024231013370541087
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7826797385620915,
"acc_stderr": 0.016684820929148587,
"acc_norm": 0.7826797385620915,
"acc_norm_stderr": 0.016684820929148587
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7918367346938775,
"acc_stderr": 0.025991117672813292,
"acc_norm": 0.7918367346938775,
"acc_norm_stderr": 0.025991117672813292
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8805970149253731,
"acc_stderr": 0.02292879327721974,
"acc_norm": 0.8805970149253731,
"acc_norm_stderr": 0.02292879327721974
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.93,
"acc_stderr": 0.0256432399976243,
"acc_norm": 0.93,
"acc_norm_stderr": 0.0256432399976243
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8771929824561403,
"acc_stderr": 0.02517298435015575,
"acc_norm": 0.8771929824561403,
"acc_norm_stderr": 0.02517298435015575
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31456548347613217,
"mc1_stderr": 0.016255241993179185,
"mc2": 0.4674384125733044,
"mc2_stderr": 0.01414272854245227
},
"harness|winogrande|5": {
"acc": 0.8153117600631413,
"acc_stderr": 0.010905978112156885
},
"harness|gsm8k|5": {
"acc": 0.5617892342683851,
"acc_stderr": 0.013666915917255069
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
manu/illuin_layout_dataset_text_only | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1169564468
num_examples: 488563
download_size: 548246721
dataset_size: 1169564468
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "illuin_layout_dataset_text_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
knowgen/Manufacturing_EN_cleaned | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3041681879
num_examples: 1329108
- name: test
num_bytes: 380199889
num_examples: 166138
- name: validation
num_bytes: 378685278
num_examples: 166138
download_size: 2049475712
dataset_size: 3800567046
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
emilykang/pathology_train | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 925009198.572
num_examples: 1501
download_size: 890173675
dataset_size: 925009198.572
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pourmand1376/persian-qa-translated | ---
dataset_info:
features:
- name: input
dtype: float64
- name: instruction
dtype: string
- name: original_instruction
dtype: string
- name: original_output
dtype: string
- name: output
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 360540755
num_examples: 153127
download_size: 186783724
dataset_size: 360540755
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- question-answering
- translation
- text-generation
language:
- fa
- en
pretty_name: Persian QA Translated
size_categories:
- 100K<n<1M
---
# Dataset Card for "persian-qa-translated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WmVernon/nfl-boxscores | ---
license: apache-2.0
---
|
dmrau/cqudubstack-stats | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 47795
num_examples: 652
- name: corpus
num_bytes: 42923933
num_examples: 42269
download_size: 24679799
dataset_size: 42971728
---
# Dataset Card for "cqudubstack-stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
neeraj-gloify/historia | ---
license: mit
tags:
- Storyteller
- narrator
- short story
language:
- en
---
# Historia
This dataset assists LLM in improving the ability to narrate a tale depending on the prompts provided.
#### Name Derived From
The Greek word historia originally meant inquiry, the act of seeking knowledge, as well as the knowledge that results from inquiry. |
open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca | ---
pretty_name: Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca](https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-22T21:36:39.212716](https://huggingface.co/datasets/open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca/blob/main/results_2023-09-22T21-36-39.212716.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"\
em_stderr\": 0.0,\n \"f1\": 0.0004404362416107381,\n \"f1_stderr\"\
: 6.976502994544788e-05,\n \"acc\": 0.2541436464088398,\n \"acc_stderr\"\
: 0.007025277661412096\n },\n \"harness|drop|3\": {\n \"em\": 0.0,\n\
\ \"em_stderr\": 0.0,\n \"f1\": 0.0004404362416107381,\n \"\
f1_stderr\": 6.976502994544788e-05\n },\n \"harness|gsm8k|5\": {\n \
\ \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5082872928176796,\n \"acc_stderr\": 0.014050555322824192\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_09_22T21_20_12.395485
path:
- '**/details_harness|drop|3_2023-09-22T21-20-12.395485.parquet'
- split: 2023_09_22T21_36_39.212716
path:
- '**/details_harness|drop|3_2023-09-22T21-36-39.212716.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-22T21-36-39.212716.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_22T21_20_12.395485
path:
- '**/details_harness|gsm8k|5_2023-09-22T21-20-12.395485.parquet'
- split: 2023_09_22T21_36_39.212716
path:
- '**/details_harness|gsm8k|5_2023-09-22T21-36-39.212716.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-22T21-36-39.212716.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_22T21_20_12.395485
path:
- '**/details_harness|winogrande|5_2023-09-22T21-20-12.395485.parquet'
- split: 2023_09_22T21_36_39.212716
path:
- '**/details_harness|winogrande|5_2023-09-22T21-36-39.212716.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-22T21-36-39.212716.parquet'
- config_name: results
data_files:
- split: 2023_09_22T21_20_12.395485
path:
- results_2023-09-22T21-20-12.395485.parquet
- split: 2023_09_22T21_36_39.212716
path:
- results_2023-09-22T21-36-39.212716.parquet
- split: latest
path:
- results_2023-09-22T21-36-39.212716.parquet
---
# Dataset Card for Evaluation run of Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca](https://huggingface.co/Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T21:36:39.212716](https://huggingface.co/datasets/open-llm-leaderboard/details_Andron00e__YetAnother_Open-Llama-3B-LoRA-OpenOrca/blob/main/results_2023-09-22T21-36-39.212716.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0004404362416107381,
"f1_stderr": 6.976502994544788e-05,
"acc": 0.2541436464088398,
"acc_stderr": 0.007025277661412096
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0004404362416107381,
"f1_stderr": 6.976502994544788e-05
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5082872928176796,
"acc_stderr": 0.014050555322824192
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
coranremora/with | ---
license: openrail
---
|
ks21/Joe_Buck_the_GOATv2 | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
sequence:
sequence:
sequence: uint8
splits:
- name: train
num_bytes: 258171320
num_examples: 40
download_size: 64357832
dataset_size: 258171320
---
# Dataset Card for "Joe_Buck_the_GOATv2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adalib/numpy-cond-gen-1 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: api
dtype: string
splits:
- name: train
num_bytes: 25904109429.0
num_examples: 2088002
download_size: 8983763078
dataset_size: 25904109429.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
parsee-mizuhashi/2 | ---
license: mit
---
|
MinhMinh09/dictionary-20240409 | ---
language:
- vi
- en
license: mit
---
|
CVasNLPExperiments/Caltech101_with_background_test_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_6084 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_ViT_L_14_Attributes_ViT_L_14_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 2521067
num_examples: 6084
- name: fewshot_1_clip_tags_ViT_L_14_Attributes_ViT_L_14_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 4868124
num_examples: 6084
- name: fewshot_3_clip_tags_ViT_L_14_Attributes_ViT_L_14_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 9570212
num_examples: 6084
- name: fewshot_0__Attributes_ViT_B_16_descriptors_text_davinci_003_full_clip_tags_ViT_B_16_simple_specific_rices
num_bytes: 2503329
num_examples: 6084
- name: fewshot_1__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices
num_bytes: 4649211
num_examples: 6084
- name: fewshot_1__Attributes_ViT_B_16_descriptors_text_davinci_003_full_clip_tags_ViT_B_16_simple_specific_rices
num_bytes: 4833725
num_examples: 6084
- name: fewshot_3__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices
num_bytes: 9130589
num_examples: 6084
- name: fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices
num_bytes: 2415416
num_examples: 6084
download_size: 5652574
dataset_size: 40491673
configs:
- config_name: default
data_files:
- split: fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices
path: data/fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices-*
---
# Dataset Card for "Caltech101_with_background_test_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Xmaster6y/fruit-vegetable-concepts | ---
license: mit
---
|
jaryong/jrtest1 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7786
num_examples: 32
download_size: 4172
dataset_size: 7786
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Anwaarma/BP-balanced-I04 | ---
dataset_info:
features:
- name: Target
dtype: int64
- name: PC
dtype: string
- name: GSHARE
dtype: string
- name: GA table
dtype: string
splits:
- name: train
num_bytes: 41004500
num_examples: 82009
- name: test
num_bytes: 10251500
num_examples: 20503
download_size: 2353976
dataset_size: 51256000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Thang/wikides | ---
license: cc-by-sa-4.0
---
# Dataset
## Topic-independent split
Topics are randomly selected in datasets. For a common purpose, we suggest THESE DATASETS.
* test_random.json
* training_random.json
* validation_random.json
# GitHub
* https://github.com/declare-lab/WikiDes/
# Citation
## APA
Ta, H. T., Rahman, A. B. S., Majumder, N., Hussain, A., Najjar, L., Howard, N., ... & Gelbukh, A. (2022). WikiDes: A Wikipedia-based dataset for generating short descriptions from paragraphs. *Information Fusion*.
## BibTeX
```
@article{Ta_2022,
doi = {10.1016/j.inffus.2022.09.022},
url = {https://doi.org/10.1016%2Fj.inffus.2022.09.022},
year = 2022,
month = {sep},
publisher = {Elsevier {BV}},
author = {Hoang Thang Ta and Abu Bakar Siddiqur Rahman and Navonil Majumder and Amir Hussain and Lotfollah Najjar and Newton Howard and Soujanya Poria and Alexander Gelbukh},
title = {{WikiDes}: A Wikipedia-based dataset for generating short descriptions from paragraphs},
journal = {Information Fusion}}
```
# Paper links
* https://doi.org/10.1016%2Fj.inffus.2022.09.022
* https://arxiv.org/abs/2209.13101
# Contact
Hoang Thang Ta, tahoangthang@gmail.com
|
Seanxh/twitter_dataset_1713153100 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 22488
num_examples: 52
download_size: 11680
dataset_size: 22488
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaleemWaheed/twitter_dataset_1713119902 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 24421
num_examples: 57
download_size: 12769
dataset_size: 24421
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_6.7b_Attributes_Caption_ns_3333 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_16
num_bytes: 299297082.375
num_examples: 3333
- name: fewshot_1_bs_16
num_bytes: 300147832.375
num_examples: 3333
- name: fewshot_3_bs_16
num_bytes: 301862752.375
num_examples: 3333
download_size: 885565554
dataset_size: 901307667.125
---
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_6.7b_Attributes_Caption_ns_3333"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stroopc/emoji_therapist | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: emoji_therapist
dtype: string
splits:
- name: train
num_bytes: 7983
num_examples: 54
download_size: 8128
dataset_size: 7983
---
# Dataset Card for "emoji_therapist"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nexdata/Uyghur_Speech_Data_by_Mobile_Phone | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Uyghur_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/46?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
It collects 2,058 people from the Uighur community, with a balanced ratio of men and women. The recording contents are 300,000 Uighur spoken sentences, and the recording environment is quiet indoor. All sentences were manually and accurately transcribed and annotated with noise signs.
For more details, please refer to the link: https://www.nexdata.ai/datasets/46?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Uyghur
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
spitis/rpr_scenarios | ---
dataset_info:
features:
- name: category
dtype: string
- name: scenario
dtype: string
- name: prompt
dtype: string
- name: criteria
dtype: string
- name: more_pref
dtype: string
- name: less_pref
dtype: string
splits:
- name: train
num_bytes: 64165925
num_examples: 37045
download_size: 38358273
dataset_size: 64165925
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alexfabbri/answersumm | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
tags:
- query-based-summarization
---
# Dataset Card for answersumm
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/Alex-Fabbri/AnswerSumm
- **Paper:** [AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization](https://arxiv.org/abs/2111.06474)
- **Point of Contact:** [Alex Fabbri](mailto:afabbri@salesforce.com)
### Dataset Summary
The AnswerSumm dataset is an English-language dataset of questions and answers collected from a [StackExchange data dump](https://archive.org/details/stackexchange). The dataset was created to support the task of query-focused answer summarization with an emphasis on multi-perspective answers.
The dataset consists of over 4200 such question-answer threads annotated by professional linguists and includes over 8700 summaries. We decompose the task into several annotation stages, including sentence selection, sentence clustering, cluster summarization, and overall summarization. For each thread, the annotator writes two summaries, one in which the annotator is asked to mark sentences that are included in the final summary and instructed to more closely use the words in these sentences rather than abstract. We have multiple annotators for a subset of the examples in the test set.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data point comprises a question with a `title` field containing the overview of the question and a `question` that elaborates on the title. The answers are sentence tokenized and contain relevance labels, labels for inclusion in the final summary, and cluster labels. We include cluster summaries, overall summaries, and additional metadata.
An example from the AnswerSumm test set looks as follows:
```json
{
"example_id": 9_24,
"annotator_id": [1],
"question": {
"author": "gaming.stackexchange.com/users/11/Jeffrey",
"forum": "gaming.stackexchange.com",
"link": "gaming.stackexchange.com/questions/1",
"question": "Now that the Engineer update has come, there will be lots of Engineers building up everywhere. How should this best be handled?",
"question_tags": "\<team-fortress-2\>",
"title": "What is a good strategy to deal with lots of engineers turtling on the other team?"
},
"answers": [
{
"answer_details": {
"author": "gaming.stackexchange.com/users/44/Corv1nus",
"score": 49
}
"sents": [
"text": "Lots of medics with lots of ubers on high-damage-dealing classes."
"label": [0],
"label_summ": [0],
"cluster_id": [[-1]],
]
...
},
...
]
"summaries": [
[
"Demomen usually work best against a sentry farm. Heavies or pyros can also be effective. Medics should be in the frontline to absorb the shock. Build a teleporter to help your team through.",
"Demomen are best against a sentry farm. Heavies or pyros can also be effective. The medic should lead the uber combo. ..."
]
]
"cluster_summaries":[
"Demomen are best against a sentry farm.",
"Heavies or pyros can also be effective.",
...
]
}
```
### Data Fields
- question: contains metadata about the question and forum
- question: the body of the question post
- title: the title of the question post
- question_tags: user-provided question tags
- link: link to the original question
- author: link to the author's user page (as requested by StackExchange's attribution policy)
- answers: list of sentence-tokenized answers
- answer_details: dictionary consisting of link to answer author's user page (author) and community-assigned score (score)
- sents: sentences that compose the answer
- text: the sentence text
- label: a list (to generalize to multi-annotator scenarios) of whether the sentence is labeled as relevant or not for answering the question.
- label_summ: a list of whether the sentence was used to write the first annotator-created summary (that is the first summary in `summaries`)
- cluster_id: a list of lists (potentially multiple annotators and a sentence can be in potentially multiple clusters) of the clusters a sentence belongs to. -1 implies no cluster. This label can be used to aggregate sentences into clusters across answers.
- summaries: list of list of summaries. Each annotator wrote two summaries. The first in the list is the summary in which the instructor was told to mark sentences relevant for inclusion in the summary and then closely use the words of these sentences, while for the second summary the annotator was asked to paraphrase and condense the cluster summaries but was not asked to reduce abstraction.
- annotator_id: a list of the ids of the annotator(s) who completed all tasks related to that thread.
- mismatch_info: a dict of any issues in processing the excel files on which annotations were completed.
- rel_sent_not_in_cluster: list of booleans indicating whether there are sentences that are labeled as relevant but were not included in a cluster.
- cluster_sents_not_matched: list of sentences that were found in a cluster but which our processing script didn't automatically match to sentences in the source answers. If cluster summarization is of interest to the user you may want to process these examples separately using clusters_orig.
### Data Splits
The data is split into training, validation, and test sets using stratified sampling on the source forums. There are 2783, 500, and 1000 train/validation/test threads, respectively.
## Dataset Creation
### Curation Rationale
AnswerSumm was built to provide a testbed for query-focused summarization of multi-perspective answers. The data collection was designed to tackle multiple subtasks including sentence selection, clustering, cluster summarization, and overall summarization.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering examples based on a whitelist of forums from StackExchange which we believed would be able to be summarized by a lay person. We describe. We asked annotators to remove examples which required technical knowledge or additional context beyond what was present in the answers.
#### Who are the source language producers?
The language producers are the users of the StackExchange forums sampled.
### Annotations
#### Annotation process
Please see our [paper](https://arxiv.org/pdf/2111.06474.pdf) for additional annotation details. We began with a pre-pilot of 50 examples, followed by a pilot of 500 and a final annotation of 5000 examples. This release contains the results of the final data collection. We will release the instructions used in data collection.
#### Who are the annotators?
The annotators are professional linguists who were obtained through an internal contractor.
### Personal and Sensitive Information
We did not anonymize the data. We followed the specifications from StackExchange [here](https://archive.org/details/stackexchange) to include author information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop systems that automatically summarize multi-perspective answers. A system that succeeds at this task would be able to summarize many perspectives present in an answer and not limit itself to a single perspective.
### Discussion of Biases
While StackExchange allows for the exchange of information and ideas, hate and harassment may exist on this site. While our annotators did not flag examples in this process, we encourage users of the dataset to reach out with concerns.
We also note that this dataset is limited in its monolingual coverage.
## Additional Information
### Dataset Curators
The dataset was collected by Alex Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, and Mona Diab during work done at Facebook.
### Licensing Information
The data is released under cc-by-sa 4.0 following the original StackExchange [release](https://archive.org/details/stackexchange).
### Citation Information
```bibtex
@misc{fabbri-etal-2022-answersumm,
title={AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization},
author={Alexander R. Fabbri and Xiaojian Wu and Srini Iyer and Haoran Li and Mona Diab },
year={2022},
eprint={2111.06474},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2111.06474}
}
```
|
nateraw/fuego-20230203-171955-25ab48 | ---
tags:
- fuego
fuego:
id: 20230203-171955-25ab48
status: done
script: main.py
requirements_file: requirements.txt
space_id: nateraw/fuego-20230203-171955-25ab48
space_hardware: cpu-basic
github_repo_id: pytorch/examples
github_repo_branch: main
github_repo_sha: d8456a36d1bbb22f72b003f59406a19a0a0547c3
---
|
pnadel/michgovparsed8_16 | ---
dataset_info:
features:
- name: From
dtype: string
- name: Sent
dtype: string
- name: To
dtype: string
- name: Cc
dtype: string
- name: Subject
dtype: string
- name: Attachment
dtype: string
- name: Body
dtype: string
- name: org_file
dtype: string
- name: formattedSent
dtype: string
splits:
- name: train
num_bytes: 14531072
num_examples: 5933
download_size: 6320342
dataset_size: 14531072
---
# Dataset Card for "michgovparsed8_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
abhishek/autotrain-data-3iqe-6zi8-5xf73 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: autotrain_image
dtype: image
- name: autotrain_label
dtype:
class_label:
names:
'0': daisy
'1': dandelion
'2': rose
'3': sunflower
'4': tulip
splits:
- name: train
num_bytes: 114410927.672
num_examples: 2196
- name: validation
num_bytes: 33682367.0
num_examples: 550
download_size: 166945851
dataset_size: 148093294.672
---
# Dataset Card for "autotrain-data-3iqe-6zi8-5xf73"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Disfluency/disfluency-es | ---
language:
- es
size_categories:
- n<1K
pretty_name: Disfluency
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 8000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 21315396.0
num_examples: 270
- name: test
num_bytes: 1731088.0
num_examples: 30
download_size: 1674700
dataset_size: 23046484.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CATIE-AQ/universal_dependencies_fr_sequoia_fr_prompt_pos | ---
language:
- fr
license: lgpl
size_categories:
- 10K<n<100K
task_categories:
- token-classification
tags:
- pos
- DFP
- french prompts
annotations_creators:
- found
language_creators:
- found
multilinguality:
- monolingual
source_datasets:
- universal_dependencies_fr_sequoia
---
# universal_dependencies_fr_sequoia_fr_prompt_pos
## Summary
**universal_dependencies_fr_sequoia_fr_prompt_pos** is a subset of the [**Dataset of French Prompts (DFP)**](https://huggingface.co/datasets/CATIE-AQ/DFP).
It contains **27,804** rows that can be used for a part-of-speech task.
The original data (without prompts) comes from the dataset [universal_dependencies](https://huggingface.co/datasets/universal_dependencies) where only the French sequoia split has been kept.
A list of prompts (see below) was then applied in order to build the input and target columns and thus obtain the same format as the [xP3](https://huggingface.co/datasets/bigscience/xP3) dataset by Muennighoff et al.
## Prompts used
### List
21 prompts were created for this dataset. The logic applied consists in proposing prompts in the indicative tense, in the form of tutoiement and in the form of vouvoiement.
```
'Extraire les classes des mots du texte suivant : '+text,
'Extrais les classes des mots du texte suivant : '+text,
'Extrayez les classes des mots du texte suivant : '+text,
'Isoler les classes des mots du texte suivant : '+text,
'Isole les classes des mots du texte suivant : '+text,
'Isolez les classes des mots du texte suivant : '+text,
'Dégager les classes des mots dans le texte : '+text,
'Dégage les classes des mots dans le texte : '+text,
'Dégagez les classes des mots dans le texte : '+text,
'Générer les classes des mots issues du texte suivant : '+text,
'Génère les classes des mots issues du texte suivant : '+text,
'Générez les classes des mots issues du texte suivant : '+text,
'Trouver les classes des mots du texte : '+text,
'Trouve les classes des mots du texte : '+text,
'Trouvez les classes des mots du texte : '+text,
'Repérer les classes des mots présentes dans le texte suivant : '+text,
'Repère les classes des mots présentes dans le texte suivant : '+text,
'Repérez les classes des mots présentes dans le texte suivant : '+text,
'Indiquer les classes des mots du texte :'+text,
'Indique les classes des mots du texte : '+text,
'Indiquez les classes des mots du texte : '+text
```
### Features used in the prompts
In the prompt list above, `text` and `targets` have been constructed from:
```
fr_sequoia = load_dataset('universal_dependencies', 'fr_sequoia')
# text
fr_sequoia['train']['tokens'] = list(map(lambda i: ' '.join(fr_sequoia['train']['tokens'][i]), range(len(fr_sequoia['train']['tokens']))))
# targets
fr_sequoia['train']['upos'] = list(map(lambda x: x.replace("[","").replace("]","").replace('17','AUX').replace('16','VERB').replace('15','INTJ').replace('14','ADV').replace('13','_').replace('12','X').replace('11','PRON').replace('10','PROPN').replace('9','CCONJ').replace('8','DET').replace('7','PART').replace('6','ADJ').replace('5','SCONJ').replace('4','SYM').replace('3','NUM').replace('2','ADP').replace('1','PUNCT').replace('0','NOUN'), map(str,fr_sequoia['train']['upos'])))
```
# Splits
- `train` with 9,576 samples
- `valid` with 8,652 samples
- `test` with 9,576 samples
# How to use?
```
from datasets import load_dataset
dataset = load_dataset("CATIE-AQ/universal_dependencies_fr_sequoia_fr_prompt_pos")
```
# Citation
## Original data
> @inproceedings{candito:hal-00698938,
TITLE = {{Le corpus Sequoia : annotation syntaxique et exploitation pour l'adaptation d'analyseur par pont lexical}},
AUTHOR = {Candito, Marie and Seddah, Djam{\'e}},
URL = {https://inria.hal.science/hal-00698938},
BOOKTITLE = {{TALN 2012 - 19e conf{\'e}rence sur le Traitement Automatique des Langues Naturelles}},
ADDRESS = {Grenoble, France},
YEAR = {2012},
MONTH = Jun,
PDF = {https://inria.hal.science/hal-00698938/file/canditoseddah-taln2012-final.pdf},
HAL_ID = {hal-00698938},
HAL_VERSION = {v1},
}
## This Dataset
> @misc {centre_aquitain_des_technologies_de_l'information_et_electroniques_2023,
author = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
title = { DFP (Revision 1d24c09) },
year = 2023,
url = { https://huggingface.co/datasets/CATIE-AQ/DFP },
doi = { 10.57967/hf/1200 },
publisher = { Hugging Face }
}
## License
LGPL-LR |
Lakera/mosscap_prompt_injection | ---
license: mit
dataset_info:
features:
- name: level
dtype: string
- name: prompt
dtype: string
- name: answer
dtype: string
- name: raw_answer
dtype: string
splits:
- name: train
num_bytes: 136521220
num_examples: 223533
- name: validation
num_bytes: 17380225
num_examples: 27683
- name: test
num_bytes: 17009787
num_examples: 27729
download_size: 63785770
dataset_size: 170911232
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# mosscap_prompt_injection
<img src="https://grt.lakera.ai/robots/level7.jpg" width="500px" />
This is a dataset of prompt injections submitted to the game [Mosscap](https://grt.lakera.ai) by [Lakera](https://www.lakera.ai/).
This variant of the game [Gandalf](https://gandalf.lakera.ai) was created for DEF CON 31.
Note that the Mosscap levels may no longer be available in the future.
Note that we release every prompt that we received, regardless of whether it truly is a prompt injection or not.
There are hundrends of thousands of prompts and many of them are not actual prompt injections (people ask Mosscap all kinds of things).
## Data
Each row corresponds to a prompt that was sent to Mosscap. The dataset has the following columns:
- `level`: The level that the prompt was submitted to, as "Level {n}", where "n" is between 1 and 8.
The levels are the same as in standard Gandalf but with different passwords.
See [this blog](https://www.lakera.ai/blog/who-is-gandalf) for a description of what defenses are used in each level.
- `prompt`: The actual prompt that the user submitted.
- `answer`: The answer that was displayed to the user.
- `raw_answer`: The raw ChatGPT answer before any post-processing is applied. For example, in level 3, if the response contains the password,
Mosscap will display "🙅I was about to reveal the password, but then I remembered that I'm not allowed to do that." to the user.
`raw_answer` contains the original ChatGPT answer that would have spoiled the password.
In standard Gandalf, the passwords are uppercase English words, but in Mosscap, they can also contain special characters and be longer.
These factors make Mosscap more difficult than the original Gandalf.
## Mosscap and prompt injections
Who is Mosscap?
At DEF CON 2023, the AI Village is bringing together thousands of people from different communities to conduct the largest red teaming exercise ever for any group of AI models at the Generative Red Team (GRT) Challenge.
Mosscap is a spin-off of Lakera's popular game [Gandalf](https://gandalf.lakera.ai), re-emerged in new styles just in time for the Challenge.
The Generative AI Red Team Challenge design, including Mosscap, is inspired by the "Monk and Robot" series. Though it's a light-hearted and fun game, Mosscap illustrates an important type of LLM security issues: prompt injection.
## Citation
If you use this dataset in your research, please cite it as
```
@InProceedings{mosscap_prompt_injection,
title = {mosscap_prompt_injection},
author={Lakera AI (https://www.lakera.ai)},
year={2023}
}
```
## Licensing Information
mosscap_prompt_injection is distributed under the [MIT License](https://opensource.org/license/mit/).
|
DopeorNope/hermes_removed | ---
dataset_info:
features:
- name: system
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 62227630
num_examples: 55435
download_size: 35145931
dataset_size: 62227630
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bigscience-data/roots_indic-te_wikipedia | ---
language: te
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-te_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-70000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 666980
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
srmisa/elsalvador-context | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: title
dtype: string
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 24800833
num_examples: 3913
download_size: 14690698
dataset_size: 24800833
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713152548 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 5727
num_examples: 15
download_size: 10801
dataset_size: 5727
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713152548"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
one-sec-cv12/chunk_14 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 20150294112.75
num_examples: 209794
download_size: 17971519323
dataset_size: 20150294112.75
---
# Dataset Card for "chunk_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuhaotian/LLaVA-Pretrain | ---
license: other
language:
- en
pretty_name: LLaVA Pretrain
---
# LLaVA Visual Instruct Pretrain Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct Pretrain LCS-558K is a subset of LAION/CC/SBU dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic caption](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct CC3M Pretrain 595K was created in May 2023.
**Dataset structure:**
- `blip_laion_cc_sbu_558k.json` contains the multimodal synthesized conversation from the image-caption pairs, by adding randomly selected instructions like: "Describe this image". It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- `blip_laion_cc_sbu_558k_meta.json` contains the meta data of the image file name, image URL, synthetic BLIP caption.
- `images.zip` contains all raw images of the filtered subset from LAION/CC/SBU. Important notice: Upon the request from the community, as ~15% images of the original LAION/CC/SBU dataset are no longer accessible, we upload images.zip for better reproducing our work in research community. It should not be used for any other purpose. The use of these images must comply with the LAION/CC/SBU license. This may be taken down when requested by the original LAION/CC/SBU dataset owner or owners of the referenced images.
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Must comply with license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE), [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic caption).
CC-3M
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
open-llm-leaderboard/details_aboros98__motans1 | ---
pretty_name: Evaluation run of aboros98/motans1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [aboros98/motans1](https://huggingface.co/aboros98/motans1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aboros98__motans1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-31T11:57:55.423129](https://huggingface.co/datasets/open-llm-leaderboard/details_aboros98__motans1/blob/main/results_2024-03-31T11-57-55.423129.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5701164072320629,\n\
\ \"acc_stderr\": 0.0336543556199306,\n \"acc_norm\": 0.572478787137479,\n\
\ \"acc_norm_stderr\": 0.03434219645804154,\n \"mc1\": 0.31334149326805383,\n\
\ \"mc1_stderr\": 0.016238065069059608,\n \"mc2\": 0.46102693040489917,\n\
\ \"mc2_stderr\": 0.015119347356892852\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.560580204778157,\n \"acc_stderr\": 0.014503747823580122,\n\
\ \"acc_norm\": 0.5861774744027304,\n \"acc_norm_stderr\": 0.014392730009221007\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5494921330412268,\n\
\ \"acc_stderr\": 0.004965276587781622,\n \"acc_norm\": 0.7342162915753834,\n\
\ \"acc_norm_stderr\": 0.004408468107262731\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.42962962962962964,\n\
\ \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.42962962962962964,\n\
\ \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5657894736842105,\n \"acc_stderr\": 0.040335656678483205,\n\
\ \"acc_norm\": 0.5657894736842105,\n \"acc_norm_stderr\": 0.040335656678483205\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.6075471698113207,\n \"acc_stderr\": 0.03005258057955785,\n \
\ \"acc_norm\": 0.6075471698113207,\n \"acc_norm_stderr\": 0.03005258057955785\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6180555555555556,\n\
\ \"acc_stderr\": 0.040629907841466674,\n \"acc_norm\": 0.6180555555555556,\n\
\ \"acc_norm_stderr\": 0.040629907841466674\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n\
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5895953757225434,\n\
\ \"acc_stderr\": 0.03750757044895537,\n \"acc_norm\": 0.5895953757225434,\n\
\ \"acc_norm_stderr\": 0.03750757044895537\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.04488482852329017,\n\
\ \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.04488482852329017\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5148936170212766,\n \"acc_stderr\": 0.032671518489247764,\n\
\ \"acc_norm\": 0.5148936170212766,\n \"acc_norm_stderr\": 0.032671518489247764\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.37719298245614036,\n\
\ \"acc_stderr\": 0.04559522141958216,\n \"acc_norm\": 0.37719298245614036,\n\
\ \"acc_norm_stderr\": 0.04559522141958216\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4365079365079365,\n \"acc_stderr\": 0.025542846817400496,\n \"\
acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.025542846817400496\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.373015873015873,\n\
\ \"acc_stderr\": 0.04325506042017086,\n \"acc_norm\": 0.373015873015873,\n\
\ \"acc_norm_stderr\": 0.04325506042017086\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.6838709677419355,\n \"acc_stderr\": 0.02645087448904276,\n \"\
acc_norm\": 0.6838709677419355,\n \"acc_norm_stderr\": 0.02645087448904276\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.47783251231527096,\n \"acc_stderr\": 0.035145285621750094,\n \"\
acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.035145285621750094\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\"\
: 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6303030303030303,\n \"acc_stderr\": 0.03769430314512566,\n\
\ \"acc_norm\": 0.6303030303030303,\n \"acc_norm_stderr\": 0.03769430314512566\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7525252525252525,\n \"acc_stderr\": 0.030746300742124498,\n \"\
acc_norm\": 0.7525252525252525,\n \"acc_norm_stderr\": 0.030746300742124498\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7772020725388601,\n \"acc_stderr\": 0.03003114797764154,\n\
\ \"acc_norm\": 0.7772020725388601,\n \"acc_norm_stderr\": 0.03003114797764154\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5769230769230769,\n \"acc_stderr\": 0.025049197876042338,\n\
\ \"acc_norm\": 0.5769230769230769,\n \"acc_norm_stderr\": 0.025049197876042338\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2777777777777778,\n \"acc_stderr\": 0.02730914058823017,\n \
\ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.02730914058823017\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.03156663099215416,\n \
\ \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.03156663099215416\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7834862385321101,\n \"acc_stderr\": 0.017658710594443128,\n \"\
acc_norm\": 0.7834862385321101,\n \"acc_norm_stderr\": 0.017658710594443128\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6568627450980392,\n \"acc_stderr\": 0.033321399446680854,\n \"\
acc_norm\": 0.6568627450980392,\n \"acc_norm_stderr\": 0.033321399446680854\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7215189873417721,\n \"acc_stderr\": 0.029178682304842555,\n \
\ \"acc_norm\": 0.7215189873417721,\n \"acc_norm_stderr\": 0.029178682304842555\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6412556053811659,\n\
\ \"acc_stderr\": 0.032190792004199956,\n \"acc_norm\": 0.6412556053811659,\n\
\ \"acc_norm_stderr\": 0.032190792004199956\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7099236641221374,\n \"acc_stderr\": 0.03980066246467766,\n\
\ \"acc_norm\": 0.7099236641221374,\n \"acc_norm_stderr\": 0.03980066246467766\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7355371900826446,\n \"acc_stderr\": 0.040261875275912046,\n \"\
acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.040261875275912046\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.042365112580946315,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.042365112580946315\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.034089978868575295,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.034089978868575295\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n\
\ \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.811965811965812,\n\
\ \"acc_stderr\": 0.02559819368665227,\n \"acc_norm\": 0.811965811965812,\n\
\ \"acc_norm_stderr\": 0.02559819368665227\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6679438058748404,\n\
\ \"acc_stderr\": 0.016841174655295724,\n \"acc_norm\": 0.6679438058748404,\n\
\ \"acc_norm_stderr\": 0.016841174655295724\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.661849710982659,\n \"acc_stderr\": 0.02546977014940017,\n\
\ \"acc_norm\": 0.661849710982659,\n \"acc_norm_stderr\": 0.02546977014940017\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2223463687150838,\n\
\ \"acc_stderr\": 0.013907189208156881,\n \"acc_norm\": 0.2223463687150838,\n\
\ \"acc_norm_stderr\": 0.013907189208156881\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.02736359328468497,\n\
\ \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.02736359328468497\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6366559485530546,\n\
\ \"acc_stderr\": 0.02731684767419271,\n \"acc_norm\": 0.6366559485530546,\n\
\ \"acc_norm_stderr\": 0.02731684767419271\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6080246913580247,\n \"acc_stderr\": 0.02716368603827115,\n\
\ \"acc_norm\": 0.6080246913580247,\n \"acc_norm_stderr\": 0.02716368603827115\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.45390070921985815,\n \"acc_stderr\": 0.02970045324729148,\n \
\ \"acc_norm\": 0.45390070921985815,\n \"acc_norm_stderr\": 0.02970045324729148\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.408735332464146,\n\
\ \"acc_stderr\": 0.012555701346703382,\n \"acc_norm\": 0.408735332464146,\n\
\ \"acc_norm_stderr\": 0.012555701346703382\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4522058823529412,\n \"acc_stderr\": 0.03023375855159645,\n\
\ \"acc_norm\": 0.4522058823529412,\n \"acc_norm_stderr\": 0.03023375855159645\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5473856209150327,\n \"acc_stderr\": 0.020136790918492534,\n \
\ \"acc_norm\": 0.5473856209150327,\n \"acc_norm_stderr\": 0.020136790918492534\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6938775510204082,\n \"acc_stderr\": 0.029504896454595954,\n\
\ \"acc_norm\": 0.6938775510204082,\n \"acc_norm_stderr\": 0.029504896454595954\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7810945273631841,\n\
\ \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n\
\ \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165065,\n \
\ \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.042295258468165065\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4759036144578313,\n\
\ \"acc_stderr\": 0.038879718495972646,\n \"acc_norm\": 0.4759036144578313,\n\
\ \"acc_norm_stderr\": 0.038879718495972646\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6783625730994152,\n \"acc_stderr\": 0.03582529442573122,\n\
\ \"acc_norm\": 0.6783625730994152,\n \"acc_norm_stderr\": 0.03582529442573122\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31334149326805383,\n\
\ \"mc1_stderr\": 0.016238065069059608,\n \"mc2\": 0.46102693040489917,\n\
\ \"mc2_stderr\": 0.015119347356892852\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7411207576953434,\n \"acc_stderr\": 0.012310515810993369\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4700530705079606,\n \
\ \"acc_stderr\": 0.013747759685444704\n }\n}\n```"
repo_url: https://huggingface.co/aboros98/motans1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|arc:challenge|25_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|gsm8k|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hellaswag|10_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T11-57-55.423129.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-31T11-57-55.423129.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- '**/details_harness|winogrande|5_2024-03-31T11-57-55.423129.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-31T11-57-55.423129.parquet'
- config_name: results
data_files:
- split: 2024_03_31T11_57_55.423129
path:
- results_2024-03-31T11-57-55.423129.parquet
- split: latest
path:
- results_2024-03-31T11-57-55.423129.parquet
---
# Dataset Card for Evaluation run of aboros98/motans1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [aboros98/motans1](https://huggingface.co/aboros98/motans1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_aboros98__motans1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-31T11:57:55.423129](https://huggingface.co/datasets/open-llm-leaderboard/details_aboros98__motans1/blob/main/results_2024-03-31T11-57-55.423129.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5701164072320629,
"acc_stderr": 0.0336543556199306,
"acc_norm": 0.572478787137479,
"acc_norm_stderr": 0.03434219645804154,
"mc1": 0.31334149326805383,
"mc1_stderr": 0.016238065069059608,
"mc2": 0.46102693040489917,
"mc2_stderr": 0.015119347356892852
},
"harness|arc:challenge|25": {
"acc": 0.560580204778157,
"acc_stderr": 0.014503747823580122,
"acc_norm": 0.5861774744027304,
"acc_norm_stderr": 0.014392730009221007
},
"harness|hellaswag|10": {
"acc": 0.5494921330412268,
"acc_stderr": 0.004965276587781622,
"acc_norm": 0.7342162915753834,
"acc_norm_stderr": 0.004408468107262731
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.42962962962962964,
"acc_stderr": 0.04276349494376599,
"acc_norm": 0.42962962962962964,
"acc_norm_stderr": 0.04276349494376599
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5657894736842105,
"acc_stderr": 0.040335656678483205,
"acc_norm": 0.5657894736842105,
"acc_norm_stderr": 0.040335656678483205
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6075471698113207,
"acc_stderr": 0.03005258057955785,
"acc_norm": 0.6075471698113207,
"acc_norm_stderr": 0.03005258057955785
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6180555555555556,
"acc_stderr": 0.040629907841466674,
"acc_norm": 0.6180555555555556,
"acc_norm_stderr": 0.040629907841466674
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5895953757225434,
"acc_stderr": 0.03750757044895537,
"acc_norm": 0.5895953757225434,
"acc_norm_stderr": 0.03750757044895537
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.04488482852329017,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.04488482852329017
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5148936170212766,
"acc_stderr": 0.032671518489247764,
"acc_norm": 0.5148936170212766,
"acc_norm_stderr": 0.032671518489247764
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.37719298245614036,
"acc_stderr": 0.04559522141958216,
"acc_norm": 0.37719298245614036,
"acc_norm_stderr": 0.04559522141958216
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.025542846817400496,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.025542846817400496
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.04325506042017086,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.04325506042017086
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6838709677419355,
"acc_stderr": 0.02645087448904276,
"acc_norm": 0.6838709677419355,
"acc_norm_stderr": 0.02645087448904276
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.035145285621750094,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.035145285621750094
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6303030303030303,
"acc_stderr": 0.03769430314512566,
"acc_norm": 0.6303030303030303,
"acc_norm_stderr": 0.03769430314512566
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7525252525252525,
"acc_stderr": 0.030746300742124498,
"acc_norm": 0.7525252525252525,
"acc_norm_stderr": 0.030746300742124498
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7772020725388601,
"acc_stderr": 0.03003114797764154,
"acc_norm": 0.7772020725388601,
"acc_norm_stderr": 0.03003114797764154
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5769230769230769,
"acc_stderr": 0.025049197876042338,
"acc_norm": 0.5769230769230769,
"acc_norm_stderr": 0.025049197876042338
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.02730914058823017,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.02730914058823017
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.03156663099215416,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.03156663099215416
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7834862385321101,
"acc_stderr": 0.017658710594443128,
"acc_norm": 0.7834862385321101,
"acc_norm_stderr": 0.017658710594443128
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6568627450980392,
"acc_stderr": 0.033321399446680854,
"acc_norm": 0.6568627450980392,
"acc_norm_stderr": 0.033321399446680854
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7215189873417721,
"acc_stderr": 0.029178682304842555,
"acc_norm": 0.7215189873417721,
"acc_norm_stderr": 0.029178682304842555
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6412556053811659,
"acc_stderr": 0.032190792004199956,
"acc_norm": 0.6412556053811659,
"acc_norm_stderr": 0.032190792004199956
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7099236641221374,
"acc_stderr": 0.03980066246467766,
"acc_norm": 0.7099236641221374,
"acc_norm_stderr": 0.03980066246467766
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7355371900826446,
"acc_stderr": 0.040261875275912046,
"acc_norm": 0.7355371900826446,
"acc_norm_stderr": 0.040261875275912046
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.042365112580946315,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.042365112580946315
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.034089978868575295,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.034089978868575295
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.811965811965812,
"acc_stderr": 0.02559819368665227,
"acc_norm": 0.811965811965812,
"acc_norm_stderr": 0.02559819368665227
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6679438058748404,
"acc_stderr": 0.016841174655295724,
"acc_norm": 0.6679438058748404,
"acc_norm_stderr": 0.016841174655295724
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.661849710982659,
"acc_stderr": 0.02546977014940017,
"acc_norm": 0.661849710982659,
"acc_norm_stderr": 0.02546977014940017
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2223463687150838,
"acc_stderr": 0.013907189208156881,
"acc_norm": 0.2223463687150838,
"acc_norm_stderr": 0.013907189208156881
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.02736359328468497,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.02736359328468497
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6366559485530546,
"acc_stderr": 0.02731684767419271,
"acc_norm": 0.6366559485530546,
"acc_norm_stderr": 0.02731684767419271
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6080246913580247,
"acc_stderr": 0.02716368603827115,
"acc_norm": 0.6080246913580247,
"acc_norm_stderr": 0.02716368603827115
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.45390070921985815,
"acc_stderr": 0.02970045324729148,
"acc_norm": 0.45390070921985815,
"acc_norm_stderr": 0.02970045324729148
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.408735332464146,
"acc_stderr": 0.012555701346703382,
"acc_norm": 0.408735332464146,
"acc_norm_stderr": 0.012555701346703382
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4522058823529412,
"acc_stderr": 0.03023375855159645,
"acc_norm": 0.4522058823529412,
"acc_norm_stderr": 0.03023375855159645
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5473856209150327,
"acc_stderr": 0.020136790918492534,
"acc_norm": 0.5473856209150327,
"acc_norm_stderr": 0.020136790918492534
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6938775510204082,
"acc_stderr": 0.029504896454595954,
"acc_norm": 0.6938775510204082,
"acc_norm_stderr": 0.029504896454595954
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7810945273631841,
"acc_stderr": 0.029239174636647,
"acc_norm": 0.7810945273631841,
"acc_norm_stderr": 0.029239174636647
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4759036144578313,
"acc_stderr": 0.038879718495972646,
"acc_norm": 0.4759036144578313,
"acc_norm_stderr": 0.038879718495972646
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6783625730994152,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.6783625730994152,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31334149326805383,
"mc1_stderr": 0.016238065069059608,
"mc2": 0.46102693040489917,
"mc2_stderr": 0.015119347356892852
},
"harness|winogrande|5": {
"acc": 0.7411207576953434,
"acc_stderr": 0.012310515810993369
},
"harness|gsm8k|5": {
"acc": 0.4700530705079606,
"acc_stderr": 0.013747759685444704
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Francesco/corrosion-bi3q3 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': corrosion-0
'1': Slippage
'2': corrosion
'3': crack
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: corrosion-bi3q3
tags:
- rf100
---
# Dataset Card for corrosion-bi3q3
** The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/corrosion-bi3q3
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
corrosion-bi3q3
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/corrosion-bi3q3
### Citation Information
```
@misc{ corrosion-bi3q3,
title = { corrosion bi3q3 Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/corrosion-bi3q3 } },
url = { https://universe.roboflow.com/object-detection/corrosion-bi3q3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
tog/dolphin_5k_test | ---
language:
- en
license: apache-2.0
task_categories:
- text-generation
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8726321.400179625
num_examples: 5000
download_size: 4973800
dataset_size: 8726321.400179625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Tiny Dolphin 🐬
see https://erichartford.com/dolphin
## Dataset details
This dataset is an extract of ~1 million of FLANv2 augmented with GPT-4 completions (flan1m-alpaca-uncensored.jsonl). It is derived from this [dataset](https://huggingface.co/datasets/ehartford/dolphin)
### Loading
```python
dataset = load_dataset("tog/dolphin_5k_test)
```
This dataset is licensed apache-2.0 for commercial or non-commercial use. |
nasa-cisto-data-science-group/senegal-lcluc-tutorial | ---
license: apache-2.0
---
|
KagglingFace/nnUNetPlans_3d_lowres_KiTS19 | ---
license: mit
---
Please cite the following paper when using nnU-Net:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211. |
crime_and_punish | ---
language:
- en
paperswithcode_id: null
pretty_name: CrimeAndPunish
dataset_info:
features:
- name: line
dtype: string
splits:
- name: train
num_bytes: 1270540
num_examples: 21969
download_size: 1201735
dataset_size: 1270540
---
# Dataset Card for "crime_and_punish"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.gutenberg.org/files/2554/2554-h/2554-h.htm](https://www.gutenberg.org/files/2554/2554-h/2554-h.htm)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.21 MB
- **Size of the generated dataset:** 1.27 MB
- **Total amount of disk used:** 2.47 MB
### Dataset Summary
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### crime-and-punish
- **Size of downloaded dataset files:** 1.21 MB
- **Size of the generated dataset:** 1.27 MB
- **Total amount of disk used:** 2.47 MB
An example of 'train' looks as follows.
```
{
"line": "CRIME AND PUNISHMENT\n"
}
```
### Data Fields
The data fields are the same among all splits.
#### crime-and-punish
- `line`: a `string` feature.
### Data Splits
| name |train|
|----------------|----:|
|crime-and-punish|21969|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
yuan-sf63/word_label_0.5_64_D | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
splits:
- name: train
num_bytes: 44319033.9
num_examples: 71802
- name: validation
num_bytes: 4924337.1
num_examples: 7978
download_size: 8494767
dataset_size: 49243371.0
---
# Dataset Card for "word_label_0.5_64_D"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
elifftosunn/bank-dataset | ---
license: mit
---
|
Nexdata/Multi-angle_Lip_Multimodal_Video_Data | ---
language:
- zh
---
# Dataset Card for Nexdata/Multi-angle_Lip_Multimodal_Video_Data
## Description
202 People - Multi-angle Lip Multimodal Video Data. The collection environments include indoor natural light scenes and indoor fluorescent lamp scenes. The device is cellphone. The diversity includes multiple scenes, different ages, 13 shooting angles. The language is Mandarin Chinese. The recording content is general field, unlimited content. The data can be used in multi-modal learning algorithms research in speech and image fields.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1298?source=Huggingface
# Specifications
## Data size
202 people, each person collects the audio and video data from 13 different angles +1 txt document
## People distribution
race distribution: Asian (Indonesia), gender distribution: 89 males, 113 females, age distribution: 165 people aged 18-30, 32 people aged 31-45, and 5 people aged 46-60
## Collecting environment
indoor natural light scenes, indoor fluorescent lamp scenes
## Data diversity
including multiple scenes, different ages, different shooting angles
## Device
cellphone, the resolution is 1,920*1,080
## Collecting angle
audio and video data of front face, 3 angles left side face, 3 angles right side face, looking down, looking up, left side face down, right side face down, left side face up and right side face up all 13 different angles were collected at the same time
## Recording content
general field, unlimited content
## Language
Mandarin Chinese, each video is more than 20 seconds
## Data format
the video data format is .mp4, the audio is greater than or equal to 16KHz, 16bit, the frame rate is 25-30 fps
## Accuracy rata
the accuracy rate of sentence is more than 95%
# Licensing Information
Commercial License |
streamerbtw1002/stringtheory-163KB | ---
license: apache-2.0
language:
- en
size_categories:
- 10K<n<100K
---
This dataset is fully made with AI.
The AI gets information in a PDF, and then create questions and the answers in lists.
```json
[
{
"question": "What is 1+2",
"answer": "1+2 is equals to 3."
},
...
]
``` |
managgiaate/gergefrgyuerf | ---
license: openrail
---
|
freshpearYoon/v3_train_free_concat_31 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3842451168
num_examples: 2500
download_size: 1751540522
dataset_size: 3842451168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
laion/strategic_game_chess | ---
tags:
- game
pretty_name: The Chess Dataset
license: cc-by-4.0
---
# Chess
> Recent advancements in artificial intelligence (AI) underscore the progress of reasoning and planning shown by recent generalist machine learning (ML) models. The progress can be boosted by datasets that can further boost these generic capabilities when used for training foundation models of various kind. This research initiative has generated extensive synthetic datasets from complex games — chess, Rubik's Cube, and mazes — to study facilitation and the advancement of these critical generic skills in AI models.
This dataset contains 3.2 billion games, equating to approximately 608 billion individual moves.
it is generated through self-play by Stockfish engine using Fugaku and we add initial moves to expand its diversity.
Each game has three columns: 'Moves', 'Termination' and 'Result',
- 'Move': recorded chess moves of the whole game.
- 'Termination': include CHECKMATE, INSUFFICIENT_MATERIAL, ... etc.
- Please check this for detail information
https://python-chess.readthedocs.io/en/latest/core.html#chess.Outcome.termination
- 'Result': result of this game, 1-0, 1/2-1/2, 0-1.
### Call for Collaboration
We invite interested researchers and ML practitioners to explore these datasets' potential. Whether training GPT models from scratch or fine-tuning pre-existing models, we encourage the exploration of various pre-training and fine-tuning strategies using these game-based datasets standalone or as enhancement of other already composed large-scale data.
Our team is prepared to assist in securing necessary GPU resources for these explorations. We are particularly interested in collaborators eager to pre-train models of small to medium scale on our game data, subsequently transition to standard text-based training, and then perform comparative analyses against models of similar architecture trained exclusively on text data.
Conclusively, this initiative marks a significant stride toward intricate problem-solving and strategic planning in AI, extending an open invitation to the research community for collaborative advancement in this domain. |
autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-3fbf83bf-11925597 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
metrics: []
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
Weni/LLM_Base_2.0.3_SFT_negative_reduction_negative_response_variation | ---
language:
- pt
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: question
dtype: string
- name: answear
dtype: string
- name: context
dtype: string
- name: correct_ans
dtype: int64
- name: language
dtype: string
splits:
- name: pt
num_bytes: 16906927
num_examples: 8505
- name: en
num_bytes: 15790178
num_examples: 8387
- name: es
num_bytes: 15857946
num_examples: 8048
download_size: 16929932
dataset_size: 48555051
configs:
- config_name: default
data_files:
- split: pt
path: data/pt-*
- split: en
path: data/en-*
- split: es
path: data/es-*
---
|
KETI-AIR/kor_ai2_arc | ---
license: cc-by-sa-4.0
configs:
- config_name: ARC-Challenge
data_files:
- split: train
path: ARC-Challenge/train-*
- split: validation
path: ARC-Challenge/validation-*
- split: test
path: ARC-Challenge/test-*
- config_name: ARC-Easy
data_files:
- split: train
path: ARC-Easy/train-*
- split: validation
path: ARC-Easy/validation-*
- split: test
path: ARC-Easy/test-*
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
- config_name: ARC-Challenge
features:
- name: data_index_by_user
dtype: int32
- name: id
dtype: string
- name: question
dtype: string
- name: choices
struct:
- name: text
sequence: string
- name: label
sequence: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 396164
num_examples: 1119
- name: validation
num_bytes: 108314
num_examples: 299
- name: test
num_bytes: 425252
num_examples: 1172
download_size: 516331
dataset_size: 929730
- config_name: ARC-Easy
features:
- name: data_index_by_user
dtype: int32
- name: id
dtype: string
- name: question
dtype: string
- name: choices
struct:
- name: text
sequence: string
- name: label
sequence: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 694289
num_examples: 2251
- name: validation
num_bytes: 175983
num_examples: 570
- name: test
num_bytes: 735067
num_examples: 2376
download_size: 861121
dataset_size: 1605339
- config_name: default
features:
- name: data_index_by_user
dtype: int32
- name: id
dtype: string
- name: question
dtype: string
- name: choices
struct:
- name: text
sequence: string
- name: label
sequence: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 694289
num_examples: 2251
- name: validation
num_bytes: 175983
num_examples: 570
- name: test
num_bytes: 735067
num_examples: 2376
download_size: 861121
dataset_size: 1605339
---
# Dataset Card for "kor_ai2_arc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Source Data Citation Information
```
@article{allenai:arc,
author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and
Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
journal = {arXiv:1803.05457v1},
year = {2018},
}
``` |
strombergnlp/danfever | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- natural-language-inference
paperswithcode_id: danfever
pretty_name: DanFEVER
tags:
- knowledge-verification
---
# Dataset Card for DanFEVER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/danfever](https://github.com/StrombergNLP/danfever)
- **Repository:** [https://stromberg.ai/publication/danfever/](https://stromberg.ai/publication/danfever/)
- **Paper:** [https://aclanthology.org/2021.nodalida-main.47/](https://aclanthology.org/2021.nodalida-main.47/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Leon Derczynski](mailto:leod@itu.dk)
- **Size of downloaded dataset files:** 2.82 MiB
- **Size of the generated dataset:** 2.80 MiB
- **Total amount of disk used:** 5.62 MiB
### Dataset Summary
We present a dataset, DanFEVER, intended for multilingual misinformation research. The dataset is in Danish and has the same format as the well-known English FEVER dataset. It can be used for testing methods in multilingual settings, as well as for creating models in production for the Danish language.
### Supported Tasks and Leaderboards
This dataset supports the FEVER task, but in Danish.
* PwC leaderboard: [Fact Verification on DanFEVER](https://paperswithcode.com/sota/fact-verification-on-danfever)
### Languages
This dataset is in Danish; the bcp47 is `da_DK`.
## Dataset Structure
### Data Instances
```
{
'id': '0',
'claim': 'Den 31. oktober 1920 opdagede Walter Baade kometen (944) Hidalgo i det ydre solsystem.',
'label': 0,
'evidence_extract': '(944) Hidalgo (oprindeligt midlertidigt navn: 1920 HZ) er en mørk småplanet med en diameter på ca. 50 km, der befinder sig i det ydre solsystem. Objektet blev opdaget den 31. oktober 1920 af Walter Baade. En asteroide (småplanet, planetoide) er et fast himmellegeme, hvis bane går rundt om Solen (eller en anden stjerne). Pr. 5. maj 2017 kendes mere end 729.626 asteroider og de fleste befinder sig i asteroidebæltet mellem Mars og Jupiter.',
'verifiable': 1,
'evidence': 'wiki_26366, wiki_12289',
'original_id': '1'
}
```
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
A dump of the Danish Wikipedia of 13 February 2020 was stored as well as the relevant articles from Den Store Danske (excerpts only, to comply with copyright laws). Two teams of two people independently sampled evidence, and created and annotated claims from these two sites.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
The source language is from Wikipedia contributors editors and from dictionary contributors and editors.
### Annotations
#### Annotation process
Detailed in [this paper](http://www.derczynski.com/papers/danfever.pdf).
#### Who are the annotators?
The annotators are native Danish speakers and masters students of IT; two female, two male, ages 25-35.
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to enable construction of fact-checking systems in Danish. A system that succeeds at this may be able to identify questionable conclusions or inferences.
### Discussion of Biases
The data is drawn from relatively formal topics, and so may perform poorly outside these areas.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The data here is licensed CC-BY 4.0. If you use this data, you MUST state its origin.
### Citation Information
Refer to this work as:
> Nørregaard and Derczynski (2021). "DanFEVER: claim verification dataset for Danish", Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa).
Bibliographic reference:
````
@inproceedings{norregaard-derczynski-2021-danfever,
title = "{D}an{FEVER}: claim verification dataset for {D}anish",
author = "N{\o}rregaard, Jeppe and Derczynski, Leon",
booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
year = "2021",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
url = "https://aclanthology.org/2021.nodalida-main.47",
pages = "422--428"
}
```
|
MarkK/spongebob_transcripts | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
- conversational
language:
- en
size_categories:
- 10K<n<100K
tags:
- cartoons
---
## <h1>Spongebob Transcripts Dataset 🧽</h1>
The Spongebob Transcripts Dataset is a collection of transcripts from the beloved animated television series, Spongebob Squarepants. This dataset includes information on each line of dialogue spoken by a character, including the character's name, their replica, and the episode ID.
The number of characters in the dataset: **84**
Total number of words in the dataset: **~80,800 words**, **~4000 rows**, **Updated to full Season 1**
## <h3>Dataset Overview 📊</h3>
|Column | Description |
|------------|-------------------------------------|
|**Speaker** | The character speaking the dialogue.|
|**Replica** | The line of dialogue spoken. |
|**EP_ID** | The episode ID of the transcript. |
## <h3>System Replicas🔍</h3>
The system replicas describe the actions and events that occur in each episode. These replicas are written in a specific format, using brackets to indicate actions and events.
**<h5>Replica Format</h5>**
`{system} : [The episode opens with a bubble transition, and we see a coral reef under the sea. The camera zooms to initiate parallax scrolling, which reveals the city of Bikini Bottom. It continues zooming to show a brown rock, a Moai head, and a pineapple, which each contain inhabitants.]`
## <h3>Sample Data 💬</h3>
|Speaker |Replica |EP_ID |
|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------|-------|
|**Spongebob** | I just met this girl. She wears a hat full of... air. |s1e3_22|
|**Patrick** | Do you mean she puts on "airs"? |s1e3_23|
|**Spongebob** | I guess so. |s1e3_24|
|**Patrick** | That's just fancy talk. If you wanna be fancy, hold your pinky up like this. The higher you hold it, the fancier you are. |s1e3_25|
## <h3>📊 Interactions with Dataset</h3>
**<h5>Using Pandas to filter rows</h5>**
1. To find all rows with a specific ep_id, you can use the following code:
```
import pandas as pd
#Read the CSV file into a Pandas DataFrame
df = pd.read_csv('dataset.csv')
#Define the ep_id you want to filter by
ep_id = 's1e2'
#Filter the DataFrame to get rows with an ep_id that starts with the defined ep_id
filtered_df = df[df['ep_id'].str.startswith(ep_id)]
#Print the filtered DataFrame
print(filtered_df)
```
2. To find rows where a specific character says a specific word or phrase, you can use the following code:
```
#Filter the DataFrame to get rows where a specific character says a specific word or phrase
speaker = 'SpongeBob'
word_or_phrase = 'jellyfish'
filtered_df = df[df['speaker'] == speaker]
filtered_df = filtered_df[filtered_df['replica'].str.contains(word_or_phrase)]
#Print the filtered DataFrame
print(filtered_df)
```
You can replace `SpongeBob` and `jellyfish` with any other speaker and word/phrase that you want to filter by.
## <h3>Data Sources 📝</h3>
The transcripts were sourced *Encyclopedia SpongeBobia*.
## <h3>Potential Uses 🧐</h3>
This Dataset could be used for a variety of natural language processing (NLP) tasks, including dialogue generation. It could also be used for educational purposes, such as studying the language and communication styles of different characters. |
prakash48/autotrain-data-bhaav-sentiment | ---
language:
- en
task_categories:
- text-classification
---
# AutoTrain Dataset for project: bhaav-sentiment
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bhaav-sentiment.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u0914\u0930 \u0926\u094b\u0928\u094b\u0902 \u091f\u0940\u0932\u0947 \u0915\u0947 \u0905\u0932\u0917 \u0905\u0932\u0917 \u0915\u094b\u0928\u0947 \u092e\u0947\u0902 \u091c\u093e \u092a\u0939\u0941\u0902\u091a\u0947",
"target": 3
},
{
"text": "\u0909\u0938\u0915\u0947 \u092e\u0941\u0901\u0939 \u0938\u0947 \u090f\u0915 \u091a\u0940\u0916 \u0928\u093f\u0915\u0932 \u0917\u092f\u0940",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1', '2', '3', '4'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 16241 |
| valid | 4063 |
|
marcobrando/accent_id | ---
license: unknown
---
|
elliotthwang/guanaco-llama2-chinese-1ka | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1348677
num_examples: 1000
download_size: 811412
dataset_size: 1348677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-chinese-1ka"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shoondyu/AI_EarthHack | ---
license: apache-2.0
---
|
Arnaldo34/voice1 | ---
license: openrail
---
|
patched-codes/static-analysis-eval | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: source
dtype: string
- name: file_name
dtype: string
- name: cwe
dtype: string
splits:
- name: train
num_bytes: 87854
num_examples: 76
download_size: 53832
dataset_size: 87854
---
# Dataset Card for "static-analysis-eval"
A dataset of 76 Python programs taken from real Python open source projects (top 1000 on GitHub),
where each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep). |
luyunlll/pp1 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1520321640
num_examples: 3000
- name: test
num_bytes: 382341867
num_examples: 750
download_size: 452124174
dataset_size: 1902663507
---
# Dataset Card for "pp1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alexandrainst/scandi-reddit | ---
pretty_name: ScandiReddit
language:
- da
- sv
- no
- is
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
---
# Dataset Card for ScandiReddit
## Dataset Description
- **Repository:** <https://github.com/alexandrainst/ScandiReddit>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 2341 MB
- **Size of the generated dataset:** 3594 MB
- **Total amount of disk used:** 5935 MB
### Dataset Summary
ScandiReddit is a filtered and post-processed corpus consisting of comments from [Reddit](https://reddit.com/).
All Reddit comments from December 2005 up until October 2022 were downloaded through [PushShift](https://files.pushshift.io/reddit/comments/), after which these were filtered based on the FastText language detection model. Any comment which was classified as Danish (`da`), Norwegian (`no`), Swedish (`sv`) or Icelandic (`is`) with a confidence score above 70% was kept.
The resulting comments were then deduplicated, removing roughly 438,000 comments. 5,000 comments written by Reddit bots were removed, and roughly 189,000 comments belonging to inappropriate subreddits (explicit and drug-related) were also removed.
Lastly, we remove roughly 40,000 near-duplicate comments from the resulting corpus, where near-duplicate here means that the comments have more than 80% of their word 5-grams in common.
### Supported Tasks and Leaderboards
Training language models is the intended task for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian (`no`) and Icelandic (`is`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 2341 MB
- **Size of the generated dataset:** 3594 MB
- **Total amount of disk used:** 5935 MB
An example from the dataset looks as follows.
```
{
'doc': 'Bergen er ødelagt. Det er ikke moro mer.',
'subreddit': 'Norway',
'language': 'da',
'language_confidence': 0.7472341656684875
}
```
### Data Fields
The data fields are the same among all splits.
- `doc`: a `string` feature.
- `subreddit`: a `string` feature.
- `language`: a `string` feature.
- `language_confidence`: a `float64` feature.
### Language Distribution
| name | count |
|----------|---------:|
| sv | 6,967,420 |
| da | 4,965,195 |
| no | 1,340,470 |
| is | 206,689 |
| total | 13,479,774 |
### Top-50 Subreddit Distribution
| name | count |
|----------|--------:|
|sweden |4,881,483|
|Denmark |3,579,178|
|norge |1,281,655|
|svenskpolitik | 771,960|
|InfluencergossipDK | 649,910|
|swedishproblems | 339,683|
|Iceland | 183,488|
|dkfinance | 113,860|
|unket | 81,077|
|DanishEnts | 69,055|
|dankmark | 62,928|
|swedents | 58,576|
|scandinavia | 57,136|
|Allsvenskan | 56,006|
|Gothenburg | 54,395|
|stockholm | 51,016|
|ISKbets | 47,944|
|Sverige | 39,552|
|SWARJE | 34,691|
|GossipDK | 29,332|
|NorskFotball | 28,571|
|Superligaen | 23,641|
|Aarhus | 22,516|
|Svenska | 20,561|
|newsdk | 19,893|
|AskReddit | 16,672|
|copenhagen | 16,668|
|okpolarncp | 16,583|
|SwedditUniversalis | 15,990|
|Sveriges_politik | 15,058|
|intresseklubben | 13,246|
|Aktiemarknaden | 13,202|
|soccer | 12,637|
|teenagers | 10,845|
|Norway | 10,680|
|europe | 10,247|
|Matinbum | 9,792|
|oslo | 9,650|
|iksdagen | 9,232|
|Asksweddit | 8,851|
|Forsvaret | 8,641|
|Sverigesforsvarsmakt | 8,469|
|memes | 8,299|
|Danish | 8,268|
|DANMAG | 8,214|
|PewdiepieSubmissions | 7,800|
|sweddpolitik | 7,646|
|pinsamt | 7,318|
|arbetarrorelsen | 7,317|
|Ishockey | 6,824|
## Dataset Creation
### Curation Rationale
The Scandinavian languages do not have many open source social media datasets.
### Source Data
The raw Reddit data was collected through [PushShift](https://files.pushshift.io/reddit/comments/).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY 4.0
license](https://creativecommons.org/licenses/by/4.0/).
|
ateffal/softskills | ---
license: mit
---
This dataset contains paragraphs tagged as relevant to soft skills or not. |
result-kand2-sdxl-wuerst-karlo/cfc9bbcd | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 187
num_examples: 10
download_size: 1339
dataset_size: 187
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cfc9bbcd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_CommonGen | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: concept_set_id
dtype: int32
- name: concepts
list: string
- name: target
dtype: string
- name: references
list: string
- name: joined_concepts
dtype: string
splits:
- name: train
num_bytes: 12780999
num_examples: 67389
- name: validation
num_bytes: 440794
num_examples: 993
- name: test
num_bytes: 214190
num_examples: 1497
- name: train_dedup
num_bytes: 6018136
num_examples: 32651
download_size: 8248320
dataset_size: 19454119
---
# Dataset Card for "UDR_CommonGen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amitness/logits-italian-128 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: teacher_logits
sequence:
sequence: float64
- name: teacher_indices
sequence:
sequence: int64
- name: teacher_mask_indices
sequence: int64
splits:
- name: train
num_bytes: 37616201036
num_examples: 8305825
download_size: 16084893126
dataset_size: 37616201036
---
# Dataset Card for "logits-italian-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
markytools/goosegmv3 | ---
license: mit
---
goosynthtrainsegm.zip ---> gaze on object (goo) synthetic train set segmentation data (.npy files)
goosynthtestsegm.zip ---> gaze on object (goo) synthetic test set segmentation data (.npy files)
goorealtestsegm.zip ---> gaze on object (goo) real test set segmentation data (.npy files) |
Tural/stanford_alpaca | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 19000112
num_examples: 52002
download_size: 11986667
dataset_size: 19000112
---
# Dataset Card for "stanford_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/feena_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of feena (Fire Emblem)
This is the dataset of feena (Fire Emblem), containing 30 images and their tags.
The core tags of this character are `long_hair, pink_hair, bow, pink_eyes, ponytail, hair_bow, breasts, very_long_hair, side_ponytail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 30 | 37.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/feena_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 30 | 23.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/feena_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 69 | 45.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/feena_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 30 | 35.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/feena_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 69 | 60.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/feena_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/feena_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, barefoot, full_body, jewelry, leg_up, short_sleeves, solo, holding_sword, open_mouth, short_dress, simple_background, bangs, shiny_hair, toes, white_background, company_name, copyright_name, grey_background, one_eye_closed, smile, sparkle, torn_clothes, transparent_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | barefoot | full_body | jewelry | leg_up | short_sleeves | solo | holding_sword | open_mouth | short_dress | simple_background | bangs | shiny_hair | toes | white_background | company_name | copyright_name | grey_background | one_eye_closed | smile | sparkle | torn_clothes | transparent_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:------------|:----------|:---------|:----------------|:-------|:----------------|:-------------|:--------------|:--------------------|:--------|:-------------|:-------|:-------------------|:---------------|:-----------------|:------------------|:-----------------|:--------|:----------|:---------------|:-------------------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
AdapterOcean/python3-standardized_cluster_5_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 12084405
num_examples: 7729
download_size: 0
dataset_size: 12084405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_cluster_5_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kuotient/Verified-Camel-KO | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- ko
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
pretty_name: Verified-Camel-KO
size_categories:
- n<1K
---
## Verified-Camel-KO
이 데이터셋은 https://huggingface.co/datasets/LDJnr/Verified-Camel 의 한국어 번역입니다.
GPT4 Turbo로 번역한 뒤, 약간의 수정을 거쳤습니다.
이 데이터에 대한 방침은 전부 원 저자의 방침을 따릅니다.
## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon!
- Comprised of over 100 highly filtered and curated examples from specific portions of CamelAI stem datasets.
- These examples are verified to be true by experts in the specific related field, with atleast a bachelors degree in the subject.
- Roughly 30-40% of the originally curated data from CamelAI was found to have atleast minor errors and/or incoherent questions(as determined by experts in said field)
## Purpose?
- This dataset is not intended to be trained on by itself(besides perhaps interesting research purposes) however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!
## Quality filtering and cleaning.
- Extensive cleaning was done to make sure there is no possible instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021"
- This was done for the initial curation due to the responses being originally created by GPT-4.
## Future Plans & How you can help!
This is a relatively early build amongst the grand plans for the future of what I plan to work on!
In the near future we plan on leveraging the help of even more domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.
If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! |
BangumiBase/gosick | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Gosick
This is the image base of bangumi Gosick, we detected 25 characters, 2356 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 98 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 36 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 167 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 92 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 29 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 24 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 24 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 100 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 20 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 16 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 16 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 10 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 10 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 28 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 762 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 13 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 12 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 18 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 10 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 45 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 535 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 11 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 27 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 32 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 221 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
CyberHarem/arlecchino_genshin | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Arlecchino/アルレッキーノ/아를레키노/僕人/召使/阿蕾奇诺/Arlebina (Genshin Impact)
This is the dataset of Arlecchino/アルレッキーノ/아를레키노/僕人/召使/阿蕾奇诺/Arlebina (Genshin Impact), containing 500 images and their tags.
The core tags of this character are `multicolored_hair, black_hair, bangs, white_hair, symbol-shaped_pupils, streaked_hair, hair_between_eyes, x-shaped_pupils, short_hair, black_eyes, red_pupils, breasts, two-tone_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 1.06 GiB | [Download](https://huggingface.co/datasets/CyberHarem/arlecchino_genshin/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 484.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arlecchino_genshin/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1201 | 1014.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arlecchino_genshin/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 885.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/arlecchino_genshin/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1201 | 1.67 GiB | [Download](https://huggingface.co/datasets/CyberHarem/arlecchino_genshin/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/arlecchino_genshin',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, closed_mouth, fur-trimmed_coat, looking_at_viewer, solo, upper_body, white_coat |
| 1 | 5 |  |  |  |  |  | 1girl, closed_mouth, fur-trimmed_coat, looking_at_viewer, solo, upper_body |
| 2 | 5 |  |  |  |  |  | 1girl, closed_mouth, earrings, fur-trimmed_coat, looking_at_viewer, solo, upper_body, white_coat |
| 3 | 7 |  |  |  |  |  | 1girl, fur-trimmed_coat, simple_background, solo, upper_body, white_background, closed_mouth, looking_at_viewer, white_coat, grey_hair |
| 4 | 5 |  |  |  |  |  | 1girl, cleavage, fur-trimmed_coat, large_breasts, looking_at_viewer, solo, bare_shoulders, collarbone, open_clothes, parted_lips, white_coat, grey_hair, navel, panties, stomach, thighs, black_bra, blush, off_shoulder, sitting, upper_body, white_bra |
| 5 | 5 |  |  |  |  |  | 1girl, black_gloves, black_pants, closed_mouth, long_sleeves, looking_at_viewer, solo, hand_up, sitting, alternate_costume, crossed_legs, earrings, white_coat, white_jacket, white_shirt, indoors, on_chair, red_gemstone, sidelocks |
| 6 | 5 |  |  |  |  |  | 1girl, black_gloves, holding_cup, looking_at_viewer, solo, wine_glass, red_eyes, smile, closed_mouth, coat, earrings, portrait |
| 7 | 5 |  |  |  |  |  | 1girl, bare_shoulders, black_dress, cowboy_shot, earrings, looking_at_viewer, solo, long_hair, simple_background, alternate_costume, artist_name, backless_dress, bare_back, black_gloves, choker, closed_mouth, grey_hair, hand_up, red_eyes, white_background, elbow_gloves, from_side, gradient_background, looking_back, parted_lips, profile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | closed_mouth | fur-trimmed_coat | looking_at_viewer | solo | upper_body | white_coat | earrings | simple_background | white_background | grey_hair | cleavage | large_breasts | bare_shoulders | collarbone | open_clothes | parted_lips | navel | panties | stomach | thighs | black_bra | blush | off_shoulder | sitting | white_bra | black_gloves | black_pants | long_sleeves | hand_up | alternate_costume | crossed_legs | white_jacket | white_shirt | indoors | on_chair | red_gemstone | sidelocks | holding_cup | wine_glass | red_eyes | smile | coat | portrait | black_dress | cowboy_shot | long_hair | artist_name | backless_dress | bare_back | choker | elbow_gloves | from_side | gradient_background | looking_back | profile |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------------------|:--------------------|:-------|:-------------|:-------------|:-----------|:--------------------|:-------------------|:------------|:-----------|:----------------|:-----------------|:-------------|:---------------|:--------------|:--------|:----------|:----------|:---------|:------------|:--------|:---------------|:----------|:------------|:---------------|:--------------|:---------------|:----------|:--------------------|:---------------|:---------------|:--------------|:----------|:-----------|:---------------|:------------|:--------------|:-------------|:-----------|:--------|:-------|:-----------|:--------------|:--------------|:------------|:--------------|:-----------------|:------------|:---------|:---------------|:------------|:----------------------|:---------------|:----------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | X | X | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | X | X | | X | X | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | | X | X | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | X | | X | X | | | X | X | X | X | | | X | | | X | | | | | | | | | | X | | | X | X | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
datastax/entomology | ---
license: apache-2.0
language:
- en
pretty_name: Fictional entomology
size_categories:
- n<1K
---
32 made-up insect descriptions with Latin name and order (well, there's a spider, too), as one would find in a field guide.
These were created with ChatGPT 3.5 / ChatGPT 4 for the purpose of running example applications such as a "entomology field guide helper".
It was chosen to use entirely fictional material to avoid inadvertently using the LLM's implicit knowledge from pretraining in the demos. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.