| datasetId | card |
|---|---|
eabayed/EmiratiDialictShowsAudioTranscription | ---
license: afl-3.0
---
This dataset contains two files: a ZIP archive of segmented audio files from Emirati TV shows, podcasts, and YouTube channels, and a TSV file containing the transcriptions of those audio segments.
The dataset is intended as a benchmark for Automatic Speech Recognition (ASR) models that target the Emirati dialect.
It is built to cover several categories: traditions, cars, health, games, sports, and police.
Although the dataset focuses on the Emirati dialect, speakers of other dialects occasionally appear in the shows; their speech is kept as is.
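A minimal sketch of pairing each segmented audio file with its transcription from the TSV. The two-column layout and the file names below are assumptions for illustration; the card does not document the actual TSV schema.

```python
import csv
import io

# Hypothetical TSV layout: "<audio_filename>\t<transcription>".
sample_tsv = "seg_001.wav\tمرحبا كيف الحال\nseg_002.wav\tالسلام عليكم\n"

def load_transcriptions(tsv_text):
    """Map each segmented audio file name to its transcription."""
    reader = csv.reader(io.StringIO(sample_tsv if tsv_text is None else tsv_text), delimiter="\t")
    return {row[0]: row[1] for row in reader}

transcripts = load_transcriptions(sample_tsv)
print(len(transcripts))  # 2
```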
For any suggestions please contact me at eabayed@gmail.com |
Mayuresh87/queries_on_sql | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4064916
num_examples: 22074
download_size: 1086521
dataset_size: 4064916
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
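The metadata above lists three string features (`context`, `question`, `answer`). A hedged sketch of assembling one record into a text-to-SQL prompt; the record below is invented for illustration, not taken from the dataset.

```python
# Invented example record with the three documented string features.
record = {
    "context": "CREATE TABLE head (age INTEGER)",
    "question": "How many heads of the departments are older than 56?",
    "answer": "SELECT COUNT(*) FROM head WHERE age > 56",
}

def to_prompt(r):
    """Format a record as a simple text-to-SQL prompt."""
    return f"-- schema: {r['context']}\n-- question: {r['question']}\nSQL:"

print(to_prompt(record))
```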
|
hojzas/proj8-lab1 | ---
license: apache-2.0
---
|
DynamicSuperbPrivate/EnhancementDetection_LibrittsTrainClean360Wham | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
- name: speech file
dtype: string
- name: noise file
dtype: string
- name: SNR
dtype: float32
splits:
- name: train
num_bytes: 32262863124.0
num_examples: 116500
- name: validation
num_bytes: 1545478177.008
num_examples: 5736
download_size: 38320667534
dataset_size: 33808341301.008
---
# Dataset Card for "EnhancementDetection_LibrittsTrainClean360Wham"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Myashka/gpt2-imdb-constractive | ---
license: mit
---
|
Writer/palmyra-data-index | ---
task_categories:
- text-generation
language:
- en
tags:
- B2B
- palmyra
size_categories:
- n>1T
pretty_name: Palmyra index 1T Sample
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Palmyra v1.4 dataset is a clean-room dataset. This HuggingFace repository contains a 1 billion token sample of the dataset. The full dataset has the following token counts and is available upon request.
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl (Filtered) | 790 Billion |
| C4 (Filtered) | 121 Billion |
| GitHub | 31 Billion |
| Books (Filtered) | 16 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
## Dataset Creation
The Writer Linguistics team created this dataset to rely, as much as possible, on business data and content free of copyright restrictions.
### Source Data
#### Commoncrawl
We downloaded five dumps from Commoncrawl and ran them through the official `cc_net` pipeline. We filtered out low-quality data and kept only data that is distributed free of any copyright restrictions.
#### C4
C4 was downloaded from Huggingface. We filtered out low-quality data and kept only data that is distributed free of any copyright restrictions.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate at the file level, filter out low-quality files, and keep only projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in a preprocessed format, with hyperlinks, comments, and other formatting boilerplate removed.
#### Gutenberg and Public domains
The PG19 subset of Project Gutenberg, plus other public-domain books.
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester-pays bucket. We keep only LaTeX source files and remove preambles, comments, macros, and bibliographies. |
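The ArXiv cleanup above (keeping LaTeX sources while stripping preambles, comments, and bibliographies) can be roughly sketched as follows; the exact rules used for the real dataset are not published here.

```python
import re

# Rough sketch of the described cleanup: drop everything before
# \begin{document}, cut the bibliography, and strip % comments.
def clean_latex(src):
    body = src.split(r"\begin{document}", 1)[-1]
    body = body.split(r"\begin{thebibliography}", 1)[0]
    body = re.sub(r"(?<!\\)%.*", "", body)   # keep escaped \%
    return body.strip()

tex = r"""\documentclass{article}
\begin{document}
Hello % inline comment
world.
\begin{thebibliography}{9}\end{thebibliography}
\end{document}"""

print(clean_latex(tex))
```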
pierre-loic/climate-news-articles | ---
license: cc
task_categories:
- text-classification
language:
- fr
tags:
- climate
- news
pretty_name: Titres de presse française avec labellisation "climat/pas climat"
size_categories:
- 1K<n<10K
---
# 🌍 Dataset of French press headlines labeled as climate-related or not
*🇬🇧 / 🇺🇸: as this dataset contains only French data, the explanations in this repository were originally written in French. The goal of the dataset is to train a model that classifies French newspaper headlines into two categories: about climate or not.*
## 🗺️ Context
This classification dataset of **French press headlines** was built for the [Data for good](https://dataforgood.fr/) association in Grenoble, and more specifically for the [Quota climat](https://www.quotaclimat.org/) association.
## 💾 The dataset
The training set contains 2,007 press headlines (1,923 not about climate and 84 about climate). The test set contains 502 press headlines (481 not about climate and 21 about climate).
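With 84 climate titles against 1,923 non-climate titles in the training set, the classes are heavily imbalanced; a common remedy when training a classifier is inverse-frequency class weights, sketched here from the counts above.

```python
# Label counts from the training set described above.
counts = {"pas climat": 1923, "climat": 84}
total = sum(counts.values())

# Inverse-frequency weights: total / (n_classes * class_count).
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
print(weights["climat"])  # ≈ 11.95
```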
 |
osyvokon/wiki-edits-uk | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- uk-UA
license:
- cc-by-3.0
multilinguality:
- monolingual
- translation
pretty_name: 'Ukrainian Wikipedia edits '
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
---
# Ukrainian Wikipedia Edits
### Dataset summary
A collection of over 5M sentence edits extracted from Ukrainian Wikipedia history revisions.
Edits were filtered by edit distance and sentence length. This makes them usable for pre-training grammatical error correction (GEC) or spellchecker models.
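Such an edit-distance and length filter might look like the sketch below; the thresholds are invented for illustration and are not the dataset's actual values.

```python
from difflib import SequenceMatcher

# Hedged sketch of an edit-distance / length filter; thresholds invented.
def keep_edit(src, tgt, min_ratio=0.6, min_len=10, max_len=300):
    if not (min_len <= len(src) <= max_len and min_len <= len(tgt) <= max_len):
        return False
    return SequenceMatcher(None, src, tgt).ratio() >= min_ratio

print(keep_edit("Це речення з помилкою.", "Це речення без помилки."))  # True
```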
### Supported Tasks and Leaderboards
* Ukrainian grammatical error correction (GEC) - see [UA-GEC](https://github.com/grammarly/ua-gec)
* Ukrainian spelling correction
### Languages
Ukrainian
## Dataset Structure
### Data Fields
* `src` - sentence before edit
* `tgt` - sentence after edit
### Data Splits
* `full/train` contains all the data (5,243,376 samples)
* `tiny/train` contains a 5,000-example sample.
## Dataset Creation
The latest full Ukrainian Wikipedia dump as of 2022-04-30 was used.
It was processed with [wikiedits](https://github.com/snukky/wikiedits) and custom scripts.
### Source Data
#### Initial Data Collection and Normalization
Wikipedia
#### Who are the source language producers?
Wikipedia writers
### Annotations
#### Annotation process
Annotations were inferred by comparing two subsequent page revisions.
#### Who are the annotators?
People who edit Wikipedia pages.
### Personal and Sensitive Information
No
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The data is noisy. In addition to GEC and spelling edits, it contains a good chunk of factual changes and vandalism.
More task-specific filters could help.
## Additional Information
### Dataset Curators
[Oleksiy Syvokon](https://github.com/asivokon)
### Licensing Information
CC-BY-3.0
### Citation Information
```
@inproceedings{wiked2014,
author = {Roman Grundkiewicz and Marcin Junczys-Dowmunt},
title = {The WikEd Error Corpus: A Corpus of Corrective Wikipedia Edits and its Application to Grammatical Error Correction},
booktitle = {Advances in Natural Language Processing -- Lecture Notes in Computer Science},
editor = {Adam Przepiórkowski and Maciej Ogrodniczuk},
publisher = {Springer},
year = {2014},
volume = {8686},
pages = {478--490},
url = {http://emjotde.github.io/publications/pdf/mjd.poltal2014.draft.pdf}
}
```
### Contributions
[@snukky](https://github.com/snukky) created tools for dataset processing.
[@asivokon](https://github.com/asivokon) generated this dataset.
|
euclaise/oasst2_rank | ---
license: apache-2.0
dataset_info:
features:
- name: history
list:
- name: role
dtype: string
- name: text
dtype: string
- name: prompt
dtype: string
- name: completions
list:
- name: labels
struct:
- name: creativity
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: fails_task
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: hate_speech
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: helpfulness
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: humor
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: lang_mismatch
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: moral_judgement
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: not_appropriate
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: pii
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: political_content
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: quality
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: sexual_content
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: spam
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: toxicity
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: violence
struct:
- name: count
dtype: int64
- name: value
dtype: float64
- name: rank
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 106295033
num_examples: 28383
download_size: 49057236
dataset_size: 106295033
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
[oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) in a friendlier format |
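Given the schema above, where each prompt carries a list of completions with a `rank` and aggregated `labels`, selecting the top-ranked reply might look like this. The record is invented, and this assumes (as in the original oasst releases) that a lower rank means a better completion.

```python
# Invented record following the schema above; assumes a lower rank is better.
record = {
    "prompt": "What is a dataset card?",
    "completions": [
        {"text": "It documents a dataset.", "rank": 0,
         "labels": {"quality": {"value": 0.9, "count": 3}}},
        {"text": "No idea.", "rank": 1,
         "labels": {"quality": {"value": 0.2, "count": 2}}},
    ],
}

best = min(record["completions"], key=lambda c: c["rank"])
print(best["text"])  # It documents a dataset.
```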
amitness/sentiment-mt | ---
language: mt
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: text
dtype: string
splits:
- name: train
num_bytes: 83382
num_examples: 595
- name: validation
num_bytes: 11602
num_examples: 85
- name: test
num_bytes: 25749
num_examples: 171
download_size: 0
dataset_size: 120733
---
# Dataset Card for "sentiment-mt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
timpal0l/scandisent | ---
license: openrail
task_categories:
- text-classification
language:
- sv
- no
- da
- en
- fi
pretty_name: ScandiSent
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository: https://github.com/timpal0l/ScandiSent**
- **Paper: https://arxiv.org/pdf/2104.10441.pdf**
- **Leaderboard:**
- **Point of Contact: Tim Isbister**
### Dataset Summary
ScandiSent is a sentiment classification dataset covering the Scandinavian languages (Swedish, Norwegian, Danish) plus English and Finnish; see the [paper](https://arxiv.org/pdf/2104.10441.pdf) for details.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Swedish (`sv`), Norwegian (`no`), Danish (`da`), English (`en`), and Finnish (`fi`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ami-iit/human_upperbody_motions | ---
license: bsd-3-clause-clear
---
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_255 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 925002536.0
num_examples: 181658
download_size: 936638893
dataset_size: 925002536.0
---
# Dataset Card for "chunk_255"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OdiaGenAI/Odia_Alpaca_instructions_52k | ---
license: cc-by-nc-sa-4.0
language:
- or
pretty_name: Odia_Alpaca_Instruction_52K
size_categories:
- 10K<n<100K
---
# Dataset Card for Odia_Alpaca_Instruction_52K
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is the Odia-translated version of the Alpaca 52K instruction set. It provides both English and Odia versions of each instruction, input, and output string.
### Supported Tasks and Leaderboards
Instruction tuning of large language models (LLMs).
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
* `instruction` (string)
* `english_instruction` (string)
* `input` (string)
* `english_input` (string)
* `output` (string)
* `english_output` (string)
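A sketch of one record with the six documented fields; the values are placeholders, not actual dataset rows.

```python
import json

# Placeholder record with the six documented fields (values invented).
record = {
    "instruction": "…",          # Odia translation of the instruction
    "english_instruction": "Give three tips for staying healthy.",
    "input": "",
    "english_input": "",
    "output": "…",               # Odia translation of the output
    "english_output": "1. Eat a balanced diet. ...",
}

assert len(record) == 6
print(sorted(record))
```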
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{OdiaGenAI,
author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar |
UnbiasedMoldInspectionsIN/4thTryGlmr | ---
license: apache-2.0
---
|
stigsfoot/cms_federal_medicare | ---
license: other
task_categories:
- text-classification
- table-question-answering
language:
- en
tags:
- medical
---
# Dataset Card for US Dialysis Facilities
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
The dataset includes a wide range of metrics, such as Five Star ratings, addresses, city/town, state, and various statistical measures related to the quality and outcomes of the facilities.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The "DFC_FACILITY.csv" dataset contains information about dialysis facilities, including certification, ratings, locations, and various performance measures.
- **Curated by:** Centers for Medicare and Medicaid
- **Shared by:** Centers for Medicare and Medicaid
- **Adapted for NLP tasks by:** Noble Ackerson @Byte An Atom Research
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://data.cms.gov/provider-data/topics/dialysis-facilities
## LLM Prototype Goal & Use case
Enhance the quality of dialysis care and patient experiences by providing actionable insights to healthcare providers and policymakers.
To do this, I fine-tune Llama 2 to create an intelligent decision support system that:
- Analyzes Facility Performance: Utilizes quality measures, clinical data, and patient surveys to evaluate individual dialysis facilities' performance.
- Generates Personalized Recommendations: Offers tailored recommendations for improvement based on identified weaknesses or areas of concern.
- Provides Comparative Analysis: Compares facilities at state and national levels to benchmark performance and identify best practices.
- Visualizes Patient Experience Insights: Processes ICH-CAHPS Survey data to visualize and interpret patient experiences, providing insights into patient satisfaction and areas for enhancing patient-provider relationships.
- **Demo:** https://colab.research.google.com/drive/1QyPJBiTezCzCCjkH3GlenlXGJCtxFSGz?usp=sharing
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
These are the official datasets used on Medicare.gov provided by the Centers for Medicare & Medicaid Services. These datasets allow you to compare the quality of care provided in Medicare-certified dialysis facilities nationwide.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset consists of 118 columns, covering various metrics and attributes of dialysis facilities. Some key columns include Certification Number, Facility Name, Five Star Rating, Address, and numerous statistical measures related to healthcare outcomes.
Each row represents a unique dialysis facility.
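Since each row is one facility, the CSV can be consumed with a plain `csv.DictReader`. The toy file below uses column names mentioned in the card, but the full 118-column header is not reproduced here.

```python
import csv
import io

# Toy one-row file using column names mentioned in the card; the real
# "DFC_FACILITY.csv" has 118 columns and comes from data.cms.gov.
toy_csv = (
    "Certification Number,Facility Name,Five Star Rating\n"
    "012345,EXAMPLE DIALYSIS CENTER,4\n"
)

rows = list(csv.DictReader(io.StringIO(toy_csv)))
facility = rows[0]                      # each row is one facility
print(facility["Facility Name"], int(facility["Five Star Rating"]))
```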
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
These datasets allow you to compare the quality of care provided in Medicare-certified dialysis facilities nationwide.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
Given the dataset involves healthcare facilities, care should be taken to ensure no personal or sensitive patient information is included or can be derived. |
lumenggan/avatar-the-last-airbender | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1465863874.344
num_examples: 13896
download_size: 1427257543
dataset_size: 1465863874.344
---
# Dataset Card for "avatar-the-last-airbender"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yoon-gu/pokemon-ko | ---
license: mit
---
|
roman_urdu | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ur
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: roman-urdu-data-set
pretty_name: Roman Urdu Dataset
dataset_info:
features:
- name: sentence
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': Positive
'1': Negative
'2': Neutral
splits:
- name: train
num_bytes: 1633423
num_examples: 20229
download_size: 1628349
dataset_size: 1633423
---
# Dataset Card for Roman Urdu Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set)
- **Point of Contact:** [Zareen Sharf](mailto:zareensharf76@gmail.com)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Urdu
## Dataset Structure
[More Information Needed]
### Data Instances
```
Wah je wah,Positive,
```
### Data Fields
Each row consists of a short Urdu text followed by a sentiment label. The labels are one of `Positive`, `Negative`, and `Neutral`. Note that the original source file is a comma-separated values file.
* `sentence`: A short Urdu text
* `sentiment`: One of `Positive`, `Negative`, and `Neutral`, indicating the polarity of the sentiment expressed in the sentence
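The `class_label` block in the metadata above fixes the mapping from label strings to integer ids; a minimal sketch of applying it to the raw CSV instance shown earlier.

```python
# ClassLabel names in id order, per the YAML metadata above.
names = ["Positive", "Negative", "Neutral"]
label2id = {name: i for i, name in enumerate(names)}

# The raw CSV instance shown above: "Wah je wah,Positive,"
row = "Wah je wah,Positive,".split(",")
sentence, sentiment = row[0], label2id[row[1]]
print(sentence, sentiment)  # Wah je wah 0
```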
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{Sharf:2018,
title = "Performing Natural Language Processing on Roman Urdu Datasets",
author = "Zareen Sharf and Saif Ur Rahman",
booktitle = "International Journal of Computer Science and Network Security",
volume = "18",
number = "1",
pages = "141-148",
year = "2018"
}
@misc{Dua:2019,
author = "Dua, Dheeru and Graff, Casey",
year = "2017",
title = "{UCI} Machine Learning Repository",
url = "http://archive.ics.uci.edu/ml",
institution = "University of California, Irvine, School of Information and Computer Sciences"
}
```
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. |
caball21/baseball | ---
license: unknown
---
|
allegro_reviews | ---
annotations_creators:
- found
language_creators:
- found
language:
- pl
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-scoring
- text-scoring
paperswithcode_id: allegro-reviews
pretty_name: Allegro Reviews
dataset_info:
features:
- name: text
dtype: string
- name: rating
dtype: float32
splits:
- name: train
num_bytes: 4899535
num_examples: 9577
- name: test
num_bytes: 514523
num_examples: 1006
- name: validation
num_bytes: 515781
num_examples: 1002
download_size: 3923657
dataset_size: 5929839
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://klejbenchmark.com/
- **Repository:**
https://github.com/allegro/klejbenchmark-allegroreviews
- **Paper:**
KLEJ: Comprehensive Benchmark for Polish Language Understanding (Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz)
- **Leaderboard:**
https://klejbenchmark.com/leaderboard/
- **Point of Contact:**
klejbenchmark@allegro.pl
### Dataset Summary
Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review).
We recommend using the provided train/dev/test split. The ratings for the test set reviews are kept hidden. You can evaluate your model using the online evaluation tool available on klejbenchmark.com.
### Supported Tasks and Leaderboards
Product reviews sentiment analysis.
https://klejbenchmark.com/leaderboard/
### Languages
Polish
## Dataset Structure
### Data Instances
Three TSV files: train and dev with two columns (text, rating), and test with a single column (text).
### Data Fields
- text: a product review of at least 50 words
- rating: product rating on a scale from one (negative review) to five (positive review)
### Data Splits
The data is split into train/dev/test sets.
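A sketch of reading one train/dev row; the Polish review text below is invented, but the (text, rating) columns and the 1-5 scale follow the card.

```python
import csv
import io

# One toy train/dev row; review text invented, columns per the card.
toy_tsv = "text\trating\nŚwietny produkt, polecam!\t5\n"

reader = csv.DictReader(io.StringIO(toy_tsv), delimiter="\t")
row = next(reader)
rating = float(row["rating"])   # the loader exposes rating as float32
print(row["text"], rating)
```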
## Dataset Creation
### Curation Rationale
This dataset is one of nine evaluation tasks designed to improve Polish language processing.
### Source Data
#### Initial Data Collection and Normalization
Allegro Reviews is a set of product reviews from a popular e-commerce marketplace (Allegro.pl).
#### Who are the source language producers?
Customers of an e-commerce marketplace.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Allegro Machine Learning Research team klejbenchmark@allegro.pl
### Licensing Information
Dataset licensed under CC BY-SA 4.0
### Citation Information
```
@inproceedings{rybak-etal-2020-klej,
title = "{KLEJ}: Comprehensive Benchmark for Polish Language Understanding",
author = "Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.111",
pages = "1191--1201",
}
```
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset. |
UdyanSachdev/Multi_Language_Audio2Text | ---
license: mit
---
This dataset is derived from Mozilla Common Voice (https://commonvoice.mozilla.org/en/datasets) and was curated by Udyan Sachdev.
Voice datasets play a pivotal role in training and evaluating speech-to-text models, influencing advances in natural language processing. This dataset was built from 40,571 MP3 audio files sourced from the Common Voice project and is intended as a benchmark for training and evaluating speech-to-text models in English, French, and Spanish, leveraging the OpenAI Whisper-large-v3 model.
**Data Details: Common Voice Delta Segment**
- Size: 1.28 GB (40,571 MP3 audio files)
- Duration: 68 recorded hours, 48 validated hours
- Voices: 750 unique voices
- Format: MP3 audio |
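As a quick sanity check on the figures above, 68 recorded hours spread over 40,571 clips implies an average clip length of about six seconds.

```python
# Average clip length implied by the card's stats: 68 recorded hours
# across 40,571 MP3 clips.
n_clips = 40_571
recorded_hours = 68

avg_seconds = recorded_hours * 3600 / n_clips
print(round(avg_seconds, 2))  # ≈ 6.03 seconds per clip
```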
MegPaulson/Melanoma_Train | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_seg
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 35945944.0
num_examples: 26
download_size: 1333203
dataset_size: 35945944.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Melanoma_Train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sgans/JudgeSmall | ---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
---
# Can LLMs Become Editors?
### Dataset Summary
Judge is a new dataset for investigating how LLMs handle judging and writing responses given long-term memory, short-term memory, and key information.
To succeed, an LLM needs to make correct evaluations of new responses based on the short-term, long-term, and key data provided. Alongside this test, we
can also evaluate how an LLM writes its new responses. The questions in the dataset cover multiple categories, such as sports, music, history, and gaming.
#### Dataset Size
This is the small version of the dataset, with only 100 questions. It is designed as a low-cost test of how current LLMs handle these types
of problems.
#### LLM Results
<img alt="benchmark" src="small_benchmark.png">
#### Initial Low Scores Across The Board
During the experiments with JudgeSmall, it was discovered that LLMs consistently mixed up 4-point and 5-point responses. When this is taken into
account, scores increase dramatically for all LLMs.
#### Self Reward Language Models
(Link: https://arxiv.org/pdf/2401.10020.pdf)
This paper was the inspiration for creating this dataset. The same scoring system used in the paper was applied when evaluating LLMs with JudgeSmall.
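The additive judging prompts in that paper ask the model to finish with a line like `Score: <points>`; a hedged sketch of extracting such a 0-5 score from a judge reply (the reply text is invented).

```python
import re

# Extract a final 0-5 score line from an invented judge reply.
def parse_score(reply):
    m = re.search(r"Score:\s*([0-5])\b", reply)
    return int(m.group(1)) if m else None

print(parse_score("The response is accurate and well written.\nScore: 4"))  # 4
```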
#### Future Work
- Finding a way to prevent the mix-up between 4-point and 5-point responses.
- Finding the proper instructions to increase GPT-4's score.
- Increasing the size of the dataset to create a training set for fine-tuning.
|
Heejung89/customCode | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2016
num_examples: 10
download_size: 2713
dataset_size: 2016
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_phanerozoic__Tiny-Pirate-1.1b-v0.1 | ---
pretty_name: Evaluation run of phanerozoic/Tiny-Pirate-1.1b-v0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [phanerozoic/Tiny-Pirate-1.1b-v0.1](https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_phanerozoic__Tiny-Pirate-1.1b-v0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-03T09:58:56.501465](https://huggingface.co/datasets/open-llm-leaderboard/details_phanerozoic__Tiny-Pirate-1.1b-v0.1/blob/main/results_2024-04-03T09-58-56.501465.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.249657117729542,\n\
\ \"acc_stderr\": 0.030476136080602383,\n \"acc_norm\": 0.25040686164465875,\n\
\ \"acc_norm_stderr\": 0.031218530322115884,\n \"mc1\": 0.22276621787025705,\n\
\ \"mc1_stderr\": 0.01456650696139673,\n \"mc2\": 0.3583815493745987,\n\
\ \"mc2_stderr\": 0.013666714729248913\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.3447098976109215,\n \"acc_stderr\": 0.013888816286782112,\n\
\ \"acc_norm\": 0.36945392491467577,\n \"acc_norm_stderr\": 0.014104578366491904\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4500099581756622,\n\
\ \"acc_stderr\": 0.004964779805180657,\n \"acc_norm\": 0.6016729735112527,\n\
\ \"acc_norm_stderr\": 0.004885529674958339\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.18518518518518517,\n\
\ \"acc_stderr\": 0.033556772163131424,\n \"acc_norm\": 0.18518518518518517,\n\
\ \"acc_norm_stderr\": 0.033556772163131424\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n\
\ \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.31,\n\
\ \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.24528301886792453,\n \"acc_stderr\": 0.026480357179895678,\n\
\ \"acc_norm\": 0.24528301886792453,\n \"acc_norm_stderr\": 0.026480357179895678\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n\
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036845,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2138728323699422,\n\
\ \"acc_stderr\": 0.03126511206173043,\n \"acc_norm\": 0.2138728323699422,\n\
\ \"acc_norm_stderr\": 0.03126511206173043\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.043898699568087785,\n\
\ \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.043898699568087785\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.28,\n\
\ \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2680851063829787,\n \"acc_stderr\": 0.02895734278834235,\n\
\ \"acc_norm\": 0.2680851063829787,\n \"acc_norm_stderr\": 0.02895734278834235\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.22807017543859648,\n\
\ \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.22807017543859648,\n\
\ \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2620689655172414,\n \"acc_stderr\": 0.036646663372252565,\n\
\ \"acc_norm\": 0.2620689655172414,\n \"acc_norm_stderr\": 0.036646663372252565\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.25396825396825395,\n \"acc_stderr\": 0.02241804289111395,\n \"\
acc_norm\": 0.25396825396825395,\n \"acc_norm_stderr\": 0.02241804289111395\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.21428571428571427,\n\
\ \"acc_stderr\": 0.03670066451047181,\n \"acc_norm\": 0.21428571428571427,\n\
\ \"acc_norm_stderr\": 0.03670066451047181\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.18064516129032257,\n \"acc_stderr\": 0.021886178567172548,\n \"\
acc_norm\": 0.18064516129032257,\n \"acc_norm_stderr\": 0.021886178567172548\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.1625615763546798,\n \"acc_stderr\": 0.025960300064605587,\n \"\
acc_norm\": 0.1625615763546798,\n \"acc_norm_stderr\": 0.025960300064605587\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\"\
: 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21212121212121213,\n \"acc_stderr\": 0.03192271569548299,\n\
\ \"acc_norm\": 0.21212121212121213,\n \"acc_norm_stderr\": 0.03192271569548299\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.21212121212121213,\n \"acc_stderr\": 0.02912652283458682,\n \"\
acc_norm\": 0.21212121212121213,\n \"acc_norm_stderr\": 0.02912652283458682\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.028697873971860664,\n\
\ \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.028697873971860664\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.19743589743589743,\n \"acc_stderr\": 0.02018264696867484,\n\
\ \"acc_norm\": 0.19743589743589743,\n \"acc_norm_stderr\": 0.02018264696867484\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.24444444444444444,\n \"acc_stderr\": 0.02620276653465215,\n \
\ \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.02620276653465215\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n\
\ \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.23178807947019867,\n \"acc_stderr\": 0.034454062719870546,\n \"\
acc_norm\": 0.23178807947019867,\n \"acc_norm_stderr\": 0.034454062719870546\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.2018348623853211,\n \"acc_stderr\": 0.01720857935778757,\n \"\
acc_norm\": 0.2018348623853211,\n \"acc_norm_stderr\": 0.01720857935778757\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.32407407407407407,\n \"acc_stderr\": 0.03191923445686185,\n \"\
acc_norm\": 0.32407407407407407,\n \"acc_norm_stderr\": 0.03191923445686185\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.3088235294117647,\n \"acc_stderr\": 0.03242661719827218,\n \"\
acc_norm\": 0.3088235294117647,\n \"acc_norm_stderr\": 0.03242661719827218\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.26582278481012656,\n \"acc_stderr\": 0.02875679962965834,\n \
\ \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.02875679962965834\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3273542600896861,\n\
\ \"acc_stderr\": 0.031493846709941306,\n \"acc_norm\": 0.3273542600896861,\n\
\ \"acc_norm_stderr\": 0.031493846709941306\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.037683359597287434,\n\
\ \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.037683359597287434\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2396694214876033,\n \"acc_stderr\": 0.03896878985070417,\n \"\
acc_norm\": 0.2396694214876033,\n \"acc_norm_stderr\": 0.03896878985070417\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n\
\ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n\
\ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.22085889570552147,\n \"acc_stderr\": 0.032591773927421776,\n\
\ \"acc_norm\": 0.22085889570552147,\n \"acc_norm_stderr\": 0.032591773927421776\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n\
\ \"acc_stderr\": 0.0443280405529152,\n \"acc_norm\": 0.32142857142857145,\n\
\ \"acc_norm_stderr\": 0.0443280405529152\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.1553398058252427,\n \"acc_stderr\": 0.03586594738573972,\n\
\ \"acc_norm\": 0.1553398058252427,\n \"acc_norm_stderr\": 0.03586594738573972\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.27350427350427353,\n\
\ \"acc_stderr\": 0.02920254015343116,\n \"acc_norm\": 0.27350427350427353,\n\
\ \"acc_norm_stderr\": 0.02920254015343116\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23754789272030652,\n\
\ \"acc_stderr\": 0.015218733046150191,\n \"acc_norm\": 0.23754789272030652,\n\
\ \"acc_norm_stderr\": 0.015218733046150191\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n\
\ \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n\
\ \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n\
\ \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.023929155517351284,\n\
\ \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.023929155517351284\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2347266881028939,\n\
\ \"acc_stderr\": 0.024071805887677048,\n \"acc_norm\": 0.2347266881028939,\n\
\ \"acc_norm_stderr\": 0.024071805887677048\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.25617283950617287,\n \"acc_stderr\": 0.0242885336377261,\n\
\ \"acc_norm\": 0.25617283950617287,\n \"acc_norm_stderr\": 0.0242885336377261\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2198581560283688,\n \"acc_stderr\": 0.024706141070705477,\n \
\ \"acc_norm\": 0.2198581560283688,\n \"acc_norm_stderr\": 0.024706141070705477\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23728813559322035,\n\
\ \"acc_stderr\": 0.010865436690780272,\n \"acc_norm\": 0.23728813559322035,\n\
\ \"acc_norm_stderr\": 0.010865436690780272\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.23161764705882354,\n \"acc_stderr\": 0.025626533803777562,\n\
\ \"acc_norm\": 0.23161764705882354,\n \"acc_norm_stderr\": 0.025626533803777562\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.24509803921568626,\n \"acc_stderr\": 0.01740181671142766,\n \
\ \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.01740181671142766\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.22727272727272727,\n\
\ \"acc_stderr\": 0.04013964554072775,\n \"acc_norm\": 0.22727272727272727,\n\
\ \"acc_norm_stderr\": 0.04013964554072775\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.1673469387755102,\n \"acc_stderr\": 0.023897144768914524,\n\
\ \"acc_norm\": 0.1673469387755102,\n \"acc_norm_stderr\": 0.023897144768914524\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.25870646766169153,\n\
\ \"acc_stderr\": 0.03096590312357304,\n \"acc_norm\": 0.25870646766169153,\n\
\ \"acc_norm_stderr\": 0.03096590312357304\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.28313253012048195,\n\
\ \"acc_stderr\": 0.03507295431370518,\n \"acc_norm\": 0.28313253012048195,\n\
\ \"acc_norm_stderr\": 0.03507295431370518\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.3216374269005848,\n \"acc_stderr\": 0.03582529442573122,\n\
\ \"acc_norm\": 0.3216374269005848,\n \"acc_norm_stderr\": 0.03582529442573122\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22276621787025705,\n\
\ \"mc1_stderr\": 0.01456650696139673,\n \"mc2\": 0.3583815493745987,\n\
\ \"mc2_stderr\": 0.013666714729248913\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6140489344909235,\n \"acc_stderr\": 0.01368203699339741\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.017437452615617893,\n \
\ \"acc_stderr\": 0.003605486867998265\n }\n}\n```"
repo_url: https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|arc:challenge|25_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|gsm8k|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hellaswag|10_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-03T09-58-56.501465.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-03T09-58-56.501465.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- '**/details_harness|winogrande|5_2024-04-03T09-58-56.501465.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-03T09-58-56.501465.parquet'
- config_name: results
data_files:
- split: 2024_04_03T09_58_56.501465
path:
- results_2024-04-03T09-58-56.501465.parquet
- split: latest
path:
- results_2024-04-03T09-58-56.501465.parquet
---
# Dataset Card for Evaluation run of phanerozoic/Tiny-Pirate-1.1b-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [phanerozoic/Tiny-Pirate-1.1b-v0.1](https://huggingface.co/phanerozoic/Tiny-Pirate-1.1b-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_phanerozoic__Tiny-Pirate-1.1b-v0.1",
"harness_winogrande_5",
             split="latest")
```
## Latest results
These are the [latest results from run 2024-04-03T09:58:56.501465](https://huggingface.co/datasets/open-llm-leaderboard/details_phanerozoic__Tiny-Pirate-1.1b-v0.1/blob/main/results_2024-04-03T09-58-56.501465.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in its timestamped split and in the "latest" split of each configuration):
```python
{
"all": {
"acc": 0.249657117729542,
"acc_stderr": 0.030476136080602383,
"acc_norm": 0.25040686164465875,
"acc_norm_stderr": 0.031218530322115884,
"mc1": 0.22276621787025705,
"mc1_stderr": 0.01456650696139673,
"mc2": 0.3583815493745987,
"mc2_stderr": 0.013666714729248913
},
"harness|arc:challenge|25": {
"acc": 0.3447098976109215,
"acc_stderr": 0.013888816286782112,
"acc_norm": 0.36945392491467577,
"acc_norm_stderr": 0.014104578366491904
},
"harness|hellaswag|10": {
"acc": 0.4500099581756622,
"acc_stderr": 0.004964779805180657,
"acc_norm": 0.6016729735112527,
"acc_norm_stderr": 0.004885529674958339
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.18518518518518517,
"acc_stderr": 0.033556772163131424,
"acc_norm": 0.18518518518518517,
"acc_norm_stderr": 0.033556772163131424
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.24528301886792453,
"acc_stderr": 0.026480357179895678,
"acc_norm": 0.24528301886792453,
"acc_norm_stderr": 0.026480357179895678
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2138728323699422,
"acc_stderr": 0.03126511206173043,
"acc_norm": 0.2138728323699422,
"acc_norm_stderr": 0.03126511206173043
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.043898699568087785,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.043898699568087785
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2680851063829787,
"acc_stderr": 0.02895734278834235,
"acc_norm": 0.2680851063829787,
"acc_norm_stderr": 0.02895734278834235
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.22807017543859648,
"acc_stderr": 0.03947152782669415,
"acc_norm": 0.22807017543859648,
"acc_norm_stderr": 0.03947152782669415
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2620689655172414,
"acc_stderr": 0.036646663372252565,
"acc_norm": 0.2620689655172414,
"acc_norm_stderr": 0.036646663372252565
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.25396825396825395,
"acc_stderr": 0.02241804289111395,
"acc_norm": 0.25396825396825395,
"acc_norm_stderr": 0.02241804289111395
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.21428571428571427,
"acc_stderr": 0.03670066451047181,
"acc_norm": 0.21428571428571427,
"acc_norm_stderr": 0.03670066451047181
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.18064516129032257,
"acc_stderr": 0.021886178567172548,
"acc_norm": 0.18064516129032257,
"acc_norm_stderr": 0.021886178567172548
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.1625615763546798,
"acc_stderr": 0.025960300064605587,
"acc_norm": 0.1625615763546798,
"acc_norm_stderr": 0.025960300064605587
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21212121212121213,
"acc_stderr": 0.03192271569548299,
"acc_norm": 0.21212121212121213,
"acc_norm_stderr": 0.03192271569548299
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.21212121212121213,
"acc_stderr": 0.02912652283458682,
"acc_norm": 0.21212121212121213,
"acc_norm_stderr": 0.02912652283458682
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.028697873971860664,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.028697873971860664
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.19743589743589743,
"acc_stderr": 0.02018264696867484,
"acc_norm": 0.19743589743589743,
"acc_norm_stderr": 0.02018264696867484
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24444444444444444,
"acc_stderr": 0.02620276653465215,
"acc_norm": 0.24444444444444444,
"acc_norm_stderr": 0.02620276653465215
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.23178807947019867,
"acc_stderr": 0.034454062719870546,
"acc_norm": 0.23178807947019867,
"acc_norm_stderr": 0.034454062719870546
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.2018348623853211,
"acc_stderr": 0.01720857935778757,
"acc_norm": 0.2018348623853211,
"acc_norm_stderr": 0.01720857935778757
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.32407407407407407,
"acc_stderr": 0.03191923445686185,
"acc_norm": 0.32407407407407407,
"acc_norm_stderr": 0.03191923445686185
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.3088235294117647,
"acc_stderr": 0.03242661719827218,
"acc_norm": 0.3088235294117647,
"acc_norm_stderr": 0.03242661719827218
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.26582278481012656,
"acc_stderr": 0.02875679962965834,
"acc_norm": 0.26582278481012656,
"acc_norm_stderr": 0.02875679962965834
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.3273542600896861,
"acc_stderr": 0.031493846709941306,
"acc_norm": 0.3273542600896861,
"acc_norm_stderr": 0.031493846709941306
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.24427480916030533,
"acc_stderr": 0.037683359597287434,
"acc_norm": 0.24427480916030533,
"acc_norm_stderr": 0.037683359597287434
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2396694214876033,
"acc_stderr": 0.03896878985070417,
"acc_norm": 0.2396694214876033,
"acc_norm_stderr": 0.03896878985070417
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22085889570552147,
"acc_stderr": 0.032591773927421776,
"acc_norm": 0.22085889570552147,
"acc_norm_stderr": 0.032591773927421776
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.32142857142857145,
"acc_stderr": 0.0443280405529152,
"acc_norm": 0.32142857142857145,
"acc_norm_stderr": 0.0443280405529152
},
"harness|hendrycksTest-management|5": {
"acc": 0.1553398058252427,
"acc_stderr": 0.03586594738573972,
"acc_norm": 0.1553398058252427,
"acc_norm_stderr": 0.03586594738573972
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.27350427350427353,
"acc_stderr": 0.02920254015343116,
"acc_norm": 0.27350427350427353,
"acc_norm_stderr": 0.02920254015343116
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.23754789272030652,
"acc_stderr": 0.015218733046150191,
"acc_norm": 0.23754789272030652,
"acc_norm_stderr": 0.015218733046150191
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.023929155517351284,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.023929155517351284
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2347266881028939,
"acc_stderr": 0.024071805887677048,
"acc_norm": 0.2347266881028939,
"acc_norm_stderr": 0.024071805887677048
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.25617283950617287,
"acc_stderr": 0.0242885336377261,
"acc_norm": 0.25617283950617287,
"acc_norm_stderr": 0.0242885336377261
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2198581560283688,
"acc_stderr": 0.024706141070705477,
"acc_norm": 0.2198581560283688,
"acc_norm_stderr": 0.024706141070705477
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.23728813559322035,
"acc_stderr": 0.010865436690780272,
"acc_norm": 0.23728813559322035,
"acc_norm_stderr": 0.010865436690780272
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.23161764705882354,
"acc_stderr": 0.025626533803777562,
"acc_norm": 0.23161764705882354,
"acc_norm_stderr": 0.025626533803777562
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.01740181671142766,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.01740181671142766
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.04013964554072775,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.04013964554072775
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.1673469387755102,
"acc_stderr": 0.023897144768914524,
"acc_norm": 0.1673469387755102,
"acc_norm_stderr": 0.023897144768914524
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.25870646766169153,
"acc_stderr": 0.03096590312357304,
"acc_norm": 0.25870646766169153,
"acc_norm_stderr": 0.03096590312357304
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.28313253012048195,
"acc_stderr": 0.03507295431370518,
"acc_norm": 0.28313253012048195,
"acc_norm_stderr": 0.03507295431370518
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3216374269005848,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.3216374269005848,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.22276621787025705,
"mc1_stderr": 0.01456650696139673,
"mc2": 0.3583815493745987,
"mc2_stderr": 0.013666714729248913
},
"harness|winogrande|5": {
"acc": 0.6140489344909235,
"acc_stderr": 0.01368203699339741
},
"harness|gsm8k|5": {
"acc": 0.017437452615617893,
"acc_stderr": 0.003605486867998265
}
}
```
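As a convenience (not part of the auto-generated card), the aggregated metrics can be read straight out of the `"all"` entry of that JSON. A minimal, self-contained sketch using an inline excerpt of the results above:

```python
import json

# Inline excerpt of the aggregated results JSON shown above.
results = json.loads("""
{
  "all": {
    "acc": 0.249657117729542,
    "acc_norm": 0.25040686164465875,
    "mc2": 0.3583815493745987
  }
}
""")

# The "all" entry holds the run-level aggregates across tasks.
aggregated = results["all"]
print(round(aggregated["acc"], 4))  # -> 0.2497
```

The same pattern applies to the full file: load the JSON, then index by task key (e.g. `"harness|winogrande|5"`) to get per-task metrics.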
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
AdapterOcean/med_alpaca_standardized_cluster_82_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 9389219
num_examples: 16788
download_size: 4649190
dataset_size: 9389219
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_82_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ulewis/vaxclass | ---
license: mit
---
|
akash140500/failure12 | ---
license: apache-2.0
---
|
letao670982/machine_solution | ---
dataset_info:
features:
- name: MACHINE_NO
dtype: string
- name: ERROR_ID
dtype: int64
- name: ERROR_CODE
dtype: string
- name: ERROR_DESC
dtype: string
- name: ERROR_CATEGORY1
dtype: string
- name: SOLUTION
dtype: string
splits:
- name: train
num_bytes: 625718
num_examples: 1689
- name: vaild
num_bytes: 79433
num_examples: 211
- name: test
num_bytes: 74391
num_examples: 212
download_size: 166732
dataset_size: 779542
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: vaild
path: data/vaild-*
- split: test
path: data/test-*
---
|
liuyanchen1015/MULTI_VALUE_sst2_who_as | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 4902
num_examples: 31
- name: test
num_bytes: 11775
num_examples: 69
- name: train
num_bytes: 146824
num_examples: 1021
download_size: 76802
dataset_size: 163501
---
# Dataset Card for "MULTI_VALUE_sst2_who_as"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Asmaamaghraby/ArabicChartsQA | ---
task_categories:
- question-answering
language:
- ar
- en
--- |
HaiboinLeeds/eee3 | ---
license: apache-2.0
---
|
SEACrowd/postag_su | ---
tags:
- pos-tagging
language:
- sun
---
# postag_su
This dataset contains 3616 lines of Sundanese sentences taken from several online magazines (Mangle, Dewan Dakwah Jabar, and Balebat), annotated with PoS labels by several undergraduates of the Sundanese Language Education Study Program (PPBS), UPI Bandung.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@data{FK2/VTAHRH_2022,
author = {ARDIYANTI SURYANI, ARIE and Widyantoro, Dwi Hendratmo and Purwarianti, Ayu and Sudaryat, Yayat},
publisher = {Telkom University Dataverse},
title = {{PoSTagged Sundanese Monolingual Corpus}},
year = {2022},
version = {DRAFT VERSION},
doi = {10.34820/FK2/VTAHRH},
url = {https://doi.org/10.34820/FK2/VTAHRH}
}
@INPROCEEDINGS{7437678,
author={Suryani, Arie Ardiyanti and Widyantoro, Dwi Hendratmo and Purwarianti, Ayu and Sudaryat, Yayat},
booktitle={2015 International Conference on Information Technology Systems and Innovation (ICITSI)},
title={Experiment on a phrase-based statistical machine translation using PoS Tag information for Sundanese into Indonesian},
year={2015},
volume={},
number={},
pages={1-6},
doi={10.1109/ICITSI.2015.7437678}
}
```
## License
CC0 - "Public Domain Dedication"
## Homepage
[https://dataverse.telkomuniversity.ac.id/dataset.xhtml?persistentId=doi:10.34820/FK2/VTAHRH](https://dataverse.telkomuniversity.ac.id/dataset.xhtml?persistentId=doi:10.34820/FK2/VTAHRH)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
nthngdy/bert_dataset_202203 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 24635440616
num_examples: 146707688
download_size: 14651841592
dataset_size: 24635440616
license: apache-2.0
task_categories:
- text-generation
- fill-mask
language:
- en
tags:
- language-modeling
- masked-language-modeling
pretty_name: BERT Dataset (BookCorpus + Wikipedia 03/2022)
---
# Dataset Card for "bert_dataset_202203"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_mrpc_analytic_superlative | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 22973
num_examples: 82
- name: train
num_bytes: 38289
num_examples: 131
- name: validation
num_bytes: 5998
num_examples: 20
download_size: 54326
dataset_size: 67260
---
# Dataset Card for "MULTI_VALUE_mrpc_analytic_superlative"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SamPIngram/tinyshakespeare | ---
configs:
- config_name: default
data_files:
- split: train
path: "input.txt"
license: mit
language:
- en
pretty_name: tiny_shakespeare
task_categories:
- text-classification
size_categories:
- 100K<n<1M
--- |
Akajackson/synth_pass_open | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 2482854497.0
num_examples: 10000
- name: validation
num_bytes: 51578237.0
num_examples: 200
- name: test
num_bytes: 52340884.0
num_examples: 200
download_size: 2576631016
dataset_size: 2586773618.0
---
# Dataset Card for "synth_pass_open"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Blogs/tech | ---
license: mit
---
|
olivermueller/winereviews | ---
license: mit
---
|
zxvix/qa_wikipedia | ---
dataset_info:
- config_name: qa
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 76564
num_examples: 1000
download_size: 42490
dataset_size: 76564
- config_name: text
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 172663
num_examples: 310
download_size: 109321
dataset_size: 172663
configs:
- config_name: qa
data_files:
- split: train
path: qa/train-*
- config_name: text
data_files:
- split: train
path: text/train-*
---
|
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_T_A_Q_rices_ns_25994 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random_
num_bytes: 3716868
num_examples: 25994
download_size: 1341254
dataset_size: 3716868
---
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_T_A_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_79_1713158319 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 296213
num_examples: 707
download_size: 151717
dataset_size: 296213
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sakharamg/AviationCorpus | ---
license: mit
---
|
kevinblake/gormenghast | ---
license: apache-2.0
---
|
shidowake/glaive-code-assistant-v1-sharegpt-format_split_11 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 10503837.603832223
num_examples: 6805
download_size: 5166385
dataset_size: 10503837.603832223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jet-universe/jetclass | ---
license: mit
---
# Dataset Card for JetClass
## Table of Contents
- [Dataset Card for JetClass](#dataset-card-for-jetclass)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/jet-universe/particle_transformer
- **Paper:** https://arxiv.org/abs/2202.03772
- **Leaderboard:**
- **Point of Contact:** [Huilin Qu](mailto:huilin.qu@cern.ch)
### Dataset Summary
JetClass is a large and comprehensive dataset to advance deep learning for jet tagging. The dataset consists of 100 million jets for training, with 10 different types of jets. The jets in this dataset generally fall into two categories:
* The background jets are initiated by light quarks or gluons (q/g) and are ubiquitously produced at the
LHC.
* The signal jets are those arising either from the top quarks (t), or from the W, Z or Higgs (H) bosons. For top quarks and Higgs bosons, we further consider their different decay modes as separate types, because the resulting jets have rather distinct characteristics and are often tagged individually.
Jets in this dataset are simulated with standard Monte Carlo event generators used by LHC experiments. The production and decay of the top quarks and the W, Z and Higgs bosons are generated with MADGRAPH5_aMC@NLO. We use PYTHIA to evolve the produced particles, i.e., performing parton showering and hadronization, and produce the final outgoing particles. To be close to realistic jets reconstructed at the ATLAS or CMS experiment, detector effects are simulated with DELPHES using the CMS detector configuration provided in DELPHES. In addition, the impact parameters of electrically charged particles are smeared to match the resolution of the CMS tracking detector. Jets are clustered from DELPHES E-Flow objects with the anti-kT algorithm using a distance
parameter R = 0.8. Only jets with transverse momentum in 500–1000 GeV and pseudorapidity |η| < 2 are considered. For signal jets, only the “high-quality” ones that fully contain the decay products of initial particles are included.
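As an illustration only (the actual samples come from the MADGRAPH5_aMC@NLO → PYTHIA → DELPHES chain described above), the kinematic selection amounts to a simple filter on jet transverse momentum and pseudorapidity. The jet records below are hypothetical stand-ins, not the dataset's ROOT files:

```python
# Illustrative sketch of the JetClass kinematic selection; the jet
# dictionaries are invented examples, not the actual data format.

def passes_selection(jet_pt, jet_eta, pt_min=500.0, pt_max=1000.0, abs_eta_max=2.0):
    """True if the jet pT lies in [pt_min, pt_max] GeV and |eta| < abs_eta_max."""
    return pt_min <= jet_pt <= pt_max and abs(jet_eta) < abs_eta_max

jets = [
    {"pt": 620.0, "eta": 1.3},   # kept
    {"pt": 450.0, "eta": 0.2},   # rejected: too soft
    {"pt": 750.0, "eta": 2.4},   # rejected: too forward
]
selected = [j for j in jets if passes_selection(j["pt"], j["eta"])]
print(len(selected))  # -> 1
```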
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the JetClass dataset, please cite:
```
@article{Qu:2022mxj,
author = "Qu, Huilin and Li, Congqiao and Qian, Sitian",
title = "{Particle Transformer for Jet Tagging}",
eprint = "2202.03772",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
month = "2",
year = "2022"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
|
perrynelson/waxal-wolof2 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: duration
dtype: float64
- name: transcription
dtype: string
splits:
- name: test
num_bytes: 179976390.6
num_examples: 1075
download_size: 178716765
dataset_size: 179976390.6
---
# Dataset Card for "waxal-wolof2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dplutchok/llama2-train100 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 52322.04722895273
num_examples: 100
download_size: 30915
dataset_size: 52322.04722895273
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2-train100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064212 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
maveriq/desse | ---
configs:
- config_name: default
data_files:
- split: valid
path: data/valid-*
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: simple
dtype: string
- name: complex
dtype: string
splits:
- name: valid
num_bytes: 8994
num_examples: 42
- name: train
num_bytes: 3033921
num_examples: 13199
- name: test
num_bytes: 168330
num_examples: 790
download_size: 1961038
dataset_size: 3211245
---
# Dataset Card for "desse"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mayhem50/mayhem-test | ---
license: unknown
---
|
Cohere/miracl-es-corpus-22-12 | ---
annotations_creators:
- expert-generated
language:
- es
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (es) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-es-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-es-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-es-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-es-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-es-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-es-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-es-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-es-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
And then compare this query embeddings either with a vector database (recommended) or directly computing the dot product.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-es-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape: (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document in the top-3 results? We find that hit@3 is easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
lsb/c4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 828588742863
num_examples: 364868892
- name: validation
num_bytes: 825766822
num_examples: 364608
download_size: 511302989842
dataset_size: 829414509685
---
# Dataset Card for "c4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hoangphu7122002ai/translate_data_express_sql_v0 | ---
dataset_info:
features:
- name: field_choose
sequence: string
- name: info_map_field
sequence: string
- name: question
dtype: string
- name: info_choose
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 17397732
num_examples: 12545
download_size: 7326881
dataset_size: 17397732
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-staging-eval-project-95ce44b7-7684-4cf4-b396-d486367937e4-86 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
CyberHarem/kirisame_marisa_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kirisame_marisa/霧雨魔理沙/키리사메마리사 (Touhou)
This is the dataset of kirisame_marisa/霧雨魔理沙/키리사메마리사 (Touhou), containing 500 images and their tags.
The core tags of this character are `blonde_hair, hat, long_hair, witch_hat, bow, braid, yellow_eyes, single_braid, hat_bow, hair_bow, white_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 821.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 469.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1209 | 945.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 734.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1209 | 1.31 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kirisame_marisa_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kirisame_marisa_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, short_sleeves, solo, waist_apron, puffy_sleeves, smile, looking_at_viewer, broom, ribbon, dress, star_(symbol) |
| 1 | 10 |  |  |  |  |  | 1girl, black_footwear, black_headwear, black_skirt, black_vest, frills, looking_at_viewer, puffy_short_sleeves, solo, waist_apron, white_apron, white_shirt, full_body, white_socks, bangs, broom, mary_janes, buttons, grin, holding, mini-hakkero, simple_background, star_(symbol), blush |
| 2 | 14 |  |  |  |  |  | 1girl, solo, bloomers, star_(symbol), broom_riding, grin, shoes, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | short_sleeves | solo | waist_apron | puffy_sleeves | smile | looking_at_viewer | broom | ribbon | dress | star_(symbol) | black_footwear | black_headwear | black_skirt | black_vest | frills | puffy_short_sleeves | white_apron | white_shirt | full_body | white_socks | bangs | mary_janes | buttons | grin | holding | mini-hakkero | simple_background | blush | bloomers | broom_riding | shoes | open_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:-------|:--------------|:----------------|:--------|:--------------------|:--------|:---------|:--------|:----------------|:-----------------|:-----------------|:--------------|:-------------|:---------|:----------------------|:--------------|:--------------|:------------|:--------------|:--------|:-------------|:----------|:-------|:----------|:---------------|:--------------------|:--------|:-----------|:---------------|:--------|:-------------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | X | X | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 2 | 14 |  |  |  |  |  | X | | X | | | | | | | | X | | | | | | | | | | | | | | X | | | | | X | X | X | X |
|
jessthebp/yankee_candle_reviews | ---
license: mit
size_categories:
- n<1K
---
Publicly available Yankee Candle reviews with ratings and dates from Amazon, collected for a project comparing reviews to current COVID-19 case counts. |
bigcode/bigcode-pii-dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: type
dtype: string
- name: language
dtype: string
- name: fragments
list:
- name: category
dtype: string
- name: position
sequence: int64
- name: value
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 22496122
num_examples: 12099
download_size: 9152605
dataset_size: 22496122
language:
- code
task_categories:
- token-classification
extra_gated_prompt: |-
## Terms of Use for the dataset
This is an annotated dataset for Personal Identifiable Information (PII) in code. We ask that you read and agree to the following Terms of Use before using the dataset and fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSfiWKyBB8-PxOCLo-KMsLlYNyQNJEzxJw0gcUAUHT3UY848qA/viewform):
**Incomplete answers to the form will result in the request for access being ignored, with no follow-up actions by BigCode.**
1. You agree that you will not use the PII dataset for any purpose other than training or evaluating models for PII removal from datasets.
2. You agree that you will not share the PII dataset or any modified versions for whatever purpose.
3. Unless required by applicable law or agreed to in writing, the dataset is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using the dataset, and assume any risks associated with your exercise of permissions under these Terms of Use.
4. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE OR OTHER DEALINGS IN THE DATASET.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
---
# PII dataset
## Dataset description
This is an annotated dataset for Personal Identifiable Information (PII) in code. The target entities are: Names, Usernames, Emails, IP addresses, Keys, Passwords, and IDs.
The annotation process involved 1,399 crowd-workers from 35 countries with [Toloka](https://toloka.ai/).
It consists of **12,099** samples of
~50 lines of code in 31 programming languages. You can also find a PII detection model that we trained on this dataset at [bigcode-pii-model](https://huggingface.co/loubnabnl/bigcode-pii-model).
## Dataset Structure
You can load the dataset with:
```python
from datasets import load_dataset
ds = load_dataset("bigcode/bigcode-pii-dataset", use_auth_token=True)
ds
```
````
DatasetDict({
test: Dataset({
features: ['text', 'type', 'language', 'fragments', 'id'],
num_rows: 12099
})
})
````
It has the following data fields:
- text: the code snippet
- type: indicates whether the sample was pre-filtered with regexes (before annotation we selected 7100 files pre-filtered as positive for PII with regexes, and 5199 sampled randomly)
- language: programming language
- fragments: detected secrets and their positions and categories
- category: PII category
- position: start and end
- value: PII value
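Since each fragment carries its character offsets, the annotations are sufficient to redact detections directly. A minimal sketch, assuming the fragment layout shown above (the sample snippet and its offsets are invented for illustration):

```python
def mask_pii(text, fragments, token="<PII>"):
    """Replace each annotated fragment span with a mask token.

    `fragments` follows the schema above: a list of dicts with a
    `position` pair [start, end) and a `category`. Spans are applied
    right-to-left so earlier offsets stay valid as the text shrinks.
    """
    for frag in sorted(fragments, key=lambda f: f["position"][0], reverse=True):
        start, end = frag["position"]
        text = text[:start] + token + text[end:]
    return text

sample = 'email = "alice@example.com"  # contact Alice'
frags = [
    {"category": "EMAIL", "position": [9, 26], "value": "alice@example.com"},
    {"category": "NAME", "position": [39, 44], "value": "Alice"},
]
print(mask_pii(sample, frags))
```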
## Statistics
Figure below shows the distribution of programming languages in the dataset:
<img src="https://huggingface.co/datasets/bigcode/admin/resolve/main/pii_lang_dist.png" width="50%">
The following table shows the distribution of PII in all classes, as well as annotation quality after manual inspection of 300 diverse files from the dataset:
| Entity | Count | Precision | Recall |
| ---------------- | ----- | --------- | ------ |
| IP\_ADDRESS | 2526 | 85% | 97% |
| KEY | 308 | 91% | 78% |
| PASSWORD | 598 | 91% | 86% |
| ID | 1702 | 53% | 51% |
| EMAIL | 5470 | 99% | 97% |
| EMAIL\_EXAMPLE | 1407 | | |
| EMAIL\_LICENSE | 3141 | | |
| NAME | 2477 | 89% | 94% |
| NAME\_EXAMPLE | 318 | | |
| NAME\_LICENSE | 3105 | | |
| USERNAME | 780 | 74% | 86% |
| USERNAME\_EXAMPLE| 328 | | |
| USERNAME\_LICENSE| 503 | | |
| AMBIGUOUS | 287 | | |
`AMBIGUOUS` and `ID` were not used in our [NER model](https://huggingface.co/loubnabnl/bigcode-pii-model) training for PII detection.
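The precision and recall figures above follow the usual span-level definitions: a detection counts as a true positive only if it matches an annotated fragment. A minimal sketch with invented spans (not the evaluation code used for the table):

```python
def span_precision_recall(gold_spans, pred_spans):
    """Exact-match precision/recall over (start, end, category) span tuples."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)  # spans predicted exactly as annotated
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

gold = {(9, 26, "EMAIL"), (39, 44, "NAME")}
pred = {(9, 26, "EMAIL"), (0, 5, "KEY")}
print(span_precision_recall(gold, pred))  # -> (0.5, 0.5)
```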
# Dataset Creation
We selected the annotation samples from [The Stack](https://huggingface.co/datasets/bigcode/the-stack) dataset after deduplication,
a collection of code from open permissively licensed repositories on GitHub.
To increase the representation of rare PII types, such as keys and IP addresses, we pre-filtered 7100 files from a larger sample.
This pre-filtering was carried out using the [detect-secrets](https://github.com/Yelp/detect-secrets) tool with all default plugins activated,
in addition to the regular expressions to detect emails, IPv4 and IPv6 addresses. To avoid introducing bias, the remaining 5100 files were randomly sampled from the dataset without pre-filtering.
We then annotated the dataset through [Toloka Platform](https://toloka.ai/) with 1,399 crowd-workers from 35 countries. To ensure that crowd-workers received fair compensation, we established an hourly pay rate of \$7.30, taking into consideration different minimum wage rates across countries and their corresponding purchasing power.
We limited annotation eligibility to countries where the hourly pay rate of \$7.30 was equivalent to the highest minimum wage in the US (\$16.50) in terms of purchasing power parity.
# Considerations for Using the Data
When using this dataset, please be mindful of the data governance risks that come with handling personally identifiable information (PII). Despite sourcing the data from open, permissive GitHub repositories and having it annotated by fairly paid crowd-workers, it does contain sensitive details such as names, usernames, keys, emails, passwords, and IP addresses. To ensure responsible use for research within the open-source community, access to the dataset will be provided through a gated mechanism.
We expect researchers and developers working with the dataset to adhere to the highest ethical standards and employ robust data protection measures.
To assist users in effectively detecting and masking PII, we've also released a PII model trained on this dataset.
Our goal in providing access to both the dataset and the PII model is to foster the development of privacy-preserving AI technologies while minimizing potential risks related to handling PII. |
jlbaker361/cyberpunk-250-cropped | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: frame
dtype: int64
- name: title
dtype: string
splits:
- name: train
num_bytes: 209436505.0
num_examples: 985
download_size: 209402884
dataset_size: 209436505.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
edbeeching/prj_gia_dataset_atari_2B_atari_skiing_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning environment for the atari_skiing environment, with a sample from the policy atari_2B_atari_skiing_1111.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
Zombely/diachronia-ocr-test-A | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 62457501.0
num_examples: 81
download_size: 62461147
dataset_size: 62457501.0
---
# Dataset Card for "diachronia-ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
el2e10/aya-indicsentiment | ---
license: cc
task_categories:
- conversational
language:
- bn
- gu
- hi
- kn
- ml
- mr
- pa
- ta
- te
- ur
pretty_name: Aya-Indicsentiment
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: bn
path: data/bn.parquet
- split: guj
path: data/guj.parquet
- split: hn
path: data/hn.parquet
- split: kn
path: data/kn.parquet
- split: ml
path: data/ml.parquet
- split: mr
path: data/mr.parquet
- split: pa
path: data/pa.parquet
- split: ta
path: data/ta.parquet
- split: te
path: data/te.parquet
- split: ur
path: data/ur.parquet
---
### Description
This dataset is derived from an existing dataset made by AI4Bharat. We used the [IndicSentiment](https://huggingface.co/datasets/ai4bharat/IndicSentiment) dataset from AI4Bharat to create an instruction-style dataset.
IndicSentiment is a multilingual parallel dataset for sentiment analysis. It encompasses product reviews, translations into Indic languages, sentiment labels, and more.
The original dataset (IndicSentiment) was made available under the cc-0 license.
This dataset contains 10 splits with 1150+ rows each. Each split corresponds to a language.
### Template
The following template was used for converting the original dataset:
```
#Template 1
prompt:
Translate from English to {target_language}:
{ENGLISH_REVIEW}
completion:
{INDIC_REVIEW}
```
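Filling such a template amounts to plain string formatting. A minimal sketch (the variable names here are illustrative, not taken from the original generation script):

```python
template = "Translate from English to {target_language}:\n{english_review}"

def make_prompt(target_language, english_review):
    # Substitute a concrete language and review text into the template
    return template.format(target_language=target_language,
                           english_review=english_review)
```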
```
#Template 2
prompt:
Translate this sentence to {target_language}:
{ENGLISH_REVIEW}
completion:
{INDIC_REVIEW}
```
```
#Template 3
prompt:
What's the {target_language} translation of this sentence:
{ENGLISH_REVIEW}
completion:
{INDIC_REVIEW}
```
```
#Template 4
prompt:
Can you translate this text to {target_language}:
{ENGLISH_REVIEW}
completion:
{INDIC_REVIEW}
``` |
arbml/aya_ar | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 17570210.773853056
num_examples: 13960
- name: test
num_bytes: 254601.14285714287
num_examples: 250
download_size: 3697679
dataset_size: 17824811.916710198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
azmisahin/dataset | ---
license: mit
---
|
mehr32/Persian_English_translation | ---
license: gpl-3.0
language:
- fa
- en
size_categories:
- 1M<n<10M
---
English-Persian translation dataset with about two million translation lines, optimized for training a LibreTranslate model:
https://github.com/LibreTranslate/Locomotive |
mediabiasgroup/BAT | ---
license: cc-by-nc-nd-4.0
---
Dataset from the paper https://www.sciencedirect.com/science/article/pii/S246869642300023X; combining articles with a bias rating with their respective tweets and reaction to these tweets.
The Twitter data cannot be published; please contact us with any questions. |
kgr123/quality_counter_3000_4_simple | ---
dataset_info:
features:
- name: context
dtype: string
- name: word
dtype: string
- name: claim
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 16638447
num_examples: 1929
- name: train
num_bytes: 16476589
num_examples: 1935
- name: validation
num_bytes: 16810922
num_examples: 1941
download_size: 11148725
dataset_size: 49925958
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Seanxh/twitter_dataset_1713203391 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 123776
num_examples: 290
download_size: 47072
dataset_size: 123776
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
princeton-nlp/QuRatedPajama-260B | ---
pretty_name: QuRatedPajama-260B
---
## QuRatedPajama
**Paper:** [QuRating: Selecting High-Quality Data for Training Language Models](https://arxiv.org/pdf/2402.09739.pdf)
A 260B token subset of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B), annotated by [princeton-nlp/QuRater-1.3B](https://huggingface.co/princeton-nlp/QuRater-1.3B/tree/main) with sequence-level quality ratings across 4 criteria:
- **Educational Value** - e.g. the text includes clear explanations, step-by-step reasoning, or questions and answers
- **Facts & Trivia** - how much factual and trivia knowledge the text contains, where specific facts and obscure trivia are preferred over more common knowledge
- **Writing Style** - how polished and good is the writing style in the text
- **Required Expertise**: - how much required expertise and prerequisite knowledge is necessary to understand the text
In a pre-processing step, we split documents into chunks of exactly 1024 tokens. We provide tokenization with the Llama-2 tokenizer in the `input_ids` column.
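The fixed-length chunking described above can be sketched as follows. This is an illustration, not the exact pre-processing script; in particular, dropping the trailing remainder shorter than 1024 tokens is an assumption:

```python
def chunk_tokens(input_ids, chunk_size=1024):
    """Split a token sequence into non-overlapping chunks of exactly
    chunk_size tokens; any trailing remainder is dropped (assumption)."""
    n_full = len(input_ids) // chunk_size
    return [input_ids[i * chunk_size:(i + 1) * chunk_size]
            for i in range(n_full)]
```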
**Guidance on Responsible Use:**
In the paper, we document various types of bias that are present in the quality ratings (biases related to domains, topics, social roles, regions and languages - see Section 6 of the paper).
Hence, be aware that data selection with QuRating could have unintended and harmful effects on the language model that is being trained. We strongly recommend a comprehensive evaluation of the language model for these and other types of bias, particularly before real-world deployment. We hope that releasing the data/models can facilitate future research aimed at uncovering and mitigating such biases.
**Citation:**
```
@article{wettig2024qurating,
title={QuRating: Selecting High-Quality Data for Training Language Models},
  author={Alexander Wettig and Aatmik Gupta and Saumya Malik and Danqi Chen},
journal={arXiv preprint 2402.09739},
year={2024}
}
``` |
ibm/otter_dude | ---
license: mit
---
# Otter DUDe Dataset Card
Otter DUDe includes 1,452,568 instances of drug-target interactions.
## Dataset details
#### DUDe
DUDe comprises a collection of 22,886 active compounds and their corresponding affinities towards 102 targets. For our study, we utilized a preprocessed version of the DUDe, which includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Original dataset:**
- Citation: Samuel Sledzieski, Rohit Singh, Lenore Cowen, and Bonnie Berger. Adapting protein language models for rapid dti prediction. bioRxiv, pages 2022–11, 2022
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the dataset:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
**Models trained on Otter DUDe**
- [ibm/otter_dude_classifier](https://huggingface.co/ibm/otter_dude_classifier)
- [ibm/otter_dude_distmult](https://huggingface.co/ibm/otter_dude_distmult)
- [ibm/otter_dude_transe](https://huggingface.co/ibm/otter_dude_transe) |
rathi2023/owlvitnhood | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: string
- name: objects
struct:
- name: category_id
sequence: int64
- name: bbox
sequence:
sequence: float64
splits:
- name: train
num_bytes: 2627714.0
num_examples: 41
download_size: 2630412
dataset_size: 2627714.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
one-sec-cv12/chunk_107 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 16759895760.125
num_examples: 174495
download_size: 14947281130
dataset_size: 16759895760.125
---
# Dataset Card for "chunk_107"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jdowni80/ABL1_llamology_embeddings | ---
dataset_info:
features:
- name: title
dtype: string
- name: page
dtype: float64
- name: content
dtype: string
- name: type
dtype: string
- name: id
sequence: float32
- name: text
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 2350461
num_examples: 217
download_size: 2631913
dataset_size: 2350461
---
# Dataset Card for "ABL1_llamology_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JAYASWAROOP/mine_laws | ---
task_categories:
- text-classification
language:
- en
--- |
ivanbar/roasted-coffee-defects | ---
license: mit
task_categories:
- image-classification
tags:
- coffee
pretty_name: Roasted coffee defects
size_categories:
- 1K<n<10K
---
# This dataset contains images of roasted beans exhibiting a total of 5 different defects
The classes are:
* Normal beans
* "Quaker" beans
* Bean fragments/broken beans
* Burnt beans
* Underroasted beans
* Insect/mould damaged beans
The images are annotated with the defect class as well as the origin, species and processing method for each bean.
The counts for each defect, origin, species and processing methods are shown below:

Note that the insect and mould classes are merged, as their visual features and impact on the finished product are quite similar.
This was also done to prevent extremely underrepresented classes in the dataset. |
Mystearica/Misty | ---
license: unknown
---
|
krishan-CSE/HatEval_Relabled_with_Author_Features | ---
license: apache-2.0
---
|
Deojoandco/ah100 | ---
dataset_info:
features:
- name: url
dtype: string
- name: id
dtype: string
- name: num_comments
dtype: int64
- name: name
dtype: string
- name: title
dtype: string
- name: body
dtype: string
- name: score
dtype: int64
- name: upvote_ratio
dtype: float64
- name: distinguished
dtype: 'null'
- name: over_18
dtype: bool
- name: created_utc
dtype: float64
- name: comments
list:
- name: body
dtype: string
- name: created_utc
dtype: float64
- name: distinguished
dtype: 'null'
- name: id
dtype: string
- name: permalink
dtype: string
- name: score
dtype: int64
- name: best_num_comments
dtype: int64
splits:
- name: train
num_bytes: 91748
num_examples: 29
download_size: 75134
dataset_size: 91748
---
# Dataset Card for "ah100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
odreblakkj/cabecaamarela | ---
license: openrail
---
|
JRHuy/vivos-fleurs | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 3812520903.0
num_examples: 14654
- name: test
num_bytes: 778309245.448
num_examples: 1617
- name: validation
num_bytes: 275255625.0
num_examples: 361
download_size: 4811668493
dataset_size: 4866085773.448
---
# Dataset Card for "vivos-fleurs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-emotion-21f117d5-11035480 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: jsoutherland/distilbert-base-uncased-finetuned-emotion
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: jsoutherland/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsoutherland](https://huggingface.co/jsoutherland) for evaluating this model. |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-14000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 657122
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
enobyte/wiki | ---
license: apache-2.0
---
|
gryffindor-ISWS/prompts_wiki_fictional_data_without_image | ---
license: gpl-3.0
---
|
inuwamobarak/random-files | ---
license: openrail
---
Crouse, M., Abdelaziz, I., Basu, K., Dan, S., Kumaravel, S., Fokoue, A., Kapanipathi, P., & Lastras, L. (2023). Formally Specifying the High-Level Behavior of LLM-Based Agents. arXiv:2310.08535 |
raowaqas123/hbl_v5 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 46790
num_examples: 194
download_size: 16811
dataset_size: 46790
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855037 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: jpcorb20/pegasus-large-reddit_tifu-samsum-512
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-512
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
gishnum/worldpopulation_neo4j_graph_dump | ---
license: gpl
---
|
mvasiliniuc/iva-kotlin-codeint-clean-train | ---
annotations_creators:
- crowdsourced
license: other
language_creators:
- crowdsourced
language:
- code
task_categories:
- text-generation
tags:
- code, kotlin, native Android development, curated, training
size_categories:
- 100K<n<1M
source_datasets: []
pretty_name: iva-kotlin-codeint-clean
task_ids:
- language-modeling
---
# IVA Kotlin GitHub Code Dataset
## Dataset Description
This is the curated train split of IVA Kotlin dataset extracted from GitHub.
It contains curated Kotlin files gathered with the purpose to train a code generation model.
The dataset consists of 383380 Kotlin code files from GitHub.
[Here is the unsliced curated dataset](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean) and [here is the raw dataset](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint).
### How to use it
To download the full dataset:
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean-train', split='train')
```
## Data Structure
### Data Fields
|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in GitHub repository|
|copies|string|number of occurrences in dataset|
|content|string|content of source file|
|size|string|size of the source file in bytes|
|license|string|license of GitHub repository|
|hash|string|Hash of content field.|
|line_mean|number|Mean line length of the content.|
|line_max|number|Max line length of the content.|
|alpha_frac|number|Fraction between mean and max line length of content.|
|ratio|number|Character/token ratio of the file with tokenizer.|
|autogenerated|boolean|True if the content is autogenerated by looking for keywords in the first few lines of the file.|
|config_or_test|boolean|True if the content is a configuration file or a unit test.|
|has_no_keywords|boolean|True if a file has none of the keywords for Kotlin Programming Language.|
|has_few_assignments|boolean|True if file uses symbol '=' less than `minimum` times.|
### Instance
```json
{
"repo_name":"oboenikui/UnivCoopFeliCaReader",
"path":"app/src/main/java/com/oboenikui/campusfelica/ScannerActivity.kt",
"copies":"1",
"size":"5635",
"content":"....",
"license":"apache-2.0",
"hash":"e88cfd99346cbef640fc540aac3bf20b",
"line_mean":37.8620689655,
"line_max":199,
"alpha_frac":0.5724933452,
"ratio":5.0222816399,
"autogenerated":false,
"config_or_test":false,
"has_no_keywords":false,
"has_few_assignments":false
}
```
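The boolean heuristic flags shown above can be combined into a simple row filter, e.g. for use with the `datasets` library's `.filter`. A sketch assuming the field names from the instance:

```python
def keep_file(row):
    # Keep only files that passed all curation heuristics
    return not (row["autogenerated"]
                or row["config_or_test"]
                or row["has_no_keywords"]
                or row["has_few_assignments"])
```

Applied as `dataset.filter(keep_file)`.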
## Languages
The dataset contains only Kotlin files.
```json
{
"Kotlin": [".kt"]
}
```
## Licenses
Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.
```json
{
"agpl-3.0":3209,
"apache-2.0":90782,
"artistic-2.0":130,
"bsd-2-clause":380,
"bsd-3-clause":3584,
"cc0-1.0":155,
"epl-1.0":792,
"gpl-2.0":4432,
"gpl-3.0":19816,
"isc":345,
"lgpl-2.1":118,
"lgpl-3.0":2689,
"mit":31470,
"mpl-2.0":1444,
"unlicense":654
}
```
## Dataset Statistics
```json
{
"Total size": "~207 MB",
"Number of files": 160000,
"Number of files under 500 bytes": 2957,
  "Average file size in bytes": 5199
}
```
## Curation Process
See [the unsliced curated dataset](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean) for more details.
## Data Splits
The dataset contains only a train split. For the validation and unsliced versions, please check the following links:
* Clean Version Unsliced: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid
# Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
|
hoangdeeptry/cntt2-audio-dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 5857690017.034657
num_examples: 2217
- name: test
num_bytes: 653656343.4683442
num_examples: 247
download_size: 6225869601
dataset_size: 6511346360.503
---
# Dataset Card for "cntt2-audio-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/88615e7a | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 178
num_examples: 10
download_size: 1340
dataset_size: 178
---
# Dataset Card for "88615e7a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amaye15/invoices | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype:
class_label:
names:
'0': Barcode
'1': Invoice
'2': Object
'3': Receipt
'4': Non-Object
splits:
- name: train
num_bytes: 2413172028.613804
num_examples: 13463
- name: test
num_bytes: 620463009.8081964
num_examples: 3366
download_size: 3035547690
dataset_size: 3033635038.4220004
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
gagan3012/NewArOCRDatasetv3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 804663442.224
num_examples: 45856
- name: validation
num_bytes: 14180587.0
num_examples: 425
- name: test
num_bytes: 13690842.0
num_examples: 425
download_size: 727818407
dataset_size: 832534871.224
---
# Dataset Card for "NewArOCRDatasetv3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chenxwh/gen-storycloze | ---
license: cc-by-nc-4.0
---
|
utsabbarmanju/jeebonananda_das_bangla_poems | ---
license: apache-2.0
---
|
warleagle/pco_audio_data_v2 | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 195374660.0
num_examples: 6
download_size: 195380376
dataset_size: 195374660.0
---
# Dataset Card for "pco_audio_data_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jeet2000/MyModel | ---
license: unknown
---
|
zhan1993/task_positive_negative_expert_mmlu_oracle | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: task_eval_on
dtype: string
- name: positive_expert_name
dtype: string
- name: negative_expert_name
dtype: string
splits:
- name: train
num_bytes: 6185
num_examples: 78
download_size: 4193
dataset_size: 6185
---
# Dataset Card for "task_positive_negative_expert_mmlu_oracle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-electrical_engineering-neg-answer | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_answer
dtype: string
splits:
- name: test
num_bytes: 28832
num_examples: 145
download_size: 20564
dataset_size: 28832
---
# Dataset Card for "mmlu-electrical_engineering-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |