| datasetId | card |
|---|---|
AdapterOcean/oasst_top1_standardized_cluster_1 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 47634962
num_examples: 4950
download_size: 14012416
dataset_size: 47634962
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oasst_top1_standardized_cluster_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlekseyKorshuk/chai-chatgpt-fullserved-chatml | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: do_train
dtype: bool
- name: role
dtype: string
splits:
- name: train
num_bytes: 442121603
num_examples: 126140
download_size: 242078847
dataset_size: 442121603
---
# Dataset Card for "chai-chatgpt-fullserved-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
VIshalGautam/nba_player_scores | ---
license: mit
---
|
Marcelpribu/stabledifusion | ---
license: other
---
|
PocketDoc/Choose-Your-Story-Long-Text-Adventures | ---
tags:
- not-for-all-audiences
task_categories:
- conversational
language:
- en
pretty_name: Choose Your Story Novel Format Text Adventures
---
This is the 'CYS' text adventure dataset converted to a chat format with system messages. The system messages were randomly constructed from a table of phrases and templates. The original data can be found in the .7z archive.
**Credits:**
Thank you to VE Forbryderne from KoboldAI for scraping the dataset. |
zolak/twitter_dataset_80_1713158413 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 310353
num_examples: 775
download_size: 157822
dataset_size: 310353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DeepFoldProtein/foldseek_combined_processed_BPE_512 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: special_tokens_mask
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3213381648
num_examples: 447297
download_size: 979931272
dataset_size: 3213381648
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/sasaki_chie_theidolmastercinderellagirlsu149 | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Sasaki Chie
This is the dataset of Sasaki Chie, containing 200 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 445 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 445 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 445 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 445 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
trajanson/ralph_lauren_purple_label | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 52581473.788
num_examples: 1259
download_size: 52561557
dataset_size: 52581473.788
---
# Dataset Card for "ralph_lauren_purple_label"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Seanxh/twitter_dataset_1713210865 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 166237
num_examples: 390
download_size: 60186
dataset_size: 166237
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vishnun0027/imdb_dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
splits:
- name: train
num_bytes: 66083508
num_examples: 50000
download_size: 41449486
dataset_size: 66083508
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vishaltiwari2019/textGen-databricks-dolly | ---
license: mit
task_categories:
- text-generation
language:
- en
--- |
aitamilnadu/thirukkural_instruct | ---
license: apache-2.0
task_categories:
- text-generation
- question-answering
- conversational
language:
- ta
size_categories:
- 1K<n<10K
language_creators:
- expert-generated
- machine-generated
multilinguality:
- monolingual
pretty_name: Thirukkural_QA
---
# Summary
`thirukkural_QA` is an open-source dataset of instruct-style records generated by converting publicly available data on Thirukkural and its meanings.
This was created as part of [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) by Cohere For AI.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
- Question Answering
Languages: Tamil Version: 1.0
# Dataset Overview
`thirukkural_QA` is a corpus of 3,990 records generated by converting existing Thirukkural verses and their meanings into instruction style. This dataset can be used for the following tasks:
- Given a kural, generate its meaning.
- Given the meaning of a kural, generate the original kural.
- Given the beginning of a kural, generate the original kural along with its meaning.
# Intended Uses
While immediately valuable for instruction fine-tuning large language models, as a corpus of instruction prompts this dataset also presents a valuable opportunity for synthetic data generation. For example, prompt-completions could be submitted as few-shot examples to a large open language model to generate new kurals in a similar style.
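A few-shot prompt for synthetic generation could be assembled directly from the records. The sketch below assumes `prompt`/`completion` field names, which are not confirmed by this card; adjust to the actual schema:
```python
from datasets import load_dataset

# Sketch: build a few-shot prompt from the first three records.
# The field names `prompt` and `completion` are assumptions.
ds = load_dataset("aitamilnadu/thirukkural_QA", split="train")
few_shot = "\n\n".join(f"{r['prompt']}\n{r['completion']}" for r in ds.select(range(3)))
print(few_shot)
```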
# Dataset
## Load with Datasets
To load this dataset with Datasets, first install the library with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset('aitamilnadu/thirukkural_QA')
```
## Purpose of Collection
Tamil is a low-resource language (in spite of having a rich literature), and to the best of my knowledge there are no instruct-style datasets for it.
This was created as a part of [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI to make sure Tamil is well represented in the space of AI/ML.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
## Sources
- **[Thirukkural.com](http://www.thirukkural.com/)**: The data from this website is scraped and available at [Thirukkural-Tamil-Dataset](https://github.com/vijayanandrp/Thirukkural-Tamil-Dataset).
- The scraped data was carefully analysed to make sure there are no missing words or spelling mistakes and that the data is in Tamil only.
- Next, some pre-processing was performed to extract the kural, adhigaram, kural number and the different meanings separately from the scraped data.
- Finally, the scraped data was converted into instruct-style prompts and completions.
## Templates
For the creation of instruct-style prompts and completions from the scraped data, the following three templates were used:
Template Id: 1. Given a kural, generate its meaning.
```
Prompt:
{Adigaram_Name} என்னும் அதிகாரத்தில் வரும்,
{Complete_Kural} என்ற குறளின் பொருளை விளக்குக.
Completion:
கொடுக்கப்பட்டுள்ள குறளின் பொருள்: {Kural_Meaning} என்பதாகும். இந்த குறள் {Adigaram_Name} என்னும் அதிகாரத்தில் வரும் {Kural_Number}ஆம் குரள் ஆகும்.
திருக்குறள் உலக புகழ் பெற்ற பொது மறை நூல். இந்நூலை இயற்றியவர் திருவள்ளுவர்.
திருக்குறளில் 133 அதிகாரமும், அதிகாரத்துக்கு 10 குறளும் மொத்தம் 1330 குறளும் அடங்கியுள்ளது. ஒவ்வொரு குறளும் இரண்டு அடிகளையும் ஏழு சீரும் கொண்ட வெண்பாவாகும்.
இந்நூலில் பெரும் பிரிவு பால் எனவும், சிறு பிரிவு இயல் எனவும், அதனினும் சிறியது அதிகாரம் என்று வகுக்க பெற்றுள்ளது.
அறத்துப்பாலில் 38 அதிகாரங்களையும், பொருட்பாலில் 70 அதிகாரங்களையும் மற்றும் காமத்துப்பாலில் 25 அதிகாரங்களையும் கொண்டுள்ளது.
இந்நூல் அறம், பொருள், இன்பம்(காமம்) என்னும் முப்பாலையும் அழகாக எடுத்துரைக்கிறது. வாழ்கையின் அனைத்து பகுதிகளையும் எடுத்துரைக்கும் ஒரு சிறந்த வாழ்வியல் நூலாகும்.
சாதி, மதம், மொழி, நாடு என்று வேறுபாடு இல்லாமல் மக்கள் அனைவருக்கும் பொருந்துவதாக உள்ளதால் உலக பொது மறை என்று அழைக்கப்படுகிறது.
தெய்வநூல், பொய்யாமொழி, தமிழ் மறை, முப்பால் என்று வேறு பெயர்களும் திருக்குறளுக்கு உண்டு.
```
Template Id: 2. Given the meaning of a kural, generate the original kural.
```
Prompt:
{Kural_Meaning} என்னும் பொருளுக்கு ஏற்ற {Adigram_Name} என்னும் அதிகாரத்தில் வரும் {Kural_Number}ஆம் குறளைத் தருக.
Completion:
'{Complete_Kural}' என்னும் குறளே கொடுக்கப்பட்டுள்ள பொருளுக்குப் பொருத்தமான குறளாகும்.
கொடுக்கப்பட்டுள்ள பொருள் மு. வரதராசனார் அவர்கள் எழுதிய உரையிலிருந்து எடுக்கப்பட்டது. திருக்குறளுக்கு வேறு சில அறிஞர்களும் உரை எழுதியுள்ளனர்.
```
Template Id: 3. Given the beginning of a kural, generate the original kural along with its meaning.
```
Prompt:
'{Kural_Starting}' எனத் தொடங்கும் குறளையும் அதன் பொருளையும் தருக.
Completion:
'{Complete_Kural}' என்னும் குறளே கொடுக்கப்பட்டுள்ள வார்த்தைகளிலிருந்து தொடங்கும் குறளாகும்.
இதன் பொருள்: {Kural_Meaning}
```
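As a concrete illustration, each record can be produced by plain string formatting over these templates. Below is a minimal sketch using Template 3, with a hypothetical placeholder value:
```python
# Minimal sketch: instantiating Template 3 with str.format.
# The placeholder value below is a hypothetical example.
prompt_template = "'{Kural_Starting}' எனத் தொடங்கும் குறளையும் அதன் பொருளையும் தருக."
prompt = prompt_template.format(Kural_Starting="அகர முதல")
print(prompt)
```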
## Personal or Sensitive Data
This dataset contains public information. To my knowledge, there are no private person’s personal identifiers or sensitive information.
## Language
Tamil
# Known Limitations
- The meanings used in the prompts/completions are chosen randomly based on the availability of complete sentences and this may reflect some bias by ignoring other meanings written by other scholars.
# Contributors
[AbinayaM02](https://github.com/AbinayaM02) |
sayakpaul/drawbench-sdxl | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Image
dtype: image
- name: Upsampled_Prompt
dtype: string
- name: Image_With_Upsampled_Prompt
dtype: image
- name: model_name
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 625589974.0
num_examples: 200
download_size: 625589110
dataset_size: 625589974.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "drawbench-sdxl"
The dataset was generated using https://github.com/sayakpaul/caption-upsampling. Refer to the repository for more details. |
iamnguyen/ds_by_sys_prompt_9 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 800642480.9705975
num_examples: 469425
download_size: 515492123
dataset_size: 800642480.9705975
---
# Dataset Card for "ds_by_sys_prompt_9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
enriched_web_nlg | ---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- de
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-web-nlg
task_categories:
- tabular-to-text
task_ids:
- rdf-to-text
paperswithcode_id: null
pretty_name: Enriched WebNLG
dataset_info:
- config_name: en
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: template
dtype: string
- name: sorted_triple_sets
sequence: string
- name: lexicalization
dtype: string
splits:
- name: train
num_bytes: 14665155
num_examples: 6940
- name: dev
num_bytes: 1843787
num_examples: 872
- name: test
num_bytes: 3931381
num_examples: 1862
download_size: 44284508
dataset_size: 20440323
- config_name: de
features:
- name: category
dtype: string
- name: size
dtype: int32
- name: eid
dtype: string
- name: original_triple_sets
sequence:
- name: otriple_set
sequence: string
- name: modified_triple_sets
sequence:
- name: mtriple_set
sequence: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: lex
sequence:
- name: comment
dtype: string
- name: lid
dtype: string
- name: text
dtype: string
- name: template
dtype: string
- name: sorted_triple_sets
sequence: string
splits:
- name: train
num_bytes: 9748193
num_examples: 6940
- name: dev
num_bytes: 1238609
num_examples: 872
download_size: 44284508
dataset_size: 10986802
config_names:
- de
- en
---
# Dataset Card for Enriched WebNLG
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WebNLG challenge website](https://webnlg-challenge.loria.fr/)
- **Repository:** [Enriched WebNLG Github repository](https://github.com/ThiagoCF05/webnlg)
- **Paper:** [Enriching the WebNLG corpus](https://www.aclweb.org/anthology/W18-6521/)
### Dataset Summary
The WebNLG challenge consists of mapping data to text. The training data consists of data/text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b). It is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, like other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.
### Supported Tasks and Leaderboards
The dataset supports an `other-rdf-to-text` task, which requires a model to take a set of RDF (Resource Description Framework) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural language sentence expressing the information contained in the triples.
### Languages
The dataset is presented in two versions: English (config `en`) and German (config `de`).
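A specific configuration can be selected at load time; a minimal sketch with the `datasets` library:
```python
from datasets import load_dataset

# Load the English configuration; pass "de" for the German version.
web_nlg_en = load_dataset("enriched_web_nlg", "en")
print(web_nlg_en["train"][0]["category"])
```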
## Dataset Structure
### Data Instances
A typical example contains the original RDF triples in the set, a modified version which was presented to crowd workers, and a set of possible verbalizations for this set of triples:
```
{ 'category': 'Politician',
'eid': 'Id10',
'lex': {'comment': ['good', 'good', 'good'],
'lid': ['Id1', 'Id2', 'Id3'],
'text': ['World War II had Chiang Kai-shek as a commander and United States Army soldier Abner W. Sibal.',
'Abner W. Sibal served in the United States Army during the Second World War and during that war Chiang Kai-shek was one of the commanders.',
'Abner W. Sibal, served in the United States Army and fought in World War II, one of the commanders of which, was Chiang Kai-shek.']},
'modified_triple_sets': {'mtriple_set': [['Abner_W._Sibal | battle | World_War_II',
'World_War_II | commander | Chiang_Kai-shek',
'Abner_W._Sibal | militaryBranch | United_States_Army']]},
'original_triple_sets': {'otriple_set': [['Abner_W._Sibal | battles | World_War_II', 'World_War_II | commander | Chiang_Kai-shek', 'Abner_W._Sibal | branch | United_States_Army'],
['Abner_W._Sibal | militaryBranch | United_States_Army',
'Abner_W._Sibal | battles | World_War_II',
'World_War_II | commander | Chiang_Kai-shek']]},
'shape': '(X (X) (X (X)))',
'shape_type': 'mixed',
'size': 3}
```
### Data Fields
The following fields can be found in the instances:
- `category`: the category of the DBpedia entities present in the RDF triples.
- `eid`: an example ID, only unique per split per category.
- `size`: number of RDF triples in the set.
- `shape`: (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. `shape`
is a string representation of the tree with nested parentheses where X is a node (
see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format))
- `shape_type`: (for v3 only) is a type of the tree shape, which can be: `chain` (the object of one triple is the
subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
- `2017_test_category`: (for `webnlg_challenge_2017`) tells whether the set of RDF triples was present in the training
set or not.
- `lex`: the lexicalizations, with:
- `text`: the text to be predicted.
- `lid`: a lexicalization ID, unique per example.
- `comment`: the lexicalizations were rated by crowd workers as either `good` or `bad`
### Data Splits
The `en` version has `train`, `test` and `dev` splits; the `de` version, only `train` and `dev`.
## Dataset Creation
### Curation Rationale
Natural Language Generation (NLG) is the process of automatically converting non-linguistic data into a linguistic output format (Reiter and Dale, 2000; Gatt and Krahmer, 2018). Recently, the field has seen an increase in the number of available focused data resources, such as the E2E (Novikova et al., 2017), ROTOWIRE (Wiseman et al., 2017) and WebNLG (Gardent et al., 2017a,b) corpora. Although these recent releases are highly valuable resources for the NLG community in general, all of them were designed to work with end-to-end NLG models. Hence, they consist of a collection of parallel raw representations and their corresponding textual realizations. No intermediate representations are available that researchers can straightforwardly use to develop or evaluate popular tasks in NLG pipelines (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation, Referring Expression Generation, among others. Moreover, these new corpora, like many other resources in Computational Linguistics more generally, are only available in English, limiting the development of NLG applications to other languages.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses the `cc-by-sa-3.0` and `gfdl-1.1`
licenses.
### Citation Information
- If you use the Enriched WebNLG corpus, cite:
```
@InProceedings{ferreiraetal2018,
author = "Castro Ferreira, Thiago
and Moussallem, Diego
and Wubben, Sander
and Krahmer, Emiel",
title = "Enriching the WebNLG corpus",
booktitle = "Proceedings of the 11th International Conference on Natural Language Generation",
year = "2018",
series = {INLG'18},
publisher = "Association for Computational Linguistics",
address = "Tilburg, The Netherlands",
}
@inproceedings{web_nlg,
author = {Claire Gardent and
Anastasia Shimorina and
Shashi Narayan and
Laura Perez{-}Beltrachini},
editor = {Regina Barzilay and
Min{-}Yen Kan},
title = {Creating Training Corpora for {NLG} Micro-Planners},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational
Linguistics, {ACL} 2017, Vancouver, Canada, July 30 - August 4, Volume
1: Long Papers},
pages = {179--188},
publisher = {Association for Computational Linguistics},
year = {2017},
url = {https://doi.org/10.18653/v1/P17-1017},
doi = {10.18653/v1/P17-1017}
}
```
### Contributions
Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset. |
BangumiBase/watashinoyuriwaoshigotodesu | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Watashi No Yuri Wa Oshigoto Desu!
This is the image base of the bangumi Watashi no Yuri wa Oshigoto Desu!. We detected 31 characters and 3,255 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may actually contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 221 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 10 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 15 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 12 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 12 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 10 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 23 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 14 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 26 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 22 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 416 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 142 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 31 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 5 | [Download](13/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 14 | 420 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 63 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 23 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 970 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 87 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 364 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 60 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 21 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 36 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 11 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 12 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 13 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 10 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 24 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 29 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 13 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 140 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
sunhaozhepy/ag_news_llm_keywords | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': World
'1': Sports
'2': Business
'3': Sci/Tech
- name: keywords
dtype: string
splits:
- name: train
num_bytes: 35165730
num_examples: 120000
- name: test
num_bytes: 2218894
num_examples: 7600
download_size: 22071064
dataset_size: 37384624
---
# Dataset Card for "ag_news_keywords"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/4b74fcab | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1338
dataset_size: 186
---
# Dataset Card for "4b74fcab"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xinrongzhang2022/InfiniteBench | ---
configs:
- config_name: default
data_files:
- split: passkey
path: "passkey.jsonl"
- split: kv_retrieval
path: "kv_retrieval.jsonl"
- split: number_string
path: "number_string.jsonl"
- split: code_run
path: "code_run.jsonl"
- split: code_debug
path: "code_debug.jsonl"
- split: math_find
path: "math_find.jsonl"
- split: math_calc
path: "math_calc.jsonl"
- split: longdialogue_qa_eng
path: "longdialogue_qa_eng.jsonl"
- split: longbook_qa_eng
path: "longbook_qa_eng.jsonl"
- split: longbook_sum_eng
path: "longbook_sum_eng.jsonl"
- split: longbook_choice_eng
path: "longbook_choice_eng.jsonl"
- split: longbook_qa_chn
path: "longbook_qa_chn.jsonl"
license: apache-2.0
---
|
ArtifactAI/arxiv_s2orc_parsed | ---
dataset_info:
features:
- name: title
sequence: string
- name: author
sequence: string
- name: authoraffiliation
sequence: string
- name: venue
sequence: string
- name: abstract
dtype: string
- name: doi
dtype: string
- name: pdfurls
sequence: string
- name: corpusid
dtype: int64
- name: arxivid
dtype: string
- name: pdfsha
dtype: string
- name: text
dtype: string
- name: github_urls
sequence: string
splits:
- name: train
num_bytes: 89132091867
num_examples: 1671614
download_size: 35993359504
dataset_size: 89132091867
task_categories:
- text-generation
- zero-shot-classification
language:
- en
pretty_name: arxiv_s2orc_parsed
size_categories:
- 10B<n<100B
---
# Dataset Card for "ArtifactAI/arxiv_s2orc_parsed"
## Dataset Description
https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
### Dataset Summary
ArtifactAI/arxiv_s2orc_parsed is a subset of the [AllenAI S2ORC dataset](https://github.com/allenai/s2orc), a general-purpose corpus for NLP and text-mining research over scientific papers.
The dataset is filtered strictly for arXiv papers, including the full text of each paper. GitHub links have been extracted from each paper to aid in the development of [ArtifactAI/arxiv_python_research_code](https://huggingface.co/datasets/ArtifactAI/arxiv_python_research_code).
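A regular-expression pass along these lines (a sketch, not necessarily the exact extraction code used) reproduces the idea behind the `github_urls` field:
```python
import re

# Sketch: collect the unique GitHub repository links in a paper's full text.
GITHUB_URL = re.compile(r"https?://github\.com/[\w.\-]+/[\w.\-]+")

def extract_github_urls(text: str) -> list[str]:
    return sorted(set(GITHUB_URL.findall(text)))
```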
### How to use it
```python
from datasets import load_dataset
ds = load_dataset("ArtifactAI/arxiv_s2orc_parsed", split="train")
# dataset streaming (will only download the data as needed)
ds = load_dataset("ArtifactAI/arxiv_s2orc_parsed", streaming=True, split="train")
```
## Dataset Structure
### Data Instances
Each data instance corresponds to one file. The content of the file is in the `text` feature, and other features provide some metadata.
### Data Fields
- `title` (sequence): list of titles.
- `author` (sequence): list of authors.
- `authoraffiliation` (sequence): list of institution affiliations for each author.
- `venue` (sequence): paper publication venue(s).
- `abstract` (string): paper abstract.
- `doi` (string): paper DOI.
- `pdfurls` (sequence): URL links to the paper.
- `corpusid` (int): corpus ID as defined by S2ORC.
- `arxivid` (string): arXiv paper ID.
- `pdfsha` (string): unique PDF hash.
- `text` (string): full text of the arXiv paper.
- `github_urls` (sequence): list of GitHub URLs referenced within the text.
### Data Splits
The dataset has no additional splits; all data is loaded as the `train` split by default.
## Additional Information
### Dataset Curators
Matthew Kenney, Artifact AI, matt@artifactai.com
### Citation Information
```
@misc{arxiv_s2orc_parsed,
title={arxiv_s2orc_parsed},
author={Matthew Kenney},
year={2023}
}
``` |
monsoonery/common_voice_13_0_nl_EVAL_pseudo_labelled | ---
dataset_info:
config_name: nl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: whisper_transcript
sequence: int64
splits:
- name: validation
num_bytes: 355594037.37
num_examples: 10930
download_size: 352312610
dataset_size: 355594037.37
configs:
- config_name: nl
data_files:
- split: validation
path: nl/validation-*
---
|
relbert/analogy_questions | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: Analogy Question
---
# Dataset Card for "relbert/analogy_questions"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/2021.acl-long.280/](https://aclanthology.org/2021.acl-long.280/)
- **Dataset:** Analogy Questions
### Dataset Summary
This dataset contains 5 different word analogy questions used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/).
- original analogy questions
| name | Size (valid/test) | Num of choice | Num of relation group | Original Reference |
|-----------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:|
| `u2` | 24/228 | 5,4,3 | 9 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `u4` | 48/432 | 5,4,3 | 5 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `google` | 50/500 | 4 | 2 | [Mikolov et al., (2013)](https://www.aclweb.org/anthology/N13-1090.pdf) |
| `bats` | 199/1799 | 4 | 3 | [Gladkova et al., (2016)](https://www.aclweb.org/anthology/N18-2017.pdf) |
- extra analogy questions
| name | Size (valid/test) | Num of choice (valid/test) | Num of relation group (valid/test) | Original Reference |
|:------------------------------------|:--------------------|:-----------------------------|:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
| `semeval2012_relational_similarity` | 79/- | 3/- | 79/- | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) |
| `t_rex_relational_similarity` | 496/183 | 74/48 | 60/19 | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) |
| `conceptnet_relational_similarity` | 1112/1192 | 19/17 | 18/16 | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) |
| `nell_relational_similarity` | 400/600 | 5/7 | 4/6 | [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) |
| `scan` | 178/1616 | 3,36,136,10,45,78,15,21,55,120,153,91,28/3,36,136,10,45,78,15,21,55,120,153,91,28 | 2/2 | [relbert/scientific_and_creative_analogy](https://huggingface.co/datasets/relbert/scientific_and_creative_analogy) |
## Dataset Structure
### Data Instances
An example of `test` looks as follows.
```
{
"stem": ["raphael", "painter"],
"answer": 2,
"choice": [["andersen", "plato"],
["reading", "berkshire"],
["marx", "philosopher"],
["tolstoi", "edison"]]
}
```
The `stem` is the query word pair, `choice` has word pair candidates,
and `answer` indicates the index of the correct candidate, starting from `0`.
All data is lowercased except for the Google dataset.
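A minimal sketch of resolving the correct pair for one question, using the `u2` config listed above:
```python
from datasets import load_dataset

ds = load_dataset("relbert/analogy_questions", "u2", split="test")
q = ds[0]
correct = q["choice"][q["answer"]]  # `answer` indexes `choice`, starting at 0
print(q["stem"], "->", correct)
```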
### Citation Information
```
@inproceedings{ushio-etal-2021-bert-is,
title ={{BERT} is to {NLP} what {A}lex{N}et is to {CV}: {C}an {P}re-{T}rained {L}anguage {M}odels {I}dentify {A}nalogies?},
author={Ushio, Asahi and
Espinosa-Anke, Luis and
Schockaert, Steven and
Camacho-Collados, Jose},
booktitle={Proceedings of the {ACL}-{IJCNLP} 2021 Main Conference},
year={2021},
publisher={Association for Computational Linguistics}
}
```
### LICENSE
The LICENSE of all the resources are under [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purpose or individual research, but restricted for commercial use.
|
CVasNLPExperiments/DTD_parition1_test_google_flan_t5_xl_mode_C_T_A_T_SPECIFIC_ns_1880 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_ViT_L_14_LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 673355
num_examples: 1880
download_size: 226073
dataset_size: 673355
---
# Dataset Card for "DTD_parition1_test_google_flan_t5_xl_mode_C_T_A_T_SPECIFIC_ns_1880"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
theogorg/vi_corpora_parliament_processed | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 309805622
num_examples: 2884451
download_size: 193607904
dataset_size: 309805622
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vi_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NLPC-UOM/MWP_Dataset | ---
license:
- mit
language:
- si
- ta
- en
task_categories:
- neural-machine-translation
- text-generation
---
# MWP-Dataset
English-Sinhala-Tamil Math Word Problem Dataset
## File Structure
- Simple-English.txt -> Simple English Math Word Problems
- Simple-Sinhala.txt -> Simple Sinhala Math Word Problems
- Simple-Tamil.txt -> Simple Tamil Math Word Problems
- Algebraic-English.txt -> Algebraic English Math Word Problems
- Algebraic-Sinhala.txt -> Algebraic Sinhala Math Word Problems
- Algebraic-Tamil.txt -> Algebraic Tamil Math Word Problems
Authors: |
hle2000/Mintaka_T5_xl_ssm_outputs | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: target
dtype: string
- name: answer_0
dtype: string
- name: answer_1
dtype: string
- name: answer_2
dtype: string
- name: answer_3
dtype: string
- name: answer_4
dtype: string
- name: answer_5
dtype: string
- name: answer_6
dtype: string
- name: answer_7
dtype: string
- name: answer_8
dtype: string
- name: answer_9
dtype: string
- name: answer_10
dtype: string
- name: answer_11
dtype: string
- name: answer_12
dtype: string
- name: answer_13
dtype: string
- name: answer_14
dtype: string
- name: answer_15
dtype: string
- name: answer_16
dtype: string
- name: answer_17
dtype: string
- name: answer_18
dtype: string
- name: answer_19
dtype: string
- name: answer_20
dtype: string
- name: answer_21
dtype: string
- name: answer_22
dtype: string
- name: answer_23
dtype: string
- name: answer_24
dtype: string
- name: answer_25
dtype: string
- name: answer_26
dtype: string
- name: answer_27
dtype: string
- name: answer_28
dtype: string
- name: answer_29
dtype: string
- name: answer_30
dtype: string
- name: answer_31
dtype: string
- name: answer_32
dtype: string
- name: answer_33
dtype: string
- name: answer_34
dtype: string
- name: answer_35
dtype: string
- name: answer_36
dtype: string
- name: answer_37
dtype: string
- name: answer_38
dtype: string
- name: answer_39
dtype: string
- name: answer_40
dtype: string
- name: answer_41
dtype: string
- name: answer_42
dtype: string
- name: answer_43
dtype: string
- name: answer_44
dtype: string
- name: answer_45
dtype: string
- name: answer_46
dtype: string
- name: answer_47
dtype: string
- name: answer_48
dtype: string
- name: answer_49
dtype: string
- name: answer_50
dtype: string
- name: answer_51
dtype: string
- name: answer_52
dtype: string
- name: answer_53
dtype: string
- name: answer_54
dtype: string
- name: answer_55
dtype: string
- name: answer_56
dtype: string
- name: answer_57
dtype: string
- name: answer_58
dtype: string
- name: answer_59
dtype: string
- name: answer_60
dtype: string
- name: answer_61
dtype: string
- name: answer_62
dtype: string
- name: answer_63
dtype: string
- name: answer_64
dtype: string
- name: answer_65
dtype: string
- name: answer_66
dtype: string
- name: answer_67
dtype: string
- name: answer_68
dtype: string
- name: answer_69
dtype: string
- name: answer_70
dtype: string
- name: answer_71
dtype: string
- name: answer_72
dtype: string
- name: answer_73
dtype: string
- name: answer_74
dtype: string
- name: answer_75
dtype: string
- name: answer_76
dtype: string
- name: answer_77
dtype: string
- name: answer_78
dtype: string
- name: answer_79
dtype: string
- name: answer_80
dtype: string
- name: answer_81
dtype: string
- name: answer_82
dtype: string
- name: answer_83
dtype: string
- name: answer_84
dtype: string
- name: answer_85
dtype: string
- name: answer_86
dtype: string
- name: answer_87
dtype: string
- name: answer_88
dtype: string
- name: answer_89
dtype: string
- name: answer_90
dtype: string
- name: answer_91
dtype: string
- name: answer_92
dtype: string
- name: answer_93
dtype: string
- name: answer_94
dtype: string
- name: answer_95
dtype: string
- name: answer_96
dtype: string
- name: answer_97
dtype: string
- name: answer_98
dtype: string
- name: answer_99
dtype: string
- name: answer_100
dtype: string
- name: answer_101
dtype: string
- name: answer_102
dtype: string
- name: answer_103
dtype: string
- name: answer_104
dtype: string
- name: answer_105
dtype: string
- name: answer_106
dtype: string
- name: answer_107
dtype: string
- name: answer_108
dtype: string
- name: answer_109
dtype: string
- name: answer_110
dtype: string
- name: answer_111
dtype: string
- name: answer_112
dtype: string
- name: answer_113
dtype: string
- name: answer_114
dtype: string
- name: answer_115
dtype: string
- name: answer_116
dtype: string
- name: answer_117
dtype: string
- name: answer_118
dtype: string
- name: answer_119
dtype: string
- name: answer_120
dtype: string
- name: answer_121
dtype: string
- name: answer_122
dtype: string
- name: answer_123
dtype: string
- name: answer_124
dtype: string
- name: answer_125
dtype: string
- name: answer_126
dtype: string
- name: answer_127
dtype: string
- name: answer_128
dtype: string
- name: answer_129
dtype: string
- name: answer_130
dtype: string
- name: answer_131
dtype: string
- name: answer_132
dtype: string
- name: answer_133
dtype: string
- name: answer_134
dtype: string
- name: answer_135
dtype: string
- name: answer_136
dtype: string
- name: answer_137
dtype: string
- name: answer_138
dtype: string
- name: answer_139
dtype: string
- name: answer_140
dtype: string
- name: answer_141
dtype: string
- name: answer_142
dtype: string
- name: answer_143
dtype: string
- name: answer_144
dtype: string
- name: answer_145
dtype: string
- name: answer_146
dtype: string
- name: answer_147
dtype: string
- name: answer_148
dtype: string
- name: answer_149
dtype: string
- name: answer_150
dtype: string
- name: answer_151
dtype: string
- name: answer_152
dtype: string
- name: answer_153
dtype: string
- name: answer_154
dtype: string
- name: answer_155
dtype: string
- name: answer_156
dtype: string
- name: answer_157
dtype: string
- name: answer_158
dtype: string
- name: answer_159
dtype: string
- name: answer_160
dtype: string
- name: answer_161
dtype: string
- name: answer_162
dtype: string
- name: answer_163
dtype: string
- name: answer_164
dtype: string
- name: answer_165
dtype: string
- name: answer_166
dtype: string
- name: answer_167
dtype: string
- name: answer_168
dtype: string
- name: answer_169
dtype: string
- name: answer_170
dtype: string
- name: answer_171
dtype: string
- name: answer_172
dtype: string
- name: answer_173
dtype: string
- name: answer_174
dtype: string
- name: answer_175
dtype: string
- name: answer_176
dtype: string
- name: answer_177
dtype: string
- name: answer_178
dtype: string
- name: answer_179
dtype: string
- name: answer_180
dtype: string
- name: answer_181
dtype: string
- name: answer_182
dtype: string
- name: answer_183
dtype: string
- name: answer_184
dtype: string
- name: answer_185
dtype: string
- name: answer_186
dtype: string
- name: answer_187
dtype: string
- name: answer_188
dtype: string
- name: answer_189
dtype: string
- name: answer_190
dtype: string
- name: answer_191
dtype: string
- name: answer_192
dtype: string
- name: answer_193
dtype: string
- name: answer_194
dtype: string
- name: answer_195
dtype: string
- name: answer_196
dtype: string
- name: answer_197
dtype: string
- name: answer_198
dtype: string
- name: answer_199
dtype: string
- name: target_out_of_vocab
dtype: bool
splits:
- name: train
num_bytes: 116272791
num_examples: 32000
- name: validation
num_bytes: 7453582
num_examples: 2000
- name: test
num_bytes: 14833727
num_examples: 4000
download_size: 94335289
dataset_size: 138560100
---
# Dataset Card for "Mintaka_T5_xl_ssm_outputs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AptusAI/chat-eur-lex | ---
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
- name: language
dtype: string
- name: celex
dtype: string
splits:
- name: train
num_bytes: 2170096432
num_examples: 37226
download_size: 489777195
dataset_size: 2170096432
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
- it
size_categories:
- 10K<n<100K
---
# Dataset Card for the Chat-EUR-Lex dataset
## Dataset Description
- **Homepage:** [Chat-EUR-Lex project Homepage](https://github.com/Aptus-AI/chat-eur-lex)
- **Repository:** [Chat-EUR-Lex project Homepage](https://github.com/Aptus-AI/chat-eur-lex)
- **Point of Contact:** [Aptus Research Team](research@aptus.ai)
The Chat-EUR-Lex dataset comprises a selection of legal acts in English and Italian sourced from EUR-Lex, covering the period from January 1, 2014, to December 31, 2023. Specifically, it includes all historical texts preserved in Celex 3 that remain unaltered over time, along with the most recent consolidated versions in Celex 0 for acts that have undergone amendments. Corrigenda are omitted from this dataset. Additionally, all the EUR-Lex entries that are not provided with XML or HTML data are excluded from the selection.\
Chat-EUR-Lex dataset is originated in the context of the [Chat-EUR-Lex project](https://github.com/Aptus-AI/chat-eur-lex). The Chat-EUR-Lex project is funded by the European Union within the framework of the [NGI Search project](https://ngi-search-2nd-open-call.fundingbox.com/) under grant agreement No 101069364. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Commission.\
Chat-EUR-Lex project is realized by the [Institute of Legal Informatics and Judicial Systems (IGSG-CNR)](https://www.igsg.cnr.it/en/) and the [Aptus.AI](https://www.aptus.ai/) startup.
### Languages
All documents are written either in English or Italian. Specifically, the dataset consists of 19,062 English documents and 18,164 Italian documents.
## Dataset Structure
### Data Instances
Example of dataset instance:
|text|language|celex|
|----|--------|-----|
|02018R0338 — IT — 21.08.2019 — 001.001 Il presente testo è un semplice strumento di documentazione e non produce alcun effetto giuridico. Le istituzioni dell’Unione non assumono alcuna responsabilità per i suoi contenuti. Le versioni facenti fede degli atti pertinenti, compresi i loro preamboli, sono quelle pubblicate nella Gazzetta ufficiale dell’Unione europea e disponibili in EUR-Lex. Tali testi ufficiali sono direttamente accessibili attraverso i link inseriti nel presente documento[...]| ITA| 02018R0338-20190821
### Data Fields
The following data fields are provided for each document:
`text`: (**str**) The full content of each document.\
`language`: (**str**) The language in which the document text is expressed.\
`celex`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both EUR-Lex and CELLAR.
## Dataset Creation
### Curation Rationale
The dataset was created in the context of the [Chat-EUR-Lex project](https://github.com/Aptus-AI/chat-eur-lex). The project's aim is to improve the accessibility of EU laws, thus democratizing the availability of legal information for companies, lawyers, researchers and citizens.
The rationale underlying the creation of this dataset is the selection of all the texts of legal acts in force, so as to build a system capable of providing information focusing on regulations in force only.
### Source Data
#### Initial Data Collection and Normalization
The original data are available at [EUR-Lex portal](https://eur-lex.europa.eu) in an unprocessed format.
The documents were downloaded from the EUR-Lex portal in HTML format. All HTML code has been removed except for the tables, so only textual information has been retained.
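A minimal sketch of this kind of normalization (not the project's exact pipeline), keeping table markup while stripping all other HTML:
```python
from bs4 import BeautifulSoup

def html_to_text_keep_tables(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # Freeze each table as raw markup so get_text() keeps it verbatim.
    for table in soup.find_all("table"):
        table.replace_with(str(table))
    return soup.get_text(separator="\n")
```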
### Personal and Sensitive Information
The dataset does not include personal or sensitive information.
## Additional Information
### Dataset Curators
[Aptus Research Team](research@aptus.ai)
### Licensing Information
© European Union, 1998-2024
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
Some documents, like the International Accounting Standards, may be subject to special conditions of use; these are mentioned in the respective Official Journal/document. You can also consult the rules on reproducing euro coin/note images.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source [https://eur-lex.europa.eu/content/legal-notice/legal-notice.html](https://eur-lex.europa.eu/content/legal-notice/legal-notice.html?locale=en)
### Contributions
[Aptus.AI](https://www.aptus.ai/) and [Institute of Legal Informatics and Judicial Systems (IGSG-CNR)](https://www.igsg.cnr.it/en/). |
projectbaraat/kannada-qa-data-v0.1 | ---
dataset_info:
features:
- name: answer
dtype: string
- name: context
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 69642578
num_examples: 99544
download_size: 26721665
dataset_size: 69642578
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dane | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-Danish-Universal-Dependencies-treebank
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: dane
pretty_name: DaNE
dataset_info:
features:
- name: sent_id
dtype: string
- name: text
dtype: string
- name: tok_ids
sequence: int64
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NUM
'1': CCONJ
'2': PRON
'3': VERB
'4': INTJ
'5': AUX
'6': ADJ
'7': PROPN
'8': PART
'9': ADV
'10': PUNCT
'11': ADP
'12': NOUN
'13': X
'14': DET
'15': SYM
'16': SCONJ
- name: morph_tags
sequence: string
- name: dep_ids
sequence: int64
- name: dep_labels
sequence:
class_label:
names:
'0': parataxis
'1': mark
'2': nummod
'3': discourse
'4': compound:prt
'5': reparandum
'6': vocative
'7': list
'8': obj
'9': dep
'10': det
'11': obl:loc
'12': flat
'13': iobj
'14': cop
'15': expl
'16': obl
'17': conj
'18': nmod
'19': root
'20': acl:relcl
'21': goeswith
'22': appos
'23': fixed
'24': obl:tmod
'25': xcomp
'26': advmod
'27': nmod:poss
'28': aux
'29': ccomp
'30': amod
'31': cc
'32': advcl
'33': nsubj
'34': punct
'35': case
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 7311212
num_examples: 4383
- name: test
num_bytes: 909699
num_examples: 565
- name: validation
num_bytes: 940413
num_examples: 564
download_size: 1209710
dataset_size: 9161324
---
# Dataset Card for DaNE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DaNE homepage](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dane)
- **Repository:** [Github](https://github.com/alexandrainst/danlp)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.lrec-1.565)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme.
The Danish UD treebank (Johannsen et al., 2015, UD-DDT) is a conversion of the Danish Dependency Treebank (Buch-Kromann et al. 2003) based on texts from Parole (Britt, 1998). UD-DDT has annotations for dependency parsing and part-of-speech (POS) tagging. The dataset was annotated with Named Entities for PER, ORG, and LOC by the Alexandra Institute in the DaNE dataset (Hvingelby et al. 2020).
### Supported Tasks and Leaderboards
Part-of-speech tagging, dependency parsing and named entity recognition.
### Languages
Danish
## Dataset Structure
### Data Instances
This is an example in the "train" split:
```python
{
'sent_id': 'train-v2-0\n',
'lemmas': ['på', 'fredag', 'have', 'SiD', 'invitere', 'til', 'reception', 'i', 'SID-hus', 'i', 'anledning', 'af', 'at', 'formand', 'Kjeld', 'Christensen', 'gå', 'ind', 'i', 'den', 'glad', 'tresser', '.'],
'dep_labels': [35, 16, 28, 33, 19, 35, 16, 35, 18, 35, 18, 1, 1, 33, 22, 12, 32, 11, 35, 10, 30, 16, 34],
'ner_tags': [0, 0, 0, 3, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0],
'morph_tags': ['AdpType=Prep', 'Definite=Ind|Gender=Com|Number=Sing', 'Mood=Ind|Tense=Pres|VerbForm=Fin|Voice=Act', '_', 'Definite=Ind|Number=Sing|Tense=Past|VerbForm=Part', 'AdpType=Prep', 'Definite=Ind|Gender=Com|Number=Sing', 'AdpType=Prep', 'Definite=Def|Gender=Neut|Number=Sing', 'AdpType=Prep', 'Definite=Ind|Gender=Com|Number=Sing', 'AdpType=Prep', '_', 'Definite=Def|Gender=Com|Number=Sing', '_', '_', 'Mood=Ind|Tense=Pres|VerbForm=Fin|Voice=Act', '_', 'AdpType=Prep', 'Number=Plur|PronType=Dem', 'Degree=Pos|Number=Plur', 'Definite=Ind|Gender=Com|Number=Plur', '_'],
'dep_ids': [2, 5, 5, 5, 0, 7, 5, 9, 7, 11, 7, 17, 17, 17, 14, 15, 11, 17, 22, 22, 22, 18, 5],
'pos_tags': [11, 12, 5, 7, 3, 11, 12, 11, 12, 11, 12, 11, 16, 12, 7, 7, 3, 9, 11, 14, 6, 12, 10],
'text': 'På fredag har SID inviteret til reception i SID-huset i anledning af at formanden Kjeld Christensen går ind i de glade tressere.\n',
'tokens': ['På', 'fredag', 'har', 'SID', 'inviteret', 'til', 'reception', 'i', 'SID-huset', 'i', 'anledning', 'af', 'at', 'formanden', 'Kjeld', 'Christensen', 'går', 'ind', 'i', 'de', 'glade', 'tressere', '.'],
'tok_ids': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]
}
```
### Data Fields
Data Fields:
- sent_id: a string identifier for each example
- text: a string, the original sentence (not tokenized)
- tok_ids: a list of ids (int), one for each token
- tokens: a list of strings, the tokens
- lemmas: a list of strings, the lemmas of the tokens
- pos_tags: a list of class-label ids (int), the part-of-speech tags of the tokens
- morph_tags: a list of strings, the morphological tags of the tokens
- dep_ids: a list of ids (int), the id of the head of the incoming dependency for each token
- dep_labels: a list of class-label ids (int), the dependency labels
- ner_tags: a list of class-label ids (int), the named entity tags in the BIO scheme (e.g. B-PER, I-ORG)
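Since the tag fields store integer class-label ids, they can be mapped back to their string names through the feature metadata; a minimal sketch:
```python
from datasets import load_dataset

ds = load_dataset("dane", split="train")
ner_labels = ds.features["ner_tags"].feature  # a ClassLabel
example = ds[0]
tags = [ner_labels.int2str(i) for i in example["ner_tags"]]
print(list(zip(example["tokens"], tags)))
```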
### Data Splits
| | train | validation | test |
|-------------|-------:|-----------:|-------:|
| # sentences | 4383 | 564 | 565 |
| # tokens | 80,378 | 10,322 | 10,023 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{hvingelby-etal-2020-dane,
title = "{D}a{NE}: A Named Entity Resource for {D}anish",
author = "Hvingelby, Rasmus and
Pauli, Amalie Brogaard and
Barrett, Maria and
Rosted, Christina and
Lidegaard, Lasse Malm and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.565",
pages = "4597--4604",
    abstract = "We present a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme: DaNE. It is the largest publicly available, Danish named entity gold annotation. We evaluate the quality of our annotations intrinsically by double annotating the entire treebank and extrinsically by comparing our annotations to a recently released named entity annotation of the validation and test sections of the Danish Universal Dependencies treebank. We benchmark the new resource by training and evaluating competitive architectures for supervised named entity recognition (NER), including FLAIR, monolingual (Danish) BERT and multilingual BERT. We explore cross-lingual transfer in multilingual BERT from five related languages in zero-shot and direct transfer setups, and we show that even with our modestly-sized training set, we improve Danish NER over a recent cross-lingual approach, as well as over zero-shot transfer from five related languages. Using multilingual BERT, we achieve higher performance by fine-tuning on both DaNE and a larger Bokm{\aa}l (Norwegian) training set compared to only using DaNE. However, the highest performance is achieved by using a Danish BERT fine-tuned on DaNE. Our dataset enables improvements and applicability for Danish NER beyond cross-lingual methods. We employ a thorough error analysis of the predictions of the best models for seen and unseen entities, as well as their robustness on un-capitalized text. The annotated dataset and all the trained models are made publicly available.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
### Contributions
Thanks to [@ophelielacroix](https://github.com/ophelielacroix), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
npdcya/Npd_Cya | ---
license: apache-2.0
---
|
afmck/peanuts-flan-t5-xl | ---
license: apache-2.0
task_categories:
- text-to-image
language:
- en
pretty_name: Peanuts Dataset (Snoopy and Co.)
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: image
dtype: image
- name: panel_name
dtype: string
- name: characters
sequence: string
- name: themes
sequence: string
- name: color
dtype: string
- name: year
dtype: int64
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2947874869.848
num_examples: 77456
download_size: 0
dataset_size: 2947874869.848
---
# Peanut Comic Strip Dataset (Snoopy & Co.)

This is a dataset of Peanuts comic strips from `1950/10/02` to `2000/02/13`.
There are `77,456` panels extracted from `17,816` comic strips.
The dataset size is approximately `4.4G`.
Each row in the dataset contains the following fields:
- `image`: `PIL.Image` containing the extracted panel.
- `panel_name`: unique identifier for the row.
- `characters`: `tuple[str, ...]` of characters included in the comic strip the panel is part of.
- `themes`: `tuple[str, ...]` of themes in the comic strip the panel is part of.
- `color`: `str` indicating whether the panel is grayscale or in color.
- `caption`: [BLIP-2_FLAN-T5-XL](https://huggingface.co/docs/transformers/main/model_doc/blip-2) generated caption from the panel.
- `year`: `int` storing the year the specific panel was released.
> **FLAN-T5-XL has a commercial-use license, so this dataset can be used for commercial projects. Alternatively, use [this similar dataset](https://huggingface.co/datasets/afmck/peanuts-opt-6.7b), which uses OPT-6.7B as the caption pipeline's text model; note, however, that it does not permit commercial use.**
Character and theme information was extracted from [Peanuts Wiki (Fandom)](https://peanuts.fandom.com/wiki/Peanuts_Wiki) using [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
Images were extracted from [Peanuts Search](https://peanuts-search.com/).
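For illustration, the wiki scraping could start roughly like this (a minimal sketch; the page URL and selector are assumptions):
```python
import requests
from bs4 import BeautifulSoup

# Fetch one wiki page and collect its outgoing links as candidate metadata pages.
html = requests.get("https://peanuts.fandom.com/wiki/Charlie_Brown").text
soup = BeautifulSoup(html, "html.parser")
links = [a["href"] for a in soup.select("a[href]")]
```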
Only strips with the following characters were extracted:
```
- "Charlie Brown"
- "Sally Brown"
- "Joe Cool" # Snoopy alter-ego
- "Franklin"
- "Violet Gray"
- "Eudora"
- "Frieda"
- "Marcie"
- "Peppermint Patty"
- "Patty"
- "Pig-Pen"
- "Linus van Pelt"
- "Lucy van Pelt"
- "Rerun van Pelt"
- "Schroeder"
- "Snoopy"
- "Shermy"
- "Spike"
- "Woodstock"
- "the World War I Flying Ace" # Snoopy alter-ego
```
### Extraction Details
Panel detection and extraction was done using the following codeblock:
```python
import cv2

def check_contour(cnt):
    # Reject contours that are too small or too elongated to be panels.
    area = cv2.contourArea(cnt)
    if area < 600:
        return False
    _, _, w, h = cv2.boundingRect(cnt)
    if w / h < 1 / 2:
        return False
    if w / h > 2 / 1:
        return False
    return True

def get_panels_from_image(path):
    panels = []
    original_img = cv2.imread(path)
    # Binarize with Otsu thresholding after a light blur.
    gray = cv2.cvtColor(original_img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    # Open to remove speckle noise, then invert so panel regions become foreground.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
    invert = 255 - opening
    # Crop every external contour that passes the size/aspect filter.
    cnts, _ = cv2.findContours(invert, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in cnts:
        if not check_contour(cnt):
            continue
        x, y, w, h = cv2.boundingRect(cnt)
        panels.append(original_img[y:y+h, x:x+w])
    return panels
```
`check_contour` will reject panels with `area < 600` or with aspect ratios larger than `2` or smaller than `0.5`.
Grayscale detection was done using the following codeblock:
```python
import cv2
import numpy as np

def is_grayscale(panel):
    # A panel is grayscale if its LAB chroma channels are nearly identical.
    LAB_THRESHOLD = 10.
    # Panels come from cv2.imread, so they are BGR (hence BGR2LAB).
    img = cv2.cvtColor(panel, cv2.COLOR_BGR2LAB)
    _, ea, eb = cv2.split(img)
    # absdiff avoids the uint8 wrap-around a plain subtraction would cause.
    de = cv2.absdiff(ea, eb)
    return np.mean(de) < LAB_THRESHOLD
```
Captioning was done using the standard BLIP-2 pipeline shown in the [Huggingface docs](https://huggingface.co/docs/transformers/main/model_doc/blip-2) using beam search over 10 beams and a repetition penalty of `2.0`.
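For reference, the captioning step could be reproduced roughly as follows (a minimal sketch following the standard Hugging Face BLIP-2 usage; only the beam count and repetition penalty come from this card, and the file name is an assumption):
```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16
).to("cuda")

panel = Image.open("panel.png").convert("RGB")  # file name is an assumption
inputs = processor(images=panel, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs, num_beams=10, repetition_penalty=2.0)
caption = processor.decode(out[0], skip_special_tokens=True)
```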
Raw captions are extracted and no postprocessing is applied. You may wish to normalise captions (such as replacing "cartoon" with "peanuts cartoon") or incorporate extra metadata into prompts. |
xcz9811/4q | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: quadrant
dtype:
class_label:
names:
'0': Q1
'1': Q2
'2': Q3
'3': Q4
splits:
- name: train
num_bytes: 291173680.0
num_examples: 900
download_size: 291039981
dataset_size: 291173680.0
---
# Dataset Card for "4q"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andstor/smart_contract_code_comments | ---
dataset_info:
features:
- name: contract_name
dtype: string
- name: file_path
dtype: string
- name: contract_address
dtype: string
- name: language
dtype: string
- name: class_name
dtype: string
- name: class_code
dtype: string
- name: class_documentation
dtype: string
- name: class_documentation_type
dtype: string
- name: func_name
dtype: string
- name: func_code
dtype: string
- name: func_documentation
dtype: string
- name: func_documentation_type
dtype: string
- name: compiler_version
dtype: string
- name: license_type
dtype: string
- name: swarm_source
dtype: string
- name: meta
struct:
- name: func_code_index
sequence: int64
- name: __index_level_0__
dtype: int64
config_name: data
splits:
- name: train
num_bytes: 11530607173
num_examples: 1267441
- name: test
num_bytes: 1306082431
num_examples: 143080
- name: validation
num_bytes: 1264266873
num_examples: 130849
download_size: 1995835391
dataset_size: 14100956477
paperswithcode_id: verified-smart-contract-code-comments
---
|
LucasThil/miniwob_plusplus_v2_raw | ---
dataset_info:
features:
- name: task_name
dtype: string
- name: utterance
dtype: string
- name: reward
dtype: float64
- name: raw_reward
dtype: float64
- name: processed_states
dtype: string
splits:
- name: train
num_bytes: 5781242512
num_examples: 18124
download_size: 537245885
dataset_size: 5781242512
---
# Dataset Card for "miniwob_plusplus_v2_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
udmurtNLP/django-localization-eng-udm | ---
dataset_info:
features:
- name: English
dtype: string
- name: Udmurt
dtype: string
splits:
- name: train
num_bytes: 8721
num_examples: 216
download_size: 7174
dataset_size: 8721
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- udm
--- |
joseph6102/Rionegro | ---
license: openrail
---
|
hitachi-nlp/FLD.v2 | ---
dataset_info:
- config_name: default
features:
- name: version
dtype: string
- name: hypothesis
dtype: string
- name: hypothesis_formula
dtype: string
- name: context
dtype: string
- name: context_formula
dtype: string
- name: proofs
sequence: string
- name: proofs_formula
sequence: string
- name: negative_hypothesis
dtype: string
- name: negative_hypothesis_formula
dtype: string
- name: negative_proofs
sequence: string
- name: negative_original_tree_depth
dtype: int64
- name: original_tree_depth
dtype: int64
- name: depth
dtype: int64
- name: num_formula_distractors
dtype: int64
- name: num_translation_distractors
dtype: int64
- name: num_all_distractors
dtype: int64
- name: proof_label
dtype: string
- name: negative_proof_label
dtype: string
- name: world_assump_label
dtype: string
- name: negative_world_assump_label
dtype: string
- name: prompt_serial
dtype: string
- name: proof_serial
dtype: string
splits:
- name: train
num_bytes: 103394163
num_examples: 30000
- name: validation
num_bytes: 17205990
num_examples: 5000
- name: test
num_bytes: 17215356
num_examples: 5000
download_size: 51122839
dataset_size: 137815509
- config_name: star
features:
- name: version
dtype: string
- name: hypothesis
dtype: string
- name: hypothesis_formula
dtype: string
- name: context
dtype: string
- name: context_formula
dtype: string
- name: proofs
sequence: string
- name: proofs_formula
sequence: string
- name: negative_hypothesis
dtype: string
- name: negative_hypothesis_formula
dtype: string
- name: negative_proofs
sequence: string
- name: negative_original_tree_depth
dtype: int64
- name: original_tree_depth
dtype: int64
- name: depth
dtype: int64
- name: num_formula_distractors
dtype: int64
- name: num_translation_distractors
dtype: int64
- name: num_all_distractors
dtype: int64
- name: proof_label
dtype: string
- name: negative_proof_label
dtype: string
- name: world_assump_label
dtype: string
- name: negative_world_assump_label
dtype: string
- name: prompt_serial
dtype: string
- name: proof_serial
dtype: string
splits:
- name: train
num_bytes: 129618848
num_examples: 30000
- name: validation
num_bytes: 21529187
num_examples: 5000
- name: test
num_bytes: 21731836
num_examples: 5000
download_size: 63147762
dataset_size: 172879871
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: star
data_files:
- split: train
path: star/train-*
- split: validation
path: star/validation-*
- split: test
path: star/test-*
---
# Dataset Card for "FLD.v2"
For the schema of the dataset, see [here](https://github.com/hitachi-nlp/FLD-corpus.git).
For the whole of the project, see [our project page](https://github.com/hitachi-nlp/FLD/).
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/DTD_parition1_train | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': banded
'1': blotchy
'2': braided
'3': bubbly
'4': bumpy
'5': chequered
'6': cobwebbed
'7': cracked
'8': crosshatched
'9': crystalline
'10': dotted
'11': fibrous
'12': flecked
'13': freckled
'14': frilly
'15': gauzy
'16': grid
'17': grooved
'18': honeycombed
'19': interlaced
'20': knitted
'21': lacelike
'22': lined
'23': marbled
'24': matted
'25': meshed
'26': paisley
'27': perforated
'28': pitted
'29': pleated
'30': polka-dotted
'31': porous
'32': potholed
'33': scaly
'34': smeared
'35': spiralled
'36': sprinkled
'37': stained
'38': stratified
'39': striped
'40': studded
'41': swirly
'42': veined
'43': waffled
'44': woven
'45': wrinkled
'46': zigzagged
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_opt175b_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_dtd
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
splits:
- name: train
num_bytes: 235001213.4
num_examples: 1880
download_size: 230863096
dataset_size: 235001213.4
---
# Dataset Card for "DTD_parition1_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jonathanjordan21/drugs-composition-indonesian-donut | ---
dataset_info:
features:
- name: images
dtype: image
- name: labels
dtype: string
splits:
- name: train
num_bytes: 13650178.0
num_examples: 22
download_size: 13642464
dataset_size: 13650178.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "drugs-composition-indonesian-donut"
## Generate Custom Data
Please visit `https://huggingface.co/spaces/jonathanjordan21/donut-labelling` for the interface to generate custom data.
The data format is `.zip`; images and labels are stored in separate `.zip` files.
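For example, the two archives could be unpacked side by side before use (a minimal sketch; the archive names are assumptions):
```python
import zipfile

# Extract each paired archive into a directory of the same name.
for archive in ("images.zip", "labels.zip"):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.removesuffix(".zip"))
```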
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anilguven/turkish_product_reviews_sentiment | ---
license: unknown
language:
- tr
tags:
- turkish
- product
- review
pretty_name: Turkish Product Reviews Sentiment
size_categories:
- 100K<n<1M
---
# Dataset Card for Turkish Product Reviews Sentiment
This dataset was obtained via https://www.kaggle.com/datasets/bulentsiyah/hepsi-burada-yorum |
plncmm/cowese-sample | ---
dataset_info:
features:
- name: text
dtype: string
- name: len
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 60335735
num_examples: 20000
download_size: 36148347
dataset_size: 60335735
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cowese-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pseudolab/autotrain-data-Medical_Terminology_Zephyr | ---
dataset_info:
features:
- name: tags
dtype: string
- name: categories
dtype: string
- name: topics
dtype: string
- name: title
dtype: string
- name: es-title
dtype: string
- name: url
dtype: string
- name: es-bite
dtype: string
- name: audience
dtype: string
- name: segment
dtype: string
- name: insurance-status
dtype: string
- name: state
dtype: string
- name: condition
dtype: string
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 123044
num_examples: 257
- name: validation
num_bytes: 123044
num_examples: 257
download_size: 128192
dataset_size: 246088
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-Medical_Terminology_Zephyr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/antique | ---
pretty_name: '`antique`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `antique`
The `antique` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=403,666
This dataset is used by: [`antique_test`](https://huggingface.co/datasets/irds/antique_test), [`antique_test_non-offensive`](https://huggingface.co/datasets/irds/antique_test_non-offensive), [`antique_train`](https://huggingface.co/datasets/irds/antique_train), [`antique_train_split200-train`](https://huggingface.co/datasets/irds/antique_train_split200-train), [`antique_train_split200-valid`](https://huggingface.co/datasets/irds/antique_train_split200-valid)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/antique', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
|
celinelee/python_fn_calls | ---
dataset_info:
features:
- name: code
dtype: string
splits:
- name: train
num_bytes: 24364343
num_examples: 679704
download_size: 10014515
dataset_size: 24364343
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
maxolotl/must-c-en-de-wait9-01 | ---
dataset_info:
features:
- name: current_source
dtype: string
- name: current_target
dtype: string
- name: target_token
dtype: string
splits:
- name: train
num_bytes: 915123929
num_examples: 4513829
- name: test
num_bytes: 11255234
num_examples: 57041
- name: validation
num_bytes: 5621779
num_examples: 26843
download_size: 153197691
dataset_size: 932000942
---
# Dataset Card for "must-c-en-de-wait9-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
1aurent/individuality-of-handwriting | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- image-classification
pretty_name: Individuality Of Handwriting (CEDAR)
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': original
'1': forgeries
- name: individual
dtype: uint8
- name: figure
dtype: uint8
splits:
- name: train
num_bytes: 195780898.8
num_examples: 2640
download_size: 252337526
dataset_size: 195780898.8
tags:
- legal
- signatures
- CEDAR
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Individuality Of Handwriting (CEDAR)
https://pubmed.ncbi.nlm.nih.gov/12136998/ \
https://cedar.buffalo.edu/NIJ/projectinfo.html
## Abstract
Motivated by several rulings in United States courts concerning expert testimony in general, and handwriting testimony in particular, we undertook a study to objectively validate the hypothesis that handwriting is individual. Handwriting samples of 1,500 individuals, representative of the U.S. population with respect to gender, age, ethnic groups, etc., were obtained. Analyzing differences in handwriting was done by using computer algorithms for extracting features from scanned images of handwriting. Attributes characteristic of the handwriting were obtained, e.g., line separation, slant, character shapes, etc. These attributes, which are a subset of attributes used by forensic document examiners (FDEs), were used to quantitatively establish individuality by using machine learning approaches. Using global attributes of handwriting and very few characters in the writing, the ability to determine the writer with a high degree of confidence was established. The work is a step towards providing scientific support for admitting handwriting evidence in court. The mathematical approach and the resulting software also have the promise of aiding the FDE.
Srihari SN, Cha SH, Arora H, Lee S. Individuality of handwriting. J Forensic Sci. 2002 Jul;47(4):856-72. PMID: 12136998. |
Flavinhouaua2022/UAUA | ---
license: apache-2.0
---
|
CyberHarem/kuroshio_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kuroshio/黒潮/黒潮 (Kantai Collection)
This is the dataset of kuroshio/黒潮/黒潮 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `black_hair, hair_ornament, hairclip, short_hair, green_eyes, ribbon, neck_ribbon, blue_ribbon, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 368.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 265.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1100 | 535.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 344.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1100 | 664.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kuroshio_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kuroshio_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 31 |  |  |  |  |  | 1girl, solo, school_uniform, short_sleeves, upper_body, white_shirt, smile, black_vest, looking_at_viewer, simple_background, white_gloves, white_background, open_mouth, blush |
| 1 | 10 |  |  |  |  |  | 1girl, bike_shorts, black_shorts, black_vest, pleated_skirt, school_uniform, short_sleeves, shorts_under_skirt, solo, white_gloves, white_shirt, black_skirt, looking_at_viewer, smile, cowboy_shot, blush, simple_background, white_background, grey_skirt, open_mouth |
| 2 | 5 |  |  |  |  |  | 1girl, bike_shorts, looking_at_viewer, school_uniform, shirt, solo, vest, white_gloves, pleated_skirt, short_sleeves, yellow_eyes, blush, open_mouth |
| 3 | 7 |  |  |  |  |  | 1girl, cowboy_shot, looking_at_viewer, solo, black_one-piece_swimsuit, flat_chest, artist_name, one-hour_drawing_challenge, smile, character_name, competition_swimsuit, lying, school_swimsuit |
| 4 | 5 |  |  |  |  |  | 1girl, alternate_costume, looking_at_viewer, open_mouth, smile, solo, floral_print, obi, one-hour_drawing_challenge, twitter_username, upper_body, yukata, blush, holding_food, purple_kimono, simple_background, takoyaki, white_background, wide_sleeves |
| 5 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, solo, alternate_costume, blue_one-piece_swimsuit, collarbone, dated, simple_background, sitting, blush, competition_school_swimsuit, signature, white_background |
| 6 | 23 |  |  |  |  |  | rabbit_ears, 1girl, fake_animal_ears, playboy_bunny, solo, detached_collar, black_leotard, black_pantyhose, blush, looking_at_viewer, wrist_cuffs, bowtie, medium_breasts, smile, cleavage, cowboy_shot, rabbit_tail, simple_background, open_mouth, strapless_leotard, yellow_eyes, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | school_uniform | short_sleeves | upper_body | white_shirt | smile | black_vest | looking_at_viewer | simple_background | white_gloves | white_background | open_mouth | blush | bike_shorts | black_shorts | pleated_skirt | shorts_under_skirt | black_skirt | cowboy_shot | grey_skirt | shirt | vest | yellow_eyes | black_one-piece_swimsuit | flat_chest | artist_name | one-hour_drawing_challenge | character_name | competition_swimsuit | lying | school_swimsuit | alternate_costume | floral_print | obi | twitter_username | yukata | holding_food | purple_kimono | takoyaki | wide_sleeves | blue_one-piece_swimsuit | collarbone | dated | sitting | competition_school_swimsuit | signature | rabbit_ears | fake_animal_ears | playboy_bunny | detached_collar | black_leotard | black_pantyhose | wrist_cuffs | bowtie | medium_breasts | cleavage | rabbit_tail | strapless_leotard |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------------|:----------------|:-------------|:--------------|:--------|:-------------|:--------------------|:--------------------|:---------------|:-------------------|:-------------|:--------|:--------------|:---------------|:----------------|:---------------------|:--------------|:--------------|:-------------|:--------|:-------|:--------------|:---------------------------|:-------------|:--------------|:-----------------------------|:-----------------|:-----------------------|:--------|:------------------|:--------------------|:---------------|:------|:-------------------|:---------|:---------------|:----------------|:-----------|:---------------|:--------------------------|:-------------|:--------|:----------|:------------------------------|:------------|:--------------|:-------------------|:----------------|:------------------|:----------------|:------------------|:--------------|:---------|:-----------------|:-----------|:--------------|:--------------------|
| 0 | 31 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | | | | | X | | X | | X | X | X | | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | X | | | | | X | | X | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | | | X | | X | | X | X | | X | X | X | | | | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | | | | | | | X | X | | X | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | |
| 6 | 23 |  |  |  |  |  | X | X | | | | | X | | X | X | | X | X | X | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
tyzhu/lmind_nq_train6000_eval6489_v1_doc_qa | ---
configs:
- config_name: default
data_files:
- split: train_qa
path: data/train_qa-*
- split: train_ic_qa
path: data/train_ic_qa-*
- split: train_recite_qa
path: data/train_recite_qa-*
- split: eval_qa
path: data/eval_qa-*
- split: eval_ic_qa
path: data/eval_ic_qa-*
- split: eval_recite_qa
path: data/eval_recite_qa-*
- split: all_docs
path: data/all_docs-*
- split: all_docs_eval
path: data/all_docs_eval-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train_qa
num_bytes: 697367
num_examples: 6000
- name: train_ic_qa
num_bytes: 4540536
num_examples: 6000
- name: train_recite_qa
num_bytes: 4546536
num_examples: 6000
- name: eval_qa
num_bytes: 752802
num_examples: 6489
- name: eval_ic_qa
num_bytes: 4906186
num_examples: 6489
- name: eval_recite_qa
num_bytes: 4912675
num_examples: 6489
- name: all_docs
num_bytes: 7126313
num_examples: 10925
- name: all_docs_eval
num_bytes: 7125701
num_examples: 10925
- name: train
num_bytes: 7823680
num_examples: 16925
- name: validation
num_bytes: 752802
num_examples: 6489
download_size: 26914575
dataset_size: 43184598
---
# Dataset Card for "lmind_nq_train6000_eval6489_v1_doc_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
manjugeorge/MalSpeech | ---
license: apache-2.0
task_categories:
- text-to-speech
language:
- ml
--- |
irds/mmarco_v2_vi_dev | ---
pretty_name: '`mmarco/v2/vi/dev`'
viewer: false
source_datasets: ['irds/mmarco_v2_vi']
task_categories:
- text-retrieval
---
# Dataset Card for `mmarco/v2/vi/dev`
The `mmarco/v2/vi/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/vi/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=101,093
- `qrels`: (relevance assessments); count=59,273
- For `docs`, use [`irds/mmarco_v2_vi`](https://huggingface.co/datasets/irds/mmarco_v2_vi)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/mmarco_v2_vi_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/mmarco_v2_vi_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Bonifacio2021MMarco,
title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
journal={arXiv:2108.13897}
}
```
|
CyberHarem/syalla_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of syalla (Fire Emblem)
This is the dataset of syalla (Fire Emblem), containing 94 images and their tags.
The core tags of this character are `black_hair, long_hair, breasts, bangs, blunt_bangs, hair_ornament, large_breasts, two_side_up, hairband`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 94 | 103.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syalla_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 94 | 57.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syalla_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 210 | 112.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syalla_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 94 | 90.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syalla_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 210 | 160.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/syalla_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/syalla_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | looking_at_viewer, 1girl, blush, navel, completely_nude, pussy, solo, cleft_of_venus, huge_breasts, lactation, simple_background, smile, brown_eyes, inverted_nipples, uncensored, white_background, arms_behind_back, collarbone, cowboy_shot |
| 1 | 32 |  |  |  |  |  | 1girl, solo, bracelet, bridal_gauntlets, cleavage, looking_at_viewer, smile, black_eyes, bodystocking, simple_background, white_background |
| 2 | 9 |  |  |  |  |  | hetero, penis, 1girl, blush, nipples, sex, torn_clothes, 1boy, solo_focus, vaginal, uncensored, open_mouth, spread_legs, bodystocking, cum_in_pussy, stomach_bulge, testicles |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | 1girl | blush | navel | completely_nude | pussy | solo | cleft_of_venus | huge_breasts | lactation | simple_background | smile | brown_eyes | inverted_nipples | uncensored | white_background | arms_behind_back | collarbone | cowboy_shot | bracelet | bridal_gauntlets | cleavage | black_eyes | bodystocking | hetero | penis | nipples | sex | torn_clothes | 1boy | solo_focus | vaginal | open_mouth | spread_legs | cum_in_pussy | stomach_bulge | testicles |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:--------|:--------|:--------|:------------------|:--------|:-------|:-----------------|:---------------|:------------|:--------------------|:--------|:-------------|:-------------------|:-------------|:-------------------|:-------------------|:-------------|:--------------|:-----------|:-------------------|:-----------|:-------------|:---------------|:---------|:--------|:----------|:------|:---------------|:-------|:-------------|:----------|:-------------|:--------------|:---------------|:----------------|:------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 1 | 32 |  |  |  |  |  | X | X | | | | | X | | | | X | X | | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | | X | X | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
datahrvoje/twitter_dataset_1713027432 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 21171
num_examples: 48
download_size: 12184
dataset_size: 21171
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Thouph/Text2Video1 | ---
license: mit
viewer: false
---
|
CyberHarem/maki_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of maki/小塗マキ/真纪 (Blue Archive)
This is the dataset of maki/小塗マキ/真纪 (Blue Archive), containing 277 images and their tags.
The core tags of this character are `red_hair, halo, blue_eyes, hair_between_eyes, long_hair, red_halo, braid, hat, twin_braids, grey_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 277 | 463.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maki_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 277 | 381.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maki_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 710 | 800.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/maki_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/maki_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, beanie, long_sleeves, official_alternate_costume, official_alternate_hairstyle, open_jacket, solo, black_pantyhose, blush, collared_shirt, looking_at_viewer, smile, grey_jacket, open_mouth, white_shirt, simple_background, white_background, brown_jacket, brown_shirt, blue_shorts, boots, brown_footwear, full_body |
| 1 | 27 |  |  |  |  |  | 1girl, beanie, official_alternate_costume, official_alternate_hairstyle, open_jacket, solo, blush, collared_shirt, long_sleeves, white_shirt, upper_body, simple_background, open_mouth, brown_jacket, smile, brown_vest, looking_at_viewer, white_background, brown_headwear, brown_shirt, grey_jacket |
| 2 | 12 |  |  |  |  |  | 1girl, ahoge, blue_necktie, collared_shirt, double_bun, long_sleeves, looking_at_viewer, pleated_skirt, short_hair, solo, white_shirt, black_jacket, blue_sweater_vest, paint_on_clothes, puffy_sleeves, sidelocks, spray_can, white_skirt, open_jacket, smile, sneakers, holding_can, black_socks, paint_splatter_on_face, graffiti, id_card, blush, sitting |
| 3 | 5 |  |  |  |  |  | 1girl, ahoge, black_jacket, blue_necktie, collared_shirt, double_bun, long_sleeves, short_hair, simple_background, solo, white_background, white_shirt, blush, cowboy_shot, hooded_jacket, looking_at_viewer, open_jacket, pleated_skirt, sidelocks, smile, white_skirt, blue_sweater_vest, closed_mouth, id_card, cropped_legs, hands_in_pockets, tongue_out |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | beanie | long_sleeves | official_alternate_costume | official_alternate_hairstyle | open_jacket | solo | black_pantyhose | blush | collared_shirt | looking_at_viewer | smile | grey_jacket | open_mouth | white_shirt | simple_background | white_background | brown_jacket | brown_shirt | blue_shorts | boots | brown_footwear | full_body | upper_body | brown_vest | brown_headwear | ahoge | blue_necktie | double_bun | pleated_skirt | short_hair | black_jacket | blue_sweater_vest | paint_on_clothes | puffy_sleeves | sidelocks | spray_can | white_skirt | sneakers | holding_can | black_socks | paint_splatter_on_face | graffiti | id_card | sitting | cowboy_shot | hooded_jacket | closed_mouth | cropped_legs | hands_in_pockets | tongue_out |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:---------------|:-----------------------------|:-------------------------------|:--------------|:-------|:------------------|:--------|:-----------------|:--------------------|:--------|:--------------|:-------------|:--------------|:--------------------|:-------------------|:---------------|:--------------|:--------------|:--------|:-----------------|:------------|:-------------|:-------------|:-----------------|:--------|:---------------|:-------------|:----------------|:-------------|:---------------|:--------------------|:-------------------|:----------------|:------------|:------------|:--------------|:-----------|:--------------|:--------------|:-------------------------|:-----------|:----------|:----------|:--------------|:----------------|:---------------|:---------------|:-------------------|:-------------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 27 |  |  |  |  |  | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 12 |  |  |  |  |  | X | | X | | | X | X | | X | X | X | X | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | | | X | X | | X | X | X | X | | | X | X | X | | | | | | | | | | X | X | X | X | X | X | X | | | X | | X | | | | | | X | | X | X | X | X | X | X |
|
deepghs/highres_datasets | ---
tags:
- not-for-all-audiences
--- |
shamotskyi/ukr_pravda_2y | ---
license: cc-by-nc-4.0
language:
- uk
- en
- ru
pretty_name: Ukrainska Pravda articles in ukr/rus/eng published on or after 01.01.2022
multilinguality:
- multilingual
---
This dataset contains the articles from [Ukrainska Pravda](https://www.pravda.com.ua/) for the years 2022-2023, in all available translations.
The dataset was created as part of my Master's thesis; better documentation will follow. For now:
### Basics
One row of the dataset contains a single article in up to three languages (ukr/rus/eng), with the corresponding title, author and tags for each language.
Different translations of the same article often have inconsistent tags, so the main `tags` column combines the tags from all languages (each tag is named after its URI on the UP website).
The mapping of each tag to its URIs and names in all the languages it's present in is found in the `tags_mapping.json` file in the metadata. The list of URIs for all downloaded articles can be found there as well.
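A minimal loading sketch (the split name comes from the Files section below; the path and structure of `tags_mapping.json` are assumptions):
```python
import json
from datasets import load_dataset

# Load the full 2022-2023 version (split name "train").
up = load_dataset("shamotskyi/ukr_pravda_2y", split="train")
print(up[0].keys())  # per-language title/author/tags columns plus the combined tags

# Resolve tag URIs to per-language names; assumes the repo's metadata/
# folder has been downloaded locally.
with open("metadata/tags_mapping.json") as f:
    tags_mapping = json.load(f)
```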
### Files
- Two versions:
- The version 0.0.1 (split name `incomplete`) covers articles from 01.01.2022 until 12.12.2023, kept for now as it's used in some other datasets
- **The version 0.0.2 (split name `train`) is the one you need** and contains all articles from 01.01.2022 till 31.12.2023
- File structure:
- `data/train` is the full 2y 0.0.2 dataset, the one you need
- `data/incomplete` is the old 0.0.1 version
- `metadata/` contains the tags mappings and list of downloaded URIs for both versions
### The rest
- **<https://serhii.net/dtb/2023-12-13-231213-1710-ukrainska-pravda-dataset/>** is the draft of the relevant thesis section
- **[pchr8/up_crawler](https://github.com/pchr8/up_crawler)** is the crawler I wrote to gather this dataset
<br><br>
For any questions, my first name is Serhii, and my email is my_first_name@my_first_name.net.
|
t3aile/WizardLM_evol_instruct_V2_196k-Turkish | ---
language:
- tr
size_categories:
- 100K<n<1M
---
# WizardLM_evol_instruct_V2_196k-Turkish
```
Dataset Cost: USD 305
Translated with: gpt-3.5-turbo-1106
Elapsed Time: 3 hours 41 minutes
```
## Metrics:
```
English Token Count: 67.686.140
Token Count After Turkish Translation: 99.760.316
Number of Successfully Translated Rows: 143.000
``` |
CyberHarem/plume_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of plume/プリュム/翎羽 (Arknights)
This is the dataset of plume/プリュム/翎羽 (Arknights), containing 169 images and their tags.
The core tags of this character are `brown_hair, short_hair, hair_between_eyes, multicolored_hair, two-tone_hair, ahoge, white_hair, hat, black_headwear, beret, yellow_eyes, wings, bird_girl, orange_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 169 | 158.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/plume_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 169 | 144.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/plume_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 415 | 275.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/plume_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/plume_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, black_gloves, black_thighhighs, holding_polearm, solo, bird_wings, black_footwear, feathered_wings, cloak, full_body, looking_at_viewer, simple_background, infection_monitor_(arknights), standing, white_background, boots, cape, armband, closed_mouth, grey_shirt, halberd, long_sleeves, feathers |
| 1 | 5 |  |  |  |  |  | 1girl, black_gloves, black_thighhighs, cape, cloak, looking_at_viewer, simple_background, solo, cowboy_shot, elbow_gloves, holding_polearm, infection_monitor_(arknights), white_background, garter_straps, shirt, bird_wings, feathered_wings, feathers |
| 2 | 5 |  |  |  |  |  | 1girl, black_gloves, holding_polearm, solo, upper_body, cloak, looking_at_viewer, brown_eyes, cape, closed_mouth, grey_shirt, infection_monitor_(arknights), halberd |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | black_thighhighs | holding_polearm | solo | bird_wings | black_footwear | feathered_wings | cloak | full_body | looking_at_viewer | simple_background | infection_monitor_(arknights) | standing | white_background | boots | cape | armband | closed_mouth | grey_shirt | halberd | long_sleeves | feathers | cowboy_shot | elbow_gloves | garter_straps | shirt | upper_body | brown_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------------------|:------------------|:-------|:-------------|:-----------------|:------------------|:--------|:------------|:--------------------|:--------------------|:--------------------------------|:-----------|:-------------------|:--------|:-------|:----------|:---------------|:-------------|:----------|:---------------|:-----------|:--------------|:---------------|:----------------|:--------|:-------------|:-------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | X | X | | X | X | X | | X | | X | | | | | | X | X | X | X | X | | |
| 2 | 5 |  |  |  |  |  | X | X | | X | X | | | | X | | X | | X | | | | X | | X | X | X | | | | | | | X | X |
|
TRoboto/names | ---
project: Maha
license: cc-by-4.0
---
## Dataset Summary
It includes a list of Arabic names, with the meaning and origin of most names.
|
result-kand2-sdxl-wuerst-karlo/e06f76e8 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 169
num_examples: 10
download_size: 1323
dataset_size: 169
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e06f76e8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
selimyagci/dynamic-hate-speech-data | ---
license: unknown
---
|
Neuronovo/neuronovo-utc-data-goemotions | ---
dataset_info:
features:
- name: x
dtype: string
- name: y
dtype: int64
- name: label_id
dtype: int64
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 75062697
num_examples: 306616
- name: validation
num_bytes: 36845081
num_examples: 151928
- name: test
num_bytes: 36723015
num_examples: 151956
download_size: 23670038
dataset_size: 148630793
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
AdapterOcean/dollyaug-standardized_cluster_1_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4250833
num_examples: 4032
download_size: 2491457
dataset_size: 4250833
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dollyaug-standardized_cluster_1_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
olmer/wiki_mpnet_index | ---
license: cc-by-sa-3.0
---
## Semantic search over 44 million English Wikipedia paragraphs using a sentence-transformers encoder
The dataset contains:
- 43,911,155 paragraphs from 6,458,670 Wikipedia articles, stored in a zip archive;
- FAISS index with the embeddings;
- Retriever module for semantic search over the paragraphs.
The size of each paragraph varies from 20 to 2000 characters.
The embedding vector size is 768.
The index is a 4-bit-quantized, two-level IVF16384_HNSW32 index constructed with the [FAISS library](https://github.com/facebookresearch/faiss).
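Assuming the index file has been downloaded, a retrieval query could look roughly like this (a minimal sketch; the file name, `nprobe` value, and query are assumptions, and the encoder is the model named below):
```python
import faiss
from sentence_transformers import SentenceTransformer

# Load the quantized IVF-HNSW index (file name is an assumption).
index = faiss.read_index("wiki_mpnet.index")
faiss.extract_index_ivf(index).nprobe = 64  # IVF cells to visit; speed/recall trade-off

# Encode the query with the same model used to build the embeddings.
encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
query = encoder.encode(["Who invented the telephone?"])

# Retrieve the five nearest paragraphs in the 768-d embedding space.
distances, ids = index.search(query, 5)
print(ids[0])  # paragraph ids to resolve against the zip archive
```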
Sentence encoder: [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). |
hkust-nlp/deita-complexity-scorer-data | ---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---
<img src="https://huggingface.co/datasets/hkust-nlp/deita-images/resolve/main/logo-final.png" alt="Deita banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Dataset Card for Deita Complexity Scorer Training Data
[GitHub](https://github.com/hkust-nlp/deita) | [Paper](https://arxiv.org/abs/2312.15685)
Deita is an open-source project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs).
This dataset contains the training data for the Deita Complexity Scorer.
**Model Family**: Other models and the dataset can be found in the [Deita Collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4)
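For reference, a minimal loading sketch (the split name `train` is an assumption; check the repo's file layout):
```python
from datasets import load_dataset

# Load the complexity-scorer training data; "train" is an assumed split name.
ds = load_dataset("hkust-nlp/deita-complexity-scorer-data", split="train")
print(ds[0])
```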
## Performance
| Model | Align | Data Size | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) |
|------------------------------------------------|-----------|------------|----------|---------------|----------------|
| **Proprietary Models** | | | | | |
| GPT-4-Turbo | ? | -- | 9.32 | 97.70 | -- |
| GPT-4 | SFT + PPO | -- | 8.99 | 95.03 | -- |
| Claude-2 | SFT + PPO | -- | 8.06 | 91.36 | -- |
| GPT-3.5-turbo | SFT + PPO | -- | 7.94 | 89.37 | -- |
| **Open-sourced Models based on LLaMA-1-13B** | | | | | |
| LIMA | SFT | 1K SFT | 4.29 | 41.98 | 59.82 |
| WizardLM-13B | SFT | 70K SFT | 6.35 | 75.31 | 58.96 |
| Vicuna-13B-v1.3 | SFT | 125K SFT | 6.39 | 82.11 | 60.01 |
| Random | SFT | 10K SFT | 6.03 | 71.52 | 60.14 |
| DEITA-LLaMA1-13B-v1.0-sft | SFT | 10K SFT | 6.60 | 78.01 | 64.27 |
| **Open-sourced Models based on LLaMA-2-13B** | | | | | |
| Tulu-2-13B | SFT | 326K SFT | 6.70 | 78.90 | -- |
| Tulu-2-13B+DPO | SFT + DPO | 326K SFT + 60K DPO | 7.00 | 89.50 | -- |
| LLaMA2-13B-Chat | SFT + PPO | -- | 6.65 | 81.09 | -- |
| WizardLM-13B-v1.2 | SFT | >70K SFT | 7.09 | 89.17 | -- |
| Vicuna-13B-v1.5 | SFT | 125K SFT | 6.57 | 78.80 | 61.63 |
| Random | SFT | 10K SFT | 5.78 | 65.19 | 61.32 |
| DEITA-LLaMA2-13B-v1.0-sft | SFT | 10K SFT | 6.79 | 81.09 | 62.71 |
| **Open-sourced Models based on Mistral-7B** | | | | | |
| Mistral-7B-Instruct-v0.1 | -- | -- | 6.84 | 69.65 | 60.45 |
| Zephyr-7B-sft | SFT | 200K SFT | 5.32 | 75.12 | 60.93 |
| $\text{Zephyr-7B-}\beta$ | SFT + DPO | 200K SFT + 60K DPO | 7.34 | 90.60 | 66.36 |
| OpenChat-3.5 | C-RLFT | >> 70K C-RLFT | 7.81 | 88.51 | -- |
| Starling-7B | C-RLFT + APA | >>70K C-RLFT + 183K APA | 8.09 | 91.99 | -- |
| Random | SFT | 10K SFT | 5.89 | 56.90 | 61.72 |
| DEITA-7B-v1.0-sft (6K) | SFT | 6K SFT | 7.22 | 80.78 | 64.94 |
| DEITA-7B-v1.0-sft (10K) | SFT | 10K SFT | 7.32 | 81.67 | 64.00 |
| DEITA-7B-v1.0 | SFT + DPO | 6K SFT + 10K DPO | 7.55 | 90.06 | 69.86 |
## Citation
If you find the content of this project helpful, please cite our paper as follows:
```
@misc{liu2023what,
title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
year={2023},
eprint={2312.15685},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
autoevaluate/autoeval-staging-eval-project-8ef742e5-7734972 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: mrp/bert-finetuned-squad
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mrp/bert-finetuned-squad
* Dataset: squad
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@thomwolf](https://huggingface.co/thomwolf) for evaluating this model. |
ssanni/dolly-15k-RP | ---
license: cc-by-sa-3.0
---
|
tyzhu/squad_qa_wrong_title_v5_full_recite_full_passage_first_permute_rerun | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: correct_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 9054846.508642636
num_examples: 4778
- name: validation
num_bytes: 599488
num_examples: 300
download_size: 1804496
dataset_size: 9654334.508642636
---
# Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_first_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alfredplpl/wikipedia-simple-ja-500k | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 129643127
num_examples: 516932
download_size: 64505805
dataset_size: 129643127
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-3.0
task_categories:
- summarization
language:
- ja
---
# Dataset Card for "wikipedia-simple-ja-500k"
# Original Dataset
- hpprc/wikipedia-20240101
# Procedure
- Extract the first line of the article for each title from the dataset.
- Generate the answer by summarizing the line with an LLM:
- Input a RAG-like prompt to CALM 2 7B Chat.
- Format the response.
# RAG-like Prompt
```python
# The prompt asks: "What is {title}? Please summarize it in one sentence,
# referring to the following text."
f"""USER: {title}とはなんですか?次の文章を参考に一言でまとめてください。{text}
ASSISTANT: """
```
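The generation step is only described at a high level; a hypothetical sketch of feeding this prompt to CALM2-7B-Chat with `transformers` (the model ID `cyberagent/calm2-7b-chat` and the decoding settings are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b-chat")
model = AutoModelForCausalLM.from_pretrained("cyberagent/calm2-7b-chat", device_map="auto")

title, text = "富士山", "富士山は静岡県と山梨県に跨る活火山である。"
prompt = f"""USER: {title}とはなんですか?次の文章を参考に一言でまとめてください。{text}
ASSISTANT: """

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens (the one-sentence summary).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
``` |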
Thanmay/commonsense_qa-ta | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_concept
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
- name: answerKey
dtype: string
- name: itv2 ta question
dtype: string
splits:
- name: validation
num_bytes: 547460
num_examples: 1221
- name: test
num_bytes: 520757
num_examples: 1140
download_size: 510339
dataset_size: 1068217
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ytzi/the-stack-dedup-python-filtered | ---
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: ext
dtype: string
- name: lang
dtype: string
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_repo_head_hexsha
dtype: string
- name: max_stars_repo_licenses
sequence: string
- name: max_stars_count
dtype: int64
- name: max_stars_repo_stars_event_min_datetime
dtype: string
- name: max_stars_repo_stars_event_max_datetime
dtype: string
- name: max_issues_repo_path
dtype: string
- name: max_issues_repo_name
dtype: string
- name: max_issues_repo_head_hexsha
dtype: string
- name: max_issues_repo_licenses
sequence: string
- name: max_issues_count
dtype: int64
- name: max_issues_repo_issues_event_min_datetime
dtype: string
- name: max_issues_repo_issues_event_max_datetime
dtype: string
- name: max_forks_repo_path
dtype: string
- name: max_forks_repo_name
dtype: string
- name: max_forks_repo_head_hexsha
dtype: string
- name: max_forks_repo_licenses
sequence: string
- name: max_forks_count
dtype: int64
- name: max_forks_repo_forks_event_min_datetime
dtype: string
- name: max_forks_repo_forks_event_max_datetime
dtype: string
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 30218858735
num_examples: 12725978
download_size: 14628118101
dataset_size: 30218858735
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is a dataset from the-stack-dedup that has passed through 6 filters (a minimal sketch of a few appears after the list):
- remove_non_ascii
- remove_decorators
- remove_async
- remove_classes
- remove_generators
- remove_function_no_docstring
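The filter implementations are not published with this card; a minimal sketch of how a few of them could be written for Python sources (the function bodies are assumptions that mirror the filter names above):
```python
import ast

def remove_non_ascii(source: str) -> bool:
    """Keep only files whose content is pure ASCII."""
    return source.isascii()

def remove_decorators(source: str) -> bool:
    """Keep only files that use no decorators."""
    tree = ast.parse(source)  # assumes the file parses as Python
    return all(not getattr(node, "decorator_list", []) for node in ast.walk(tree))

def remove_function_no_docstring(source: str) -> bool:
    """Keep only files in which every function has a docstring."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    return all(ast.get_docstring(f) is not None for f in funcs)

def keep(source: str) -> bool:
    """A file survives only if it passes every filter."""
    return all(f(source) for f in
               (remove_non_ascii, remove_decorators, remove_function_no_docstring))
```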
|
joey234/mmlu-electrical_engineering-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: neg_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
splits:
- name: dev
num_bytes: 6493
num_examples: 5
- name: test
num_bytes: 855411
num_examples: 145
download_size: 121276
dataset_size: 861904
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-electrical_engineering-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
asyafiqe/orca_mini_v1_indonesia | ---
license: apache-2.0
---
This dataset is a modified version of psmathur's [orca_mini_v1](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset) dataset, translated into Bahasa Indonesia by Google Translate.
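The translation pipeline is not described beyond "Google Translate"; a minimal sketch with the `deep-translator` package (an assumption — not necessarily the tool the author used, and the field names are illustrative):
```python
from deep_translator import GoogleTranslator

translator = GoogleTranslator(source="en", target="id")  # id = Bahasa Indonesia

def translate_example(example: dict) -> dict:
    # Translate every string field of one instruction example.
    return {k: translator.translate(v) if isinstance(v, str) else v
            for k, v in example.items()}

print(translate_example({"instruction": "Summarize the passage.", "output": "A short summary."}))
``` |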
lca0503/GPTspeech_encodec_v2 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 42732349968
num_examples: 704563
- name: validation
num_bytes: 706650258
num_examples: 12855
- name: test
num_bytes: 700741253
num_examples: 12463
download_size: 4503561741
dataset_size: 44139741479
---
# Dataset Card for "GPTspeech_encodec_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lzmd/CI-400 | ---
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- medical
size_categories:
- n<1K
---
Over four hundred training samples simulating social-media conversations in which patients and patients' families exchange experiences with cochlear implants.
Topics include cochlear implant surgery experience, activation experience, insurance coverage and denials, device upgrades, etc. |
CyberHarem/elma_kobayashisanchinomaidragon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Elma
This is the dataset of Elma, containing 233 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 233 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 531 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 615 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 233 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 233 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 233 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 531 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 531 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 431 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 615 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 615 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
yuan-sf63/word_label_0.8_72_D | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
- name: '64'
dtype: int64
- name: '65'
dtype: int64
- name: '66'
dtype: int64
- name: '67'
dtype: int64
- name: '68'
dtype: int64
- name: '69'
dtype: int64
- name: '70'
dtype: int64
- name: '71'
dtype: int64
splits:
- name: train
num_bytes: 49395397.74539947
num_examples: 71893
- name: validation
num_bytes: 5488988.254600536
num_examples: 7989
download_size: 9068599
dataset_size: 54884386.0
---
# Dataset Card for "word_label_0.8_72_D"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PanoEvJ/T5_summarization_RLAIF | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: summary_1
dtype: string
- name: summary_2
dtype: string
splits:
- name: train
num_bytes: 162321
num_examples: 100
download_size: 105546
dataset_size: 162321
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "T5_summarization_RLAIF"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
quincyqiang/test2 | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_yunconglong__Mixtral_7Bx2_MoE_13B_DPO | ---
pretty_name: Evaluation run of yunconglong/Mixtral_7Bx2_MoE_13B_DPO
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [yunconglong/Mixtral_7Bx2_MoE_13B_DPO](https://huggingface.co/yunconglong/Mixtral_7Bx2_MoE_13B_DPO)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yunconglong__Mixtral_7Bx2_MoE_13B_DPO\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-27T12:40:24.653748](https://huggingface.co/datasets/open-llm-leaderboard/details_yunconglong__Mixtral_7Bx2_MoE_13B_DPO/blob/main/results_2024-01-27T12-40-24.653748.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6209274592388272,\n\
\ \"acc_stderr\": 0.03276428505011885,\n \"acc_norm\": 0.6256353651392412,\n\
\ \"acc_norm_stderr\": 0.033421220014659365,\n \"mc1\": 0.43818849449204406,\n\
\ \"mc1_stderr\": 0.017369236164404434,\n \"mc2\": 0.6176132094440308,\n\
\ \"mc2_stderr\": 0.015409081181909872\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5989761092150171,\n \"acc_stderr\": 0.014322255790719869,\n\
\ \"acc_norm\": 0.6544368600682594,\n \"acc_norm_stderr\": 0.013896938461145675\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6399123680541725,\n\
\ \"acc_stderr\": 0.004790445139186367,\n \"acc_norm\": 0.840071698864768,\n\
\ \"acc_norm_stderr\": 0.0036579044379436544\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7368421052631579,\n \"acc_stderr\": 0.035834961763610736,\n\
\ \"acc_norm\": 0.7368421052631579,\n \"acc_norm_stderr\": 0.035834961763610736\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n\
\ \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.028637235639800886,\n\
\ \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.028637235639800886\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n\
\ \"acc_stderr\": 0.03716177437566017,\n \"acc_norm\": 0.7291666666666666,\n\
\ \"acc_norm_stderr\": 0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\"\
: 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n\
\ \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n\
\ \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.047840607041056527,\n\
\ \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.047840607041056527\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n\
\ \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n\
\ \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055266,\n \"\
acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.025355741263055266\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n\
\ \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n\
\ \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.603225806451613,\n \"acc_stderr\": 0.027831231605767944,\n \"\
acc_norm\": 0.603225806451613,\n \"acc_norm_stderr\": 0.027831231605767944\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"\
acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.03192271569548301,\n\
\ \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.03192271569548301\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7828282828282829,\n \"acc_stderr\": 0.02937661648494563,\n \"\
acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.02937661648494563\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.025416343096306433,\n\
\ \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.025416343096306433\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6,\n \"acc_stderr\": 0.024838811988033165,\n \"acc_norm\"\
: 0.6,\n \"acc_norm_stderr\": 0.024838811988033165\n },\n \"harness|hendrycksTest-high_school_mathematics|5\"\
: {\n \"acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n\
\ \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.03120469122515002,\n \
\ \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.03120469122515002\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8201834862385321,\n \"acc_stderr\": 0.01646534546739152,\n \"\
acc_norm\": 0.8201834862385321,\n \"acc_norm_stderr\": 0.01646534546739152\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7892156862745098,\n \"acc_stderr\": 0.0286265479124374,\n \"acc_norm\"\
: 0.7892156862745098,\n \"acc_norm_stderr\": 0.0286265479124374\n },\n\
\ \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\":\
\ 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290916,\n \"\
acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n\
\ \"acc_stderr\": 0.031602951437766785,\n \"acc_norm\": 0.6681614349775785,\n\
\ \"acc_norm_stderr\": 0.031602951437766785\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8091603053435115,\n \"acc_stderr\": 0.03446513350752599,\n\
\ \"acc_norm\": 0.8091603053435115,\n \"acc_norm_stderr\": 0.03446513350752599\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097652,\n \"\
acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097652\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n\
\ \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\
\ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\
\ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8122605363984674,\n\
\ \"acc_stderr\": 0.013964393769899136,\n \"acc_norm\": 0.8122605363984674,\n\
\ \"acc_norm_stderr\": 0.013964393769899136\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.025070713719153176,\n\
\ \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.025070713719153176\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.42681564245810055,\n\
\ \"acc_stderr\": 0.016542401954631917,\n \"acc_norm\": 0.42681564245810055,\n\
\ \"acc_norm_stderr\": 0.016542401954631917\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.026090162504279056,\n\
\ \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.026090162504279056\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n\
\ \"acc_stderr\": 0.02558306248998482,\n \"acc_norm\": 0.7170418006430869,\n\
\ \"acc_norm_stderr\": 0.02558306248998482\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6851851851851852,\n \"acc_stderr\": 0.025842248700902168,\n\
\ \"acc_norm\": 0.6851851851851852,\n \"acc_norm_stderr\": 0.025842248700902168\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.455019556714472,\n\
\ \"acc_stderr\": 0.012718456618701766,\n \"acc_norm\": 0.455019556714472,\n\
\ \"acc_norm_stderr\": 0.012718456618701766\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6507352941176471,\n \"acc_stderr\": 0.02895975519682487,\n\
\ \"acc_norm\": 0.6507352941176471,\n \"acc_norm_stderr\": 0.02895975519682487\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069443,\n \
\ \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069443\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302505,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302505\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.02866685779027465,\n\
\ \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.02866685779027465\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.572139303482587,\n\
\ \"acc_stderr\": 0.03498541988407795,\n \"acc_norm\": 0.572139303482587,\n\
\ \"acc_norm_stderr\": 0.03498541988407795\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \
\ \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n\
\ \"acc_stderr\": 0.03891364495835821,\n \"acc_norm\": 0.4879518072289157,\n\
\ \"acc_norm_stderr\": 0.03891364495835821\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.43818849449204406,\n\
\ \"mc1_stderr\": 0.017369236164404434,\n \"mc2\": 0.6176132094440308,\n\
\ \"mc2_stderr\": 0.015409081181909872\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7845303867403315,\n \"acc_stderr\": 0.011555295286059279\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4351781652767248,\n \
\ \"acc_stderr\": 0.013656253875470736\n }\n}\n```"
repo_url: https://huggingface.co/yunconglong/Mixtral_7Bx2_MoE_13B_DPO
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|arc:challenge|25_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|gsm8k|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hellaswag|10_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-27T12-40-24.653748.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-27T12-40-24.653748.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- '**/details_harness|winogrande|5_2024-01-27T12-40-24.653748.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-27T12-40-24.653748.parquet'
- config_name: results
data_files:
- split: 2024_01_27T12_40_24.653748
path:
- results_2024-01-27T12-40-24.653748.parquet
- split: latest
path:
- results_2024-01-27T12-40-24.653748.parquet
---
# Dataset Card for Evaluation run of yunconglong/Mixtral_7Bx2_MoE_13B_DPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [yunconglong/Mixtral_7Bx2_MoE_13B_DPO](https://huggingface.co/yunconglong/Mixtral_7Bx2_MoE_13B_DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yunconglong__Mixtral_7Bx2_MoE_13B_DPO",
"harness_winogrande_5",
	split="latest")
```
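The aggregated metrics live in the `results` configuration; a minimal sketch for loading them (the `latest` split name comes from the configuration list above):
```python
from datasets import load_dataset

# "results" stores the aggregated metrics of the run; "latest" always
# points to the most recent evaluation.
results = load_dataset("open-llm-leaderboard/details_yunconglong__Mixtral_7Bx2_MoE_13B_DPO",
	"results",
	split="latest")
```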
## Latest results
These are the [latest results from run 2024-01-27T12:40:24.653748](https://huggingface.co/datasets/open-llm-leaderboard/details_yunconglong__Mixtral_7Bx2_MoE_13B_DPO/blob/main/results_2024-01-27T12-40-24.653748.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6209274592388272,
"acc_stderr": 0.03276428505011885,
"acc_norm": 0.6256353651392412,
"acc_norm_stderr": 0.033421220014659365,
"mc1": 0.43818849449204406,
"mc1_stderr": 0.017369236164404434,
"mc2": 0.6176132094440308,
"mc2_stderr": 0.015409081181909872
},
"harness|arc:challenge|25": {
"acc": 0.5989761092150171,
"acc_stderr": 0.014322255790719869,
"acc_norm": 0.6544368600682594,
"acc_norm_stderr": 0.013896938461145675
},
"harness|hellaswag|10": {
"acc": 0.6399123680541725,
"acc_stderr": 0.004790445139186367,
"acc_norm": 0.840071698864768,
"acc_norm_stderr": 0.0036579044379436544
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7368421052631579,
"acc_stderr": 0.035834961763610736,
"acc_norm": 0.7368421052631579,
"acc_norm_stderr": 0.035834961763610736
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6830188679245283,
"acc_stderr": 0.028637235639800886,
"acc_norm": 0.6830188679245283,
"acc_norm_stderr": 0.028637235639800886
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566017,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566017
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.047840607041056527,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.047840607041056527
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5276595744680851,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.5276595744680851,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.025355741263055266,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.025355741263055266
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377562,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377562
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.603225806451613,
"acc_stderr": 0.027831231605767944,
"acc_norm": 0.603225806451613,
"acc_norm_stderr": 0.027831231605767944
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.03192271569548301,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.03192271569548301
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.02937661648494563,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.02937661648494563
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.025416343096306433,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.025416343096306433
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6,
"acc_stderr": 0.024838811988033165,
"acc_norm": 0.6,
"acc_norm_stderr": 0.024838811988033165
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.027634907264178544,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.027634907264178544
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6386554621848739,
"acc_stderr": 0.03120469122515002,
"acc_norm": 0.6386554621848739,
"acc_norm_stderr": 0.03120469122515002
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8201834862385321,
"acc_stderr": 0.01646534546739152,
"acc_norm": 0.8201834862385321,
"acc_norm_stderr": 0.01646534546739152
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7892156862745098,
"acc_stderr": 0.0286265479124374,
"acc_norm": 0.7892156862745098,
"acc_norm_stderr": 0.0286265479124374
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.025744902532290916,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.025744902532290916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.031602951437766785,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.031602951437766785
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8091603053435115,
"acc_stderr": 0.03446513350752599,
"acc_norm": 0.8091603053435115,
"acc_norm_stderr": 0.03446513350752599
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097652,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097652
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.020930193185179333,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.020930193185179333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8122605363984674,
"acc_stderr": 0.013964393769899136,
"acc_norm": 0.8122605363984674,
"acc_norm_stderr": 0.013964393769899136
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.025070713719153176,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.025070713719153176
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.42681564245810055,
"acc_stderr": 0.016542401954631917,
"acc_norm": 0.42681564245810055,
"acc_norm_stderr": 0.016542401954631917
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7058823529411765,
"acc_stderr": 0.026090162504279056,
"acc_norm": 0.7058823529411765,
"acc_norm_stderr": 0.026090162504279056
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.02558306248998482,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.02558306248998482
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6851851851851852,
"acc_stderr": 0.025842248700902168,
"acc_norm": 0.6851851851851852,
"acc_norm_stderr": 0.025842248700902168
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.455019556714472,
"acc_stderr": 0.012718456618701766,
"acc_norm": 0.455019556714472,
"acc_norm_stderr": 0.012718456618701766
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6507352941176471,
"acc_stderr": 0.02895975519682487,
"acc_norm": 0.6507352941176471,
"acc_norm_stderr": 0.02895975519682487
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6797385620915033,
"acc_stderr": 0.018875682938069443,
"acc_norm": 0.6797385620915033,
"acc_norm_stderr": 0.018875682938069443
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302505,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302505
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.02866685779027465,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.02866685779027465
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.572139303482587,
"acc_stderr": 0.03498541988407795,
"acc_norm": 0.572139303482587,
"acc_norm_stderr": 0.03498541988407795
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4879518072289157,
"acc_stderr": 0.03891364495835821,
"acc_norm": 0.4879518072289157,
"acc_norm_stderr": 0.03891364495835821
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.43818849449204406,
"mc1_stderr": 0.017369236164404434,
"mc2": 0.6176132094440308,
"mc2_stderr": 0.015409081181909872
},
"harness|winogrande|5": {
"acc": 0.7845303867403315,
"acc_stderr": 0.011555295286059279
},
"harness|gsm8k|5": {
"acc": 0.4351781652767248,
"acc_stderr": 0.013656253875470736
}
}
```
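To fetch the raw JSON file shown above directly, one option is `huggingface_hub` (a hedged sketch; the filename is the one listed under the `results` configuration, and the exact nesting of keys inside the file may differ from the excerpt above):
```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_yunconglong__Mixtral_7Bx2_MoE_13B_DPO",
    filename="results_2024-01-27T12-40-24.653748.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)
print(list(results.keys()))  # inspect the top-level structure first
```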
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
KETI-AIR/kor_amazon_polarity | ---
language:
- ko
license: cc0-1.0
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: title
dtype: string
- name: content
dtype: string
- name: data_index_by_user
dtype: int32
splits:
- name: train
num_bytes: 2059069183
num_examples: 3600000
- name: test
num_bytes: 228905323
num_examples: 400000
download_size: 1298504656
dataset_size: 2287974506
---
# Dataset Card for amazon_polarity
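A minimal loading sketch (split and label names follow the YAML header above):
```python
from datasets import load_dataset

ds = load_dataset("KETI-AIR/kor_amazon_polarity", split="train")

example = ds[0]
# Map the integer label back to its class name ("negative" or "positive").
label_name = ds.features["label"].int2str(example["label"])
print(example["title"], label_name)
```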
## Licensing Information
The data is distributed under the [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/) license.
## Source Data Citation Information
McAuley, Julian, and Jure Leskovec. "Hidden factors and hidden topics: understanding rating dimensions with review text." In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013.
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015)
|
bazudde/Sweetpotato_images | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: sweet-potato-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sweet-potato-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x192 RGB PIL image>",
"target": 0
},
{
"image": "<256x192 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Leaf rust', 'Root rot', 'alternaria_sweet_potato_leaf_spot'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 46 |
| valid | 13 |
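A minimal loading sketch covering both splits (assuming the dataset is accessible under this repository id):
```python
from datasets import load_dataset

# "train" and "valid" are the two splits listed above.
ds = load_dataset("bazudde/Sweetpotato_images")

sample = ds["train"][0]
# `target` is the class index; `image` is a PIL image.
print(sample["target"], sample["image"].size)
```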
|
davidfant/natural-questions-chunk-12 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4702513040
num_examples: 10000
download_size: 1825603078
dataset_size: 4702513040
---
# Dataset Card for "natural-questions-chunk-12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AKKIKKIRA/eurvc | ---
license: openrail
---
|
abacusai/HellaSwag_DPO_FewShot | ---
license: apache-2.0
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 288673226
num_examples: 119715
- name: eval
num_bytes: 74508834
num_examples: 30126
download_size: 80725728
dataset_size: 363182060
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---

# Dataset Card for "HellaSwag_DPOP_FewShot"
[HellaSwag](https://rowanzellers.com/hellaswag/) is a dataset containing commonsense inference questions known to be hard for LLMs.
In the original dataset, each instance consists of a prompt, with one correct completion and three incorrect completions.
We create a paired preference-ranked dataset by forming three pairs for each prompt in the training split, pairing the correct completion with each of the three incorrect ones (see the sketch below).
An example prompt is "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles. then"
And the potential completions from the original HellaSwag dataset are:
[", the man adds wax to the windshield and cuts it.", ", a person board a ski lift, while two men supporting the head of the person wearing winter clothes snow as the we girls sled.", ", the man puts on a christmas coat, knitted with netting.", ", the man continues removing the snow on his car."]
The dataset is meant to be used to fine-tune LLMs (which have already undergone SFT) using the DPOP loss function. We used this dataset to create the [Smaug series of models](https://github.com/abacusai/smaug). See our paper for more details.
This dataset contains 119,715 training examples and 30,126 evaluation examples.
See more details in the [datasheet](https://github.com/abacusai/smaug/blob/main/datasheet.md). |
DBQ/Ounass.Product.prices.Qatar | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: Qatar - Ounass - Product-level price list
tags:
- webscraping
- ecommerce
- Ounass
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: int64
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 28197555
num_examples: 69623
download_size: 8717370
dataset_size: 28197555
---
# Ounass web scraped data
## About the website
Ounass operates in the dynamic and fast-growing **E-commerce industry** within the **EMEA region**, with a particularly strong focus on **Qatar**. The Qatari E-commerce industry is experiencing significant growth, fueled by rapid digitization, high internet penetration, and a strong inclination towards online shopping among consumers. In particular, the luxury segment in which Ounass operates sees high demand driven by the nation's affluent population. This dataset covers **Ecommerce product-list page (PLP) data** from Ounass's operations in Qatar, offering comprehensive insights into business strategies, consumer behavior, and emerging trends in this thriving marketplace.
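The header lists both `full_price` and `price` alongside a `flg_discount` flag, so one natural use is measuring markdowns; a hedged sketch, assuming `flg_discount == 1` marks discounted rows:
```python
from datasets import load_dataset

ds = load_dataset("DBQ/Ounass.Product.prices.Qatar", split="train")
df = ds.to_pandas()

# Share of discounted listings.
print("discounted share:", df["flg_discount"].mean())

# Average relative markdown among discounted rows with a valid full price.
on_sale = df[(df["flg_discount"] == 1) & (df["full_price"] > 0)]
print("avg discount:", (1 - on_sale["price"] / on_sale["full_price"]).mean())
```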
## Link to **dataset**
[Qatar - Ounass - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Ounass%20Product-prices%20Qatar/r/rec6BSAJjNfjYBEUp)
|
Asap7772/alpaca_human_preference_gold | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: preference
dtype: int64
- name: raw_preference
dtype: int64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: formatted_text_1
dtype: string
- name: formatted_text_2
dtype: string
- name: text_1
dtype: string
- name: text_2
dtype: string
splits:
- name: preference
num_bytes: 24734316
num_examples: 9691
download_size: 13145561
dataset_size: 24734316
configs:
- config_name: default
data_files:
- split: preference
path: data/preference-*
---
# Dataset Card for "alpaca_human_preference_gold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-xsum-default-ca7304-1504954794 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: morenolq/bart-base-xsum
metrics: ['bertscore']
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: morenolq/bart-base-xsum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. |
greathero/evenmorex10-newthreeclass-newercontrailsvalidationdataset | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 334802526.055
num_examples: 16695
download_size: 69938746
dataset_size: 334802526.055
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
litmonster0521/pencildrawing | ---
license: openrail
---
|
anjunhu/naively_captioned_CUB2002011_test_10shot | ---
dataset_info:
features:
- name: text
dtype: string
- name: text_cupl
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 54878741.0
num_examples: 2000
download_size: 43969743
dataset_size: 54878741.0
---
# Dataset Card for "naively_captioned_CUB2002011_test_10shot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kudelabs/controlnet-images | ---
license: openrail
---
for public access |
distilled-from-one-sec-cv12/chunk_185 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 748897364
num_examples: 145927
download_size: 764119144
dataset_size: 748897364
---
# Dataset Card for "chunk_185"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pgurazada1/summarization-demo-logs | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
DaviGamer/KennyMaccormic | ---
license: openrail
---
|
hugaru/embeel | ---
license: mit
---
|
juliusGauth/france_stations | ---
task_categories:
- token-classification
language:
- fr
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
alexandrainst/danish-citizen-tests | ---
dataset_info:
features:
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: answer
dtype: string
- name: test_type
dtype: string
- name: year
dtype: int64
- name: version
dtype: string
- name: question_id
dtype: int64
splits:
- name: train
num_bytes: 125902
num_examples: 720
download_size: 48325
dataset_size: 125902
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc0-1.0
language:
- da
size_categories:
- n<1K
---
# Dataset Card for "danish-citizen-tests"
## Dataset Description
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of dataset:** 126 KB
- **Repository:** https://gist.github.com/saattrupdan/91c3fd53ceae252dd54439b45736c2e0
### Dataset Summary
This dataset contains tests for citizenship ("indfødsretsprøven") and permanent residence ("medborgerskabsprøven") in Denmark, from the years 2016-2023.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
An example from the dataset looks as follows.
```
{
'question': 'Må en dommer bære religiøse symboler i en retssal i Danmark?',
'option_a': 'Ja',
'option_b': 'Nej',
'option_c': None,
'answer': 'B',
'test_type': 'indfødsretsprøven',
'year': 2020,
'version': 'summer',
'question_id': 1
}
```
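A minimal sketch for loading the dataset and selecting one test type (field names as documented below):
```python
from datasets import load_dataset

ds = load_dataset("alexandrainst/danish-citizen-tests", split="train")

# Keep only the citizenship-test questions from 2020.
citizenship_2020 = ds.filter(
    lambda x: x["test_type"] == "indfødsretsprøven" and x["year"] == 2020
)
print(len(citizenship_2020))
```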
### Data Fields
- `question`: a `string` feature.
- `option_a`: a `string` feature.
- `option_b`: a `string` feature.
- `option_c`: a `string` feature.
- `answer`: a `string` feature.
- `test_type`: a `string` feature.
- `year`: an `int64` feature.
- `version`: a `string` feature.
- `question_id`: an `int64` feature.
## Dataset Creation
### Curation Rationale
There is no publicly available dataset testing knowledge about Danish society.
### Source Data
These tests are all available as PDFs at [danskogproever.dk](https://danskogproever.dk/) and were extracted using [this Python script](https://gist.github.com/saattrupdan/91c3fd53ceae252dd54439b45736c2e0).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://huggingface.co/saattrupdan) from the [The Alexandra
Institute](https://alexandra.dk/)
### Licensing Information
The dataset is licensed under the [CC0
license](https://creativecommons.org/share-your-work/public-domain/cc0/). |