datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
ai-ml-ops-eng/ru-quiz-qa | ---
license: unknown
---
|
David-Egea/phishing-texts | ---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
tags:
- phishing
- text
pretty_name: Phishing Texts Dataset
---
## Phishing Texts Dataset 🎣
### Description:
This dataset is designed for training text classifiers that determine whether a message or email is a phishing attempt.
### Dataset Information 📨:
The dataset consists of more than 20,000 text entries (messages and emails), each labelled as a phishing attempt or not.
Data is structured in two columns:
- `text`: The text of the message or email.
- `phising`: An indicator of whether the message in the `text` column is a phishing attempt (1) or not (0).
The dataset has undergone a cleaning and preprocessing pass to remove duplicate entries.
It is worth mentioning that the dataset is reasonably **balanced**, with 62% non-phishing and 38% phishing instances.
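The two-column layout above maps directly onto a standard text-classification setup. A minimal sketch using scikit-learn (an assumption, not part of this card; the toy messages and labels below are invented, not drawn from the dataset):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the `text` and `phising` columns (1 = phishing, 0 = not).
texts = [
    "Your account is locked, verify at http://example-bank.top now",
    "Meeting moved to 3pm, see you in room B",
    "You won a prize! Send your card number to claim it",
    "Lunch tomorrow?",
]
labels = [1, 0, 1, 0]

# TF-IDF features + logistic regression, a common baseline for this task.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Predict on a new message: 1 for phishing, 0 otherwise.
pred = clf.predict(["Verify your account now to avoid suspension"])[0]
```

With the full dataset loaded, the real `text` and `phising` columns would replace the toy lists above.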
### Data Sources 📖:
This dataset has been constructed from the following sources:
- [Hugging Face - Phishing Email Dataset](https://huggingface.co/datasets/zefang-liu/phishing-email-dataset)
- [Hugging Face - Phishing Dataset](https://huggingface.co/datasets/ealvaradob/phishing-dataset)
- [Kaggle - Phishing Emails](https://www.kaggle.com/datasets/subhajournal/phishingemails)
- [Kaggle - Phishing Email Data by Type](https://www.kaggle.com/datasets/charlottehall/phishing-email-data-by-type)
> Big thanks to all the creators of these datasets for their awesome work! 🙌
*In some of the aforementioned datasets, it was identified that the data overlapped.
To avoid redundant values, duplicate entries have been removed from this dataset during the last data cleaning phase.*
|
zen-E/ANLI-simcse-roberta-large-embeddings-pca-256 | ---
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
---
This dataset contains every pair from 'https://sbert.net/datasets/AllNLI.tsv.gz' except those labeled 'neutral', together with the corresponding text embeddings produced by 'princeton-nlp/unsup-simcse-roberta-large'. A fitted PCA object is included to reduce the embeddings to 256 dimensions.
To load the dictionary of teacher embeddings for the ANLI dataset:
```python
!git clone https://huggingface.co/datasets/zen-E/ANLI-simcse-roberta-large-embeddings-pca-256

import joblib
import torch

# Load the sentence -> embedding dictionary produced by the teacher model.
teacher_embeddings = torch.load("./ANLI-simcse-roberta-large-embeddings-pca-256/anli_train_simcse_robertra_sent_embed.pt")

# If dimension reduction to 256 is required, apply the fitted PCA object.
pca = joblib.load("ANLI-simcse-roberta-large-embeddings-pca-256/pca_model.sav")
if pca is not None:
    all_sents = sorted(teacher_embeddings.keys())
    teacher_embeddings_values = torch.stack([teacher_embeddings[s] for s in all_sents], dim=0).numpy()
    teacher_embeddings_values_trans = pca.transform(teacher_embeddings_values)
    teacher_embeddings = {k: torch.tensor(v) for k, v in zip(all_sents, teacher_embeddings_values_trans)}
``` |
chenghao/NEWS-COPY-train | ---
dataset_info:
features:
- name: Text 1
dtype: string
- name: Text 2
dtype: string
- name: Label
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 285532211
num_examples: 73928
- name: dev
num_bytes: 18222482
num_examples: 6288
download_size: 131881405
dataset_size: 303754693
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
license: unknown
---
# NEWS COPY
This dataset contains the training and dev splits of the NEWS COPY dataset. The original source can be found on [GitHub](https://github.com/dell-research-harvard/NEWS-COPY). The license is unclear.
It contains the following data:
- Historical Newspapers
Evaluation datasets can be found at [chenghao/NEWS-COPY-eval](https://huggingface.co/datasets/chenghao/NEWS-COPY-eval/).
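Each record pairs two article texts with a duplicate label, so a simple string-similarity baseline can be sketched against the schema above (the field names come from the feature list; the texts and label value below are invented for illustration):

```python
from difflib import SequenceMatcher

# A toy record shaped like the dataset's features: Text 1, Text 2, Label.
record = {
    "Text 1": "Storm damages bridge on Main Street, repairs expected next week.",
    "Text 2": "Storm damages bridge on Main St.; repairs expected next week.",
    "Label": "duplicate",  # labels are stored as strings in the schema
}

# Character-level similarity ratio in [0, 1]; near-duplicates score high.
ratio = SequenceMatcher(None, record["Text 1"], record["Text 2"]).ratio()
is_duplicate = ratio > 0.8  # threshold chosen for illustration only
```

The paper the card cites studies noise-robust de-duplication precisely because such surface-similarity baselines break down on noisy OCR'd newspaper text.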
## Citation
```
@inproceedings{silcock-etal-2020-noise,
title = "Noise-Robust De-Duplication at Scale",
author = "Silcock, Emily and D'Amico-Wong, Luca and Yang, Jinglin and Dell, Melissa",
booktitle = "International Conference on Learning Representations (ICLR)",
year = "2023",
}
```
|
Yorai/detect-waste_loading_script | ---
dataset_info:
config_name: taco-multi
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': metals_and_plastic
'1': other
'2': non_recyclable
'3': glass
'4': paper
'5': bio
'6': unknown
splits:
- name: train
num_bytes: 1006510
num_examples: 3647
- name: test
num_bytes: 248312
num_examples: 915
download_size: 10265127938
dataset_size: 1254822
language:
- en
tags:
- climate
pretty_name: detect-waste
size_categories:
- 1K<n<10K
---
# Dataset Card for detect-waste
## Dataset Description
- **Homepage:** https://github.com/wimlds-trojmiasto/detect-waste
### Dataset Summary
An AI4Good project for detecting waste in the environment: www.detectwaste.ml.
Our latest results were published in the Waste Management journal, in an article titled "Deep learning-based waste detection in natural and urban environments".
You can find more technical details in our technical report, "Waste detection in Pomerania: non-profit project for detecting waste in environment".
Did you know that we produce 300 million tons of plastic every year, and only a fraction of it is properly recycled?
The idea of the detect-waste project is to use Artificial Intelligence to detect plastic waste in the environment. Our solution is applicable to video and photography. Our goal is to use AI for Good.
### Supported Tasks and Leaderboards
Object Detection
### Languages
English
### Data Fields
https://github.com/wimlds-trojmiasto/detect-waste/tree/main/annotations
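The annotation layout can be illustrated with a toy record matching the feature schema above (the values are invented; `bbox` is a length-4 float sequence, and the `category` ids follow the class-label mapping declared in this card's metadata):

```python
# Class-label names as declared in the dataset metadata (ids 0-6).
CATEGORY_NAMES = [
    "metals_and_plastic", "other", "non_recyclable",
    "glass", "paper", "bio", "unknown",
]

# A toy `objects` entry shaped like the schema; all values are invented.
objects = {
    "id": [101, 102],
    "area": [2400, 900],
    "bbox": [[10.0, 20.0, 60.0, 40.0], [150.0, 80.0, 30.0, 30.0]],
    "category": [0, 3],
}

# Resolve category ids to human-readable names.
names = [CATEGORY_NAMES[c] for c in objects["category"]]
```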
## Dataset Creation
The images are post-processed to remove EXIF data and reorient them as required. Some images were labelled without the EXIF rotation in mind; those are not rotated at all, but still have their EXIF metadata removed.
### Personal and Sensitive Information
**BEWARE**: This repository was created by a third party and is not affiliated in any way with the original detect-waste creators.
## Considerations for Using the Data
### Licensing Information
https://raw.githubusercontent.com/wimlds-trojmiasto/detect-waste/main/LICENSE |
N1lanser/openassistant_best_replies_train-csv | ---
license: mit
---
|
HanxuHU/mmmu_tr_filter | ---
dataset_info:
- config_name: Accounting
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 106588.13333333333
num_examples: 2
download_size: 188905
dataset_size: 106588.13333333333
- config_name: Agriculture
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 119217599.0
num_examples: 30
download_size: 119223838
dataset_size: 119217599.0
- config_name: Architecture_and_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 433065.8
num_examples: 18
download_size: 468287
dataset_size: 433065.8
- config_name: Art
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 29934575.0
num_examples: 30
download_size: 29942059
dataset_size: 29934575.0
- config_name: Art_Theory
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 33481314.0
num_examples: 30
download_size: 29784005
dataset_size: 33481314.0
- config_name: Basic_Medical_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 3988372.2
num_examples: 29
download_size: 4093748
dataset_size: 3988372.2
- config_name: Biology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 7642794.499999999
num_examples: 27
download_size: 8023622
dataset_size: 7642794.499999999
- config_name: Chemistry
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1366662.0
num_examples: 27
download_size: 1363678
dataset_size: 1366662.0
- config_name: Clinical_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 10882501.0
num_examples: 30
download_size: 10888211
dataset_size: 10882501.0
- config_name: Computer_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1934158.1333333333
num_examples: 28
download_size: 2009878
dataset_size: 1934158.1333333333
- config_name: Design
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 17923052.0
num_examples: 30
download_size: 16227867
dataset_size: 17923052.0
- config_name: Diagnostics_and_Laboratory_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 37106101.0
num_examples: 30
download_size: 37090121
dataset_size: 37106101.0
- config_name: Economics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 644572.7666666667
num_examples: 13
download_size: 929257
dataset_size: 644572.7666666667
- config_name: Electronics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 641460.0
num_examples: 30
download_size: 645006
dataset_size: 641460.0
- config_name: Energy_and_Power
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1642432.0
num_examples: 30
download_size: 1647101
dataset_size: 1642432.0
- config_name: Finance
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 35718.433333333334
num_examples: 1
download_size: 31806
dataset_size: 35718.433333333334
- config_name: Geography
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 6448993.233333333
num_examples: 29
download_size: 6612112
dataset_size: 6448993.233333333
- config_name: History
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 8232083.733333333
num_examples: 28
download_size: 8207244
dataset_size: 8232083.733333333
- config_name: Literature
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 14241094.0
num_examples: 30
download_size: 14247199
dataset_size: 14241094.0
- config_name: Manage
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1967091.6
num_examples: 18
download_size: 2084337
dataset_size: 1967091.6
- config_name: Marketing
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 343837.3333333333
num_examples: 7
download_size: 860258
dataset_size: 343837.3333333333
- config_name: Materials
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1997838.6666666667
num_examples: 26
download_size: 2199515
dataset_size: 1997838.6666666667
- config_name: Math
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1396426.2666666666
num_examples: 29
download_size: 1437571
dataset_size: 1396426.2666666666
- config_name: Mechanical_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 875271.0
num_examples: 30
download_size: 877212
dataset_size: 875271.0
- config_name: Music
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 9359391.0
num_examples: 30
download_size: 9364095
dataset_size: 9359391.0
- config_name: Pharmacy
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1435675.3333333333
num_examples: 26
download_size: 1330784
dataset_size: 1435675.3333333333
- config_name: Physics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1114295.0
num_examples: 30
download_size: 1117802
dataset_size: 1114295.0
- config_name: Psychology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 3964965.3
num_examples: 27
download_size: 3979235
dataset_size: 3964965.3
- config_name: Public_Health
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 251566.83333333334
num_examples: 5
download_size: 672327
dataset_size: 251566.83333333334
- config_name: Sociology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 17840094.633333333
num_examples: 29
download_size: 17596464
dataset_size: 17840094.633333333
configs:
- config_name: Accounting
data_files:
- split: validation
path: Accounting/validation-*
- config_name: Agriculture
data_files:
- split: validation
path: Agriculture/validation-*
- config_name: Architecture_and_Engineering
data_files:
- split: validation
path: Architecture_and_Engineering/validation-*
- config_name: Art
data_files:
- split: validation
path: Art/validation-*
- config_name: Art_Theory
data_files:
- split: validation
path: Art_Theory/validation-*
- config_name: Basic_Medical_Science
data_files:
- split: validation
path: Basic_Medical_Science/validation-*
- config_name: Biology
data_files:
- split: validation
path: Biology/validation-*
- config_name: Chemistry
data_files:
- split: validation
path: Chemistry/validation-*
- config_name: Clinical_Medicine
data_files:
- split: validation
path: Clinical_Medicine/validation-*
- config_name: Computer_Science
data_files:
- split: validation
path: Computer_Science/validation-*
- config_name: Design
data_files:
- split: validation
path: Design/validation-*
- config_name: Diagnostics_and_Laboratory_Medicine
data_files:
- split: validation
path: Diagnostics_and_Laboratory_Medicine/validation-*
- config_name: Economics
data_files:
- split: validation
path: Economics/validation-*
- config_name: Electronics
data_files:
- split: validation
path: Electronics/validation-*
- config_name: Energy_and_Power
data_files:
- split: validation
path: Energy_and_Power/validation-*
- config_name: Finance
data_files:
- split: validation
path: Finance/validation-*
- config_name: Geography
data_files:
- split: validation
path: Geography/validation-*
- config_name: History
data_files:
- split: validation
path: History/validation-*
- config_name: Literature
data_files:
- split: validation
path: Literature/validation-*
- config_name: Manage
data_files:
- split: validation
path: Manage/validation-*
- config_name: Marketing
data_files:
- split: validation
path: Marketing/validation-*
- config_name: Materials
data_files:
- split: validation
path: Materials/validation-*
- config_name: Math
data_files:
- split: validation
path: Math/validation-*
- config_name: Mechanical_Engineering
data_files:
- split: validation
path: Mechanical_Engineering/validation-*
- config_name: Music
data_files:
- split: validation
path: Music/validation-*
- config_name: Pharmacy
data_files:
- split: validation
path: Pharmacy/validation-*
- config_name: Physics
data_files:
- split: validation
path: Physics/validation-*
- config_name: Psychology
data_files:
- split: validation
path: Psychology/validation-*
- config_name: Public_Health
data_files:
- split: validation
path: Public_Health/validation-*
- config_name: Sociology
data_files:
- split: validation
path: Sociology/validation-*
---
|
Chasen64/DatasetPruebaChas | ---
license: mit
---
|
Multimodal-Fatima/Caltech101_with_background_test_facebook_opt_6.7b_Visclues_ns_6084 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_16
num_bytes: 101626234.5
num_examples: 6084
- name: fewshot_1_bs_16
num_bytes: 103738576.5
num_examples: 6084
- name: fewshot_3_bs_16
num_bytes: 107968014.5
num_examples: 6084
download_size: 287673188
dataset_size: 313332825.5
---
# Dataset Card for "Caltech101_with_background_test_facebook_opt_6.7b_Visclues_ns_6084"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EulerianKnight/breast-histopathology-images-train-test-valid-split | ---
license: apache-2.0
task_categories:
- image-classification
size_categories:
- 100K<n<1M
---
# Breast Histopathology Image dataset
- This dataset is simply a rearrangement of the original dataset on Kaggle: https://www.kaggle.com/datasets/paultimothymooney/breast-histopathology-images
- Data Citation: https://www.ncbi.nlm.nih.gov/pubmed/27563488 , http://spie.org/Publications/Proceedings/Paper/10.1117/12.2043872
- The original dataset has structure:
<pre>
|-- patient_id
|-- class(0 and 1)
</pre>
- The present dataset has the following structure:
<pre>
|-- train
|-- class(0 and 1)
|-- valid
|-- class(0 and 1)
|-- test
    |-- class(0 and 1)
</pre> |
CyberHarem/silverash_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of silverash_arknights
This is the dataset of silverash_arknights, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, etc.); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 408 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 408 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 408 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 408 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Neu256/LLama_ru_for_fine-turing | ---
license: mit
---
|
Hadnet/olavo-articles-17k-dataset-text | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9762976
num_examples: 17361
download_size: 5498669
dataset_size: 9762976
---
# Dataset Card for "olavo-notes-dataset-text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nexdata/Japanese_Speaking_English_Speech_Data_by_Mobile_Phone | ---
---
# Dataset Card for Nexdata/Japanese_Speaking_English_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1048?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
400 native Japanese speakers were involved, balanced for gender. The recording corpus is rich in content, covering a wide range of domains such as generic command and control, human-machine interaction, smart home, and in-car use. The transcription corpus has been manually proofread to ensure high accuracy.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1048?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
English (spoken by native Japanese speakers)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
liuyanchen1015/MULTI_VALUE_mrpc_plural_to_singular_human | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 182718
num_examples: 648
- name: train
num_bytes: 399568
num_examples: 1406
- name: validation
num_bytes: 38546
num_examples: 134
download_size: 411962
dataset_size: 620832
---
# Dataset Card for "MULTI_VALUE_mrpc_plural_to_singular_human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pszemraj/fleece2instructions-codealpaca | ---
license: cc-by-nc-4.0
task_categories:
- text2text-generation
- text-generation
language:
- en
size_categories:
- 10K<n<100K
tags:
- instructions
- domain adaptation
---
# codealpaca for text2text generation
This dataset was downloaded from the [sahil280114/codealpaca](https://github.com/sahil280114/codealpaca) github repo and parsed into text2text format for "generating" instructions.
It was downloaded under the **wonderful** Creative Commons Attribution-NonCommercial 4.0 International Public License (see snapshots of the [repo](https://web.archive.org/web/20230325040745/https://github.com/sahil280114/codealpaca) and [data license](https://web.archive.org/web/20230325041314/https://github.com/sahil280114/codealpaca/blob/master/DATA_LICENSE)), so that license applies to this dataset.
Note that the `inputs` and `instruction` columns in the original dataset have been aggregated together for text2text generation. Each has a token with either `<instruction>` or `<inputs>` in front of the relevant text, both for model understanding and regex separation later.
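Downstream, the aggregated text can be split back into its parts with a regex; the following is a minimal sketch (the exact token spelling and field order are assumptions based on the description above, so verify them against the actual rows):

```python
import re

# Matches '<instruction> ...' optionally followed by '<inputs> ...';
# a majority of rows are expected to have no <inputs> segment.
_PATTERN = re.compile(r"<instruction>\s*(.*?)\s*(?:<inputs>\s*(.*))?$", re.DOTALL)

def split_instruction_inputs(text):
    """Split an aggregated 'instructions_inputs' string into (instruction, inputs).

    Returns inputs=None for rows that carry no <inputs> segment.
    """
    match = _PATTERN.match(text)
    if match is None:
        raise ValueError("text does not start with an <instruction> token")
    return match.group(1), match.group(2)
```

The lazy first group stops at the `<inputs>` token when one is present, and the optional second group simply stays `None` otherwise.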
## structure
dataset structure:
```python
DatasetDict({
train: Dataset({
features: ['instructions_inputs', 'output'],
num_rows: 18014
})
test: Dataset({
features: ['instructions_inputs', 'output'],
num_rows: 1000
})
validation: Dataset({
features: ['instructions_inputs', 'output'],
num_rows: 1002
})
})
```
## example
The example shows what rows **without** inputs look like (approximately 60% of the dataset, according to the repo). Note the special tokens used to identify what is what when the model generates text: `<instruction>` and `<inputs>`:

## token lengths
bart

t5
 |
skarwa/scientific_papers_segmented | ---
license: mit
---
|
TinyPixel/airo-1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: question_id
dtype: float64
splits:
- name: train
num_bytes: 57737476
num_examples: 34204
download_size: 30991700
dataset_size: 57737476
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "airo-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Saba06huggingface/resume_dataset | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
---
# Dataset Card for Saba06huggingface/resume_dataset
A collection of Resume Examples taken from livecareer.com for categorizing a given resume into any of the labels defined in the dataset.
## Dataset Details
### Dataset Description
#### Content
Contains 2,400+ resumes in string as well as PDF format.
PDFs are stored in the data folder, organized into folders named after their labels, with each resume residing inside its label's folder as a PDF file whose filename is the ID defined in the CSV.
Inside the CSV:
- **ID**: Unique identifier and file name for the respective PDF.
- **Resume_str**: The resume text in plain-string format.
- **Resume_html**: The resume data in HTML format, as captured during web scraping.
- **Category**: The job category the resume was used to apply for.

Present categories are:
HR, Designer, Information-Technology, Teacher, Advocate, Business-Development, Healthcare,
Fitness, Agriculture, BPO, Sales, Consultant, Digital-Media, Automobile, Chef, Finance, Apparel,
Engineering, Accountant, Construction, Public-Relations, Banking, Arts, Aviation
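As a sketch of the layout described above, CSV rows can be joined to their PDF paths as follows (the column names and the `data/<Category>/<ID>.pdf` layout are assumptions taken from this description, so verify them against the downloaded files):

```python
import csv
from pathlib import Path

def iter_resumes(csv_path, data_dir="data"):
    """Yield (pdf_path, resume_text, category) for each row of the CSV.

    Assumes columns named ID, Resume_str and Category, and PDFs laid out
    as <data_dir>/<Category>/<ID>.pdf, as described in this card.
    """
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            pdf_path = Path(data_dir) / row["Category"] / f"{row['ID']}.pdf"
            yield pdf_path, row["Resume_str"], row["Category"]
```

This keeps the string text and the corresponding PDF file together, which is convenient when training a text classifier but auditing predictions against the original rendered resume.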
## Dataset Card Contact
Saba06huggingface/resume_dataset |
ryanyang0/latexify | ---
license: mit
---
|
ludiusvox/OZ | ---
license: bsd
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 8601182
num_examples: 31
download_size: 5572388
dataset_size: 8601182
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reyrg/thermal-camera_v3 | ---
license: unknown
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 766087220.0
num_examples: 546
download_size: 49415770
dataset_size: 766087220.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
smangrul/hf-stack-v1 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 91907731
num_examples: 5905
download_size: 30589828
dataset_size: 91907731
---
# Dataset Card for "hf-stack-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MohammedNasri/cv_11_arabic_test_denoisy_II | ---
dataset_info:
features:
- name: audio
sequence: float64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 5817636498
num_examples: 10440
download_size: 2897757284
dataset_size: 5817636498
---
# Dataset Card for "cv_11_arabic_test_denoisy_II"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/nfcorpus_train | ---
pretty_name: '`nfcorpus/train`'
viewer: false
source_datasets: ['irds/nfcorpus']
task_categories:
- text-retrieval
---
# Dataset Card for `nfcorpus/train`
The `nfcorpus/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=2,594
- `qrels`: (relevance assessments); count=139,350
- For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/nfcorpus_train', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'all': ...}
qrels = load_dataset('irds/nfcorpus_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Boteva2016Nfcorpus,
title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval",
author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler",
booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})",
location = "Padova, Italy",
publisher = "Springer",
year = 2016
}
```
|
rokset3/136Mkeystrokes | ---
dataset_info:
features:
- name: PARTICIPANT_ID
dtype: int64
- name: TEST_SECTION_ID
dtype: int64
- name: SENTENCE
dtype: string
- name: USER_INPUT
dtype: string
- name: KEYSTROKE_ID
dtype: int64
- name: PRESS_TIME
dtype: int64
- name: RELEASE_TIME
dtype: int64
- name: LETTER
dtype: string
- name: KEYCODE
dtype: float64
splits:
- name: train
num_bytes: 17618096680
num_examples: 113719769
download_size: 2735520752
dataset_size: 17618096680
---
# Dataset Card for "136Mkeystrokes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_79_1713094177 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3160558
num_examples: 7968
download_size: 1589578
dataset_size: 3160558
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_KaeriJenti__Kaori-34b-v2 | ---
pretty_name: Evaluation run of KaeriJenti/Kaori-34b-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [KaeriJenti/Kaori-34b-v2](https://huggingface.co/KaeriJenti/Kaori-34b-v2) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KaeriJenti__Kaori-34b-v2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-23T19:17:38.902154](https://huggingface.co/datasets/open-llm-leaderboard/details_KaeriJenti__Kaori-34b-v2/blob/main/results_2023-12-23T19-17-38.902154.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2562435688049368,\n\
\ \"acc_stderr\": 0.03087677995486888,\n \"acc_norm\": 0.25622099120034325,\n\
\ \"acc_norm_stderr\": 0.03166775316506421,\n \"mc1\": 0.2864137086903305,\n\
\ \"mc1_stderr\": 0.015826142439502346,\n \"mc2\": 0.49462441219025927,\n\
\ \"mc2_stderr\": 0.016011015086112988\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.189419795221843,\n \"acc_stderr\": 0.011450705115910769,\n\
\ \"acc_norm\": 0.23890784982935154,\n \"acc_norm_stderr\": 0.012461071376316614\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.27394941246763593,\n\
\ \"acc_stderr\": 0.004450718673552667,\n \"acc_norm\": 0.2896833300139414,\n\
\ \"acc_norm_stderr\": 0.004526883021027624\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2740740740740741,\n\
\ \"acc_stderr\": 0.03853254836552003,\n \"acc_norm\": 0.2740740740740741,\n\
\ \"acc_norm_stderr\": 0.03853254836552003\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.2236842105263158,\n \"acc_stderr\": 0.033911609343436025,\n\
\ \"acc_norm\": 0.2236842105263158,\n \"acc_norm_stderr\": 0.033911609343436025\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.23,\n\
\ \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \
\ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2188679245283019,\n \"acc_stderr\": 0.025447863825108594,\n\
\ \"acc_norm\": 0.2188679245283019,\n \"acc_norm_stderr\": 0.025447863825108594\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.03476590104304136,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.03476590104304136\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.0416333199893227,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.0416333199893227\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.3,\n\
\ \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909282,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909282\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n\
\ \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n\
\ \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.04023382273617747,\n\
\ \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.04023382273617747\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n\
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.30638297872340425,\n \"acc_stderr\": 0.030135906478517563,\n\
\ \"acc_norm\": 0.30638297872340425,\n \"acc_norm_stderr\": 0.030135906478517563\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.20175438596491227,\n\
\ \"acc_stderr\": 0.037752050135836386,\n \"acc_norm\": 0.20175438596491227,\n\
\ \"acc_norm_stderr\": 0.037752050135836386\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.25517241379310346,\n \"acc_stderr\": 0.03632984052707842,\n\
\ \"acc_norm\": 0.25517241379310346,\n \"acc_norm_stderr\": 0.03632984052707842\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2671957671957672,\n \"acc_stderr\": 0.02278967314577657,\n \"\
acc_norm\": 0.2671957671957672,\n \"acc_norm_stderr\": 0.02278967314577657\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n\
\ \"acc_stderr\": 0.0404061017820884,\n \"acc_norm\": 0.2857142857142857,\n\
\ \"acc_norm_stderr\": 0.0404061017820884\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.3161290322580645,\n \"acc_stderr\": 0.02645087448904277,\n \"\
acc_norm\": 0.3161290322580645,\n \"acc_norm_stderr\": 0.02645087448904277\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.19704433497536947,\n \"acc_stderr\": 0.027986724666736205,\n \"\
acc_norm\": 0.19704433497536947,\n \"acc_norm_stderr\": 0.027986724666736205\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.24,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\"\
: 0.24,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.23030303030303031,\n \"acc_stderr\": 0.032876667586034886,\n\
\ \"acc_norm\": 0.23030303030303031,\n \"acc_norm_stderr\": 0.032876667586034886\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.35353535353535354,\n \"acc_stderr\": 0.03406086723547153,\n \"\
acc_norm\": 0.35353535353535354,\n \"acc_norm_stderr\": 0.03406086723547153\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.37823834196891193,\n \"acc_stderr\": 0.03499807276193339,\n\
\ \"acc_norm\": 0.37823834196891193,\n \"acc_norm_stderr\": 0.03499807276193339\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2128205128205128,\n \"acc_stderr\": 0.020752423722128016,\n\
\ \"acc_norm\": 0.2128205128205128,\n \"acc_norm_stderr\": 0.020752423722128016\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3037037037037037,\n \"acc_stderr\": 0.02803792996911499,\n \
\ \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.02803792996911499\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.21428571428571427,\n \"acc_stderr\": 0.026653531596715477,\n\
\ \"acc_norm\": 0.21428571428571427,\n \"acc_norm_stderr\": 0.026653531596715477\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389023,\n \"\
acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389023\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.3155963302752294,\n \"acc_stderr\": 0.019926117513869666,\n \"\
acc_norm\": 0.3155963302752294,\n \"acc_norm_stderr\": 0.019926117513869666\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.24074074074074073,\n \"acc_stderr\": 0.0291575221846056,\n \"\
acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.0291575221846056\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.30392156862745096,\n \"acc_stderr\": 0.03228210387037892,\n \"\
acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.03228210387037892\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.22362869198312235,\n \"acc_stderr\": 0.027123298205229972,\n \
\ \"acc_norm\": 0.22362869198312235,\n \"acc_norm_stderr\": 0.027123298205229972\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.24663677130044842,\n\
\ \"acc_stderr\": 0.028930413120910894,\n \"acc_norm\": 0.24663677130044842,\n\
\ \"acc_norm_stderr\": 0.028930413120910894\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2366412213740458,\n \"acc_stderr\": 0.03727673575596918,\n\
\ \"acc_norm\": 0.2366412213740458,\n \"acc_norm_stderr\": 0.03727673575596918\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.3140495867768595,\n \"acc_stderr\": 0.042369647530410184,\n \"\
acc_norm\": 0.3140495867768595,\n \"acc_norm_stderr\": 0.042369647530410184\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.2883435582822086,\n \"acc_stderr\": 0.03559039531617342,\n\
\ \"acc_norm\": 0.2883435582822086,\n \"acc_norm_stderr\": 0.03559039531617342\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.23214285714285715,\n\
\ \"acc_stderr\": 0.04007341809755805,\n \"acc_norm\": 0.23214285714285715,\n\
\ \"acc_norm_stderr\": 0.04007341809755805\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.27184466019417475,\n \"acc_stderr\": 0.044052680241409216,\n\
\ \"acc_norm\": 0.27184466019417475,\n \"acc_norm_stderr\": 0.044052680241409216\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19230769230769232,\n\
\ \"acc_stderr\": 0.025819233256483706,\n \"acc_norm\": 0.19230769230769232,\n\
\ \"acc_norm_stderr\": 0.025819233256483706\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.27330779054916987,\n\
\ \"acc_stderr\": 0.015936681062628556,\n \"acc_norm\": 0.27330779054916987,\n\
\ \"acc_norm_stderr\": 0.015936681062628556\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.2543352601156069,\n \"acc_stderr\": 0.02344582627654554,\n\
\ \"acc_norm\": 0.2543352601156069,\n \"acc_norm_stderr\": 0.02344582627654554\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2636871508379888,\n\
\ \"acc_stderr\": 0.014736926383761973,\n \"acc_norm\": 0.2636871508379888,\n\
\ \"acc_norm_stderr\": 0.014736926383761973\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.2875816993464052,\n \"acc_stderr\": 0.02591780611714716,\n\
\ \"acc_norm\": 0.2875816993464052,\n \"acc_norm_stderr\": 0.02591780611714716\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2572347266881029,\n\
\ \"acc_stderr\": 0.024826171289250888,\n \"acc_norm\": 0.2572347266881029,\n\
\ \"acc_norm_stderr\": 0.024826171289250888\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.25617283950617287,\n \"acc_stderr\": 0.024288533637726095,\n\
\ \"acc_norm\": 0.25617283950617287,\n \"acc_norm_stderr\": 0.024288533637726095\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.24822695035460993,\n \"acc_stderr\": 0.025770015644290396,\n \
\ \"acc_norm\": 0.24822695035460993,\n \"acc_norm_stderr\": 0.025770015644290396\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23272490221642764,\n\
\ \"acc_stderr\": 0.010792595553888496,\n \"acc_norm\": 0.23272490221642764,\n\
\ \"acc_norm_stderr\": 0.010792595553888496\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.22426470588235295,\n \"acc_stderr\": 0.02533684856333236,\n\
\ \"acc_norm\": 0.22426470588235295,\n \"acc_norm_stderr\": 0.02533684856333236\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.2679738562091503,\n \"acc_stderr\": 0.017917974069594722,\n \
\ \"acc_norm\": 0.2679738562091503,\n \"acc_norm_stderr\": 0.017917974069594722\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2545454545454545,\n\
\ \"acc_stderr\": 0.04172343038705383,\n \"acc_norm\": 0.2545454545454545,\n\
\ \"acc_norm_stderr\": 0.04172343038705383\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.24897959183673468,\n \"acc_stderr\": 0.02768297952296023,\n\
\ \"acc_norm\": 0.24897959183673468,\n \"acc_norm_stderr\": 0.02768297952296023\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.1890547263681592,\n\
\ \"acc_stderr\": 0.027686913588013024,\n \"acc_norm\": 0.1890547263681592,\n\
\ \"acc_norm_stderr\": 0.027686913588013024\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.26506024096385544,\n\
\ \"acc_stderr\": 0.03436024037944966,\n \"acc_norm\": 0.26506024096385544,\n\
\ \"acc_norm_stderr\": 0.03436024037944966\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.3157894736842105,\n \"acc_stderr\": 0.035650796707083106,\n\
\ \"acc_norm\": 0.3157894736842105,\n \"acc_norm_stderr\": 0.035650796707083106\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2864137086903305,\n\
\ \"mc1_stderr\": 0.015826142439502346,\n \"mc2\": 0.49462441219025927,\n\
\ \"mc2_stderr\": 0.016011015086112988\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5722178374112076,\n \"acc_stderr\": 0.013905134013839957\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \
\ \"acc_stderr\": 0.0022675371022544905\n }\n}\n```"
repo_url: https://huggingface.co/KaeriJenti/Kaori-34b-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|arc:challenge|25_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|gsm8k|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hellaswag|10_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T19-17-38.902154.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-23T19-17-38.902154.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- '**/details_harness|winogrande|5_2023-12-23T19-17-38.902154.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-23T19-17-38.902154.parquet'
- config_name: results
data_files:
- split: 2023_12_23T19_17_38.902154
path:
- results_2023-12-23T19-17-38.902154.parquet
- split: latest
path:
- results_2023-12-23T19-17-38.902154.parquet
---
# Dataset Card for Evaluation run of KaeriJenti/Kaori-34b-v2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [KaeriJenti/Kaori-34b-v2](https://huggingface.co/KaeriJenti/Kaori-34b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, with the split named after the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KaeriJenti__Kaori-34b-v2",
	"harness_winogrande_5",
	split="latest")
```
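Each timestamped split name encodes the run time with `:` and `-` replaced by `_` (e.g. `2023_12_23T19_17_38.902154`). As a minimal sketch of how to pick the most recent run when a repo holds several evals (the helper name `latest_timestamped_split` is illustrative, not part of this card), those names can be parsed with the standard library:

```python
from datetime import datetime

def latest_timestamped_split(split_names):
    """Return the most recent timestamped split name.

    Split names follow the pattern '2023_12_23T19_17_38.902154';
    the 'latest' alias is skipped since it carries no timestamp.
    """
    stamps = [s for s in split_names if s != "latest"]
    return max(stamps, key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"))

# This dataset records a single run, so the one timestamped split
# is also what the "latest" split points to:
print(latest_timestamped_split(["latest", "2023_12_23T19_17_38.902154"]))
# → 2023_12_23T19_17_38.902154
```

For repos with a single run, such as this one, using `split="latest"` directly is simpler; the parsing above only matters when you need to compare several runs.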
## Latest results
These are the [latest results from run 2023-12-23T19:17:38.902154](https://huggingface.co/datasets/open-llm-leaderboard/details_KaeriJenti__Kaori-34b-v2/blob/main/results_2023-12-23T19-17-38.902154.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; each task's results can be found in its own configuration, under the "latest" split):
```python
{
"all": {
"acc": 0.2562435688049368,
"acc_stderr": 0.03087677995486888,
"acc_norm": 0.25622099120034325,
"acc_norm_stderr": 0.03166775316506421,
"mc1": 0.2864137086903305,
"mc1_stderr": 0.015826142439502346,
"mc2": 0.49462441219025927,
"mc2_stderr": 0.016011015086112988
},
"harness|arc:challenge|25": {
"acc": 0.189419795221843,
"acc_stderr": 0.011450705115910769,
"acc_norm": 0.23890784982935154,
"acc_norm_stderr": 0.012461071376316614
},
"harness|hellaswag|10": {
"acc": 0.27394941246763593,
"acc_stderr": 0.004450718673552667,
"acc_norm": 0.2896833300139414,
"acc_norm_stderr": 0.004526883021027624
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.03853254836552003,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.03853254836552003
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.2236842105263158,
"acc_stderr": 0.033911609343436025,
"acc_norm": 0.2236842105263158,
"acc_norm_stderr": 0.033911609343436025
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2188679245283019,
"acc_stderr": 0.025447863825108594,
"acc_norm": 0.2188679245283019,
"acc_norm_stderr": 0.025447863825108594
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.03476590104304136,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.03476590104304136
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.22,
"acc_stderr": 0.0416333199893227,
"acc_norm": 0.22,
"acc_norm_stderr": 0.0416333199893227
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.21965317919075145,
"acc_stderr": 0.031568093627031744,
"acc_norm": 0.21965317919075145,
"acc_norm_stderr": 0.031568093627031744
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.20588235294117646,
"acc_stderr": 0.04023382273617747,
"acc_norm": 0.20588235294117646,
"acc_norm_stderr": 0.04023382273617747
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.30638297872340425,
"acc_stderr": 0.030135906478517563,
"acc_norm": 0.30638297872340425,
"acc_norm_stderr": 0.030135906478517563
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.20175438596491227,
"acc_stderr": 0.037752050135836386,
"acc_norm": 0.20175438596491227,
"acc_norm_stderr": 0.037752050135836386
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.25517241379310346,
"acc_stderr": 0.03632984052707842,
"acc_norm": 0.25517241379310346,
"acc_norm_stderr": 0.03632984052707842
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2671957671957672,
"acc_stderr": 0.02278967314577657,
"acc_norm": 0.2671957671957672,
"acc_norm_stderr": 0.02278967314577657
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.0404061017820884,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.0404061017820884
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3161290322580645,
"acc_stderr": 0.02645087448904277,
"acc_norm": 0.3161290322580645,
"acc_norm_stderr": 0.02645087448904277
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.19704433497536947,
"acc_stderr": 0.027986724666736205,
"acc_norm": 0.19704433497536947,
"acc_norm_stderr": 0.027986724666736205
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.24,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.24,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.23030303030303031,
"acc_stderr": 0.032876667586034886,
"acc_norm": 0.23030303030303031,
"acc_norm_stderr": 0.032876667586034886
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.35353535353535354,
"acc_stderr": 0.03406086723547153,
"acc_norm": 0.35353535353535354,
"acc_norm_stderr": 0.03406086723547153
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.37823834196891193,
"acc_stderr": 0.03499807276193339,
"acc_norm": 0.37823834196891193,
"acc_norm_stderr": 0.03499807276193339
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2128205128205128,
"acc_stderr": 0.020752423722128016,
"acc_norm": 0.2128205128205128,
"acc_norm_stderr": 0.020752423722128016
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.02803792996911499,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.02803792996911499
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21428571428571427,
"acc_stderr": 0.026653531596715477,
"acc_norm": 0.21428571428571427,
"acc_norm_stderr": 0.026653531596715477
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2847682119205298,
"acc_stderr": 0.03684881521389023,
"acc_norm": 0.2847682119205298,
"acc_norm_stderr": 0.03684881521389023
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.3155963302752294,
"acc_stderr": 0.019926117513869666,
"acc_norm": 0.3155963302752294,
"acc_norm_stderr": 0.019926117513869666
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.0291575221846056,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.0291575221846056
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.03228210387037892,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.03228210387037892
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.22362869198312235,
"acc_stderr": 0.027123298205229972,
"acc_norm": 0.22362869198312235,
"acc_norm_stderr": 0.027123298205229972
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.24663677130044842,
"acc_stderr": 0.028930413120910894,
"acc_norm": 0.24663677130044842,
"acc_norm_stderr": 0.028930413120910894
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2366412213740458,
"acc_stderr": 0.03727673575596918,
"acc_norm": 0.2366412213740458,
"acc_norm_stderr": 0.03727673575596918
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.3140495867768595,
"acc_stderr": 0.042369647530410184,
"acc_norm": 0.3140495867768595,
"acc_norm_stderr": 0.042369647530410184
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2883435582822086,
"acc_stderr": 0.03559039531617342,
"acc_norm": 0.2883435582822086,
"acc_norm_stderr": 0.03559039531617342
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.23214285714285715,
"acc_stderr": 0.04007341809755805,
"acc_norm": 0.23214285714285715,
"acc_norm_stderr": 0.04007341809755805
},
"harness|hendrycksTest-management|5": {
"acc": 0.27184466019417475,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.27184466019417475,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.19230769230769232,
"acc_stderr": 0.025819233256483706,
"acc_norm": 0.19230769230769232,
"acc_norm_stderr": 0.025819233256483706
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.27330779054916987,
"acc_stderr": 0.015936681062628556,
"acc_norm": 0.27330779054916987,
"acc_norm_stderr": 0.015936681062628556
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2543352601156069,
"acc_stderr": 0.02344582627654554,
"acc_norm": 0.2543352601156069,
"acc_norm_stderr": 0.02344582627654554
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2636871508379888,
"acc_stderr": 0.014736926383761973,
"acc_norm": 0.2636871508379888,
"acc_norm_stderr": 0.014736926383761973
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2875816993464052,
"acc_stderr": 0.02591780611714716,
"acc_norm": 0.2875816993464052,
"acc_norm_stderr": 0.02591780611714716
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2572347266881029,
"acc_stderr": 0.024826171289250888,
"acc_norm": 0.2572347266881029,
"acc_norm_stderr": 0.024826171289250888
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.25617283950617287,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.25617283950617287,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.24822695035460993,
"acc_stderr": 0.025770015644290396,
"acc_norm": 0.24822695035460993,
"acc_norm_stderr": 0.025770015644290396
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.23272490221642764,
"acc_stderr": 0.010792595553888496,
"acc_norm": 0.23272490221642764,
"acc_norm_stderr": 0.010792595553888496
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.22426470588235295,
"acc_stderr": 0.02533684856333236,
"acc_norm": 0.22426470588235295,
"acc_norm_stderr": 0.02533684856333236
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2679738562091503,
"acc_stderr": 0.017917974069594722,
"acc_norm": 0.2679738562091503,
"acc_norm_stderr": 0.017917974069594722
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.2545454545454545,
"acc_stderr": 0.04172343038705383,
"acc_norm": 0.2545454545454545,
"acc_norm_stderr": 0.04172343038705383
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.24897959183673468,
"acc_stderr": 0.02768297952296023,
"acc_norm": 0.24897959183673468,
"acc_norm_stderr": 0.02768297952296023
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.1890547263681592,
"acc_stderr": 0.027686913588013024,
"acc_norm": 0.1890547263681592,
"acc_norm_stderr": 0.027686913588013024
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-virology|5": {
"acc": 0.26506024096385544,
"acc_stderr": 0.03436024037944966,
"acc_norm": 0.26506024096385544,
"acc_norm_stderr": 0.03436024037944966
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.3157894736842105,
"acc_stderr": 0.035650796707083106,
"acc_norm": 0.3157894736842105,
"acc_norm_stderr": 0.035650796707083106
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2864137086903305,
"mc1_stderr": 0.015826142439502346,
"mc2": 0.49462441219025927,
"mc2_stderr": 0.016011015086112988
},
"harness|winogrande|5": {
"acc": 0.5722178374112076,
"acc_stderr": 0.013905134013839957
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.0022675371022544905
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
dlibf/glaive-code-assistant | ---
configs:
- config_name: default
data_files:
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_sft
num_bytes: 210616334.29604948
num_examples: 136009
- name: test_sft
num_bytes: 154854.70395051024
num_examples: 100
download_size: 102642844
dataset_size: 210771189.0
---
# Dataset Card for "glaive-code-assistant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lmqg/qag_dequad | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: de
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_dequad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_dequad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on DEQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is intended for training models on question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).
### Languages
German (de)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "51._Bundesstaat === District of Columbia === Der District of Columbia gilt neben Puerto Rico als einer der aussichtsreichen Kandidaten für die Anerkennung als Bundesstaat in naher Zukunft. Die Einwohner des Bundesdistrikts gelten als größte Befürworter dieser Entscheidung, die jedoch einer Verfassungsänderung bedürfte. Die Anhänger nutzen das Motto des Unabhängigkeitskrieges in abgewandelter Form – „Taxation without representation“ –, um auf die mangelnde Repräsentation im Kongress hinzuweisen. Das Motto wird heute auf die Nummernschilder neu zugelassener Autos gedruckt (wobei der Fahrer alternativ die Internet-Adresse des D.C. wählen kann). Bill Clintons Präsidenten-Limousine hatte ein solches Nummernschild kurz vor Ende seiner Amtszeit. George W. Bush ließ diese Nummernschilder nach seinem Amtsantritt wieder entfernen. Die kleine ''D.C. Statehood Party'' vertrat diese Ansicht und vereinte sich mit den Grünen zur ''D.C. Statehood Green Party''. 1978 kamen sie ihrem Ziel am nächsten, als der Kongress das ''District of Columbia Voting Rights Amendment'' verabschiedete. Zwei Jahre später beriefen lokale Bürger mit einer Initiative eine konstitutionelle Versammlung für einen neuen Bundesstaat. 1982 ratifizierten die Wähler die Verfassung des Bundesstaates, der ''New Columbia'' heißen sollte. 1985 wurde der Plan jedoch gestoppt, als das Amendment scheiterte, weil es nicht von genug Staaten innerhalb von sieben Jahren ratifiziert wurde. Eine andere Möglichkeit wäre die Rückgliederung des Gebietes in den Bundesstaat Maryland. Damit würden die Einwohner des D.C. in den Genuss der Vorteile kommen, in einem Bundesstaat zu leben, ohne dass ein 51. Bundesstaat geschaffen werden müsste. Am 26. Juni 2020 stimmte das US-Repräsentantenhaus mit 232 zu 180 Stimmen dafür, den District of Columbia als 51. Bundesstaat anzuerkennen. Ein positives Votum des durch die Republikaner dominierten US-Senats gilt als unwahrscheinlich. 
Außerdem kündigte Präsident Trump sein Veto gegen ein solches, potenzielles Vorhaben an. Dennoch war es das erste positive Votum einer der beiden Kammern des US-Kongresses für eine Anerkennung als Bundesstaat.",
"questions": [ "Was ist das Motto der Befürworter der Anerkennung von District of Columbia als neuer US-Bundesstaat?", "Warum hat die Anerkennung von District of Columbia zu einem neuen US-Bundesstaat 1985 nicht geklappt?", "Was war der potenzielle Name für den neuen US-Bundesstaat anstelle von District of Columbia?", "Aus welchen ehemaligen Parteien bestand die D.C. Statehood Green Party?" ],
"answers": [ "das Motto des Unabhängigkeitskrieges in abgewandelter Form – „Taxation without representation“ ", "weil es nicht von genug Staaten innerhalb von sieben Jahren ratifiziert wurde", " ''New Columbia'' ", "Die kleine ''D.C. Statehood Party'' vertrat diese Ansicht und vereinte sich mit den Grünen" ],
"questions_answers": "question: Was ist das Motto der Befürworter der Anerkennung von District of Columbia als neuer US-Bundesstaat?, answer: das Motto des Unabhängigkeitskrieges in abgewandelter Form – „Taxation without representation“ | question: Warum hat die Anerkennung von District of Columbia zu einem neuen US-Bundesstaat 1985 nicht geklappt?, answer: weil es nicht von genug Staaten innerhalb von sieben Jahren ratifiziert wurde | question: Was war der potenzielle Name für den neuen US-Bundesstaat anstelle von District of Columbia?, answer: ''New Columbia'' | question: Aus welchen ehemaligen Parteien bestand die D.C. Statehood Green Party?, answer: Die kleine ''D.C. Statehood Party'' vertrat diese Ansicht und vereinte sich mit den Grünen"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
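The serialized `questions_answers` field can be split back into (question, answer) pairs. A minimal Python sketch, assuming the `" | "` pair separator and the `question:`/`, answer:` prefixes shown in the example above:

```python
def parse_questions_answers(serialized: str) -> list[tuple[str, str]]:
    """Split the flat `questions_answers` string into (question, answer) pairs.

    Pairs are separated by " | "; within a pair, the question and answer
    are prefixed with "question:" and ", answer:" respectively.
    """
    pairs = []
    for chunk in serialized.split(" | "):
        # partition keeps everything before the first ", answer:" as the question
        question, _, answer = chunk.partition(", answer:")
        pairs.append((question.removeprefix("question:").strip(), answer.strip()))
    return pairs


example = ("question: Wie heißt die Hauptstadt?, answer: Berlin | "
           "question: Wie viele Bundesländer gibt es?, answer: 16")
print(parse_questions_answers(example))
```

Note this simple splitter assumes the delimiters never occur inside a question or answer, which holds for the examples shown but is not guaranteed in general.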
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|2489 | 1476 | 474 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
Nolan1206/WhisperSmallTest20000 | ---
dataset_info:
features:
- name: audio
sequence: float32
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 3882577961
num_examples: 18450
- name: test
num_bytes: 40879128
num_examples: 377
download_size: 3938134546
dataset_size: 3923457089
---
# Dataset Card for "WhisperSmallTest20000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hamtech/tst | ---
license: pddl
language:
- en
pretty_name: tst
size_categories:
- 100B<n<1T
--- |
israelfx/brunoleonardo | ---
license: openrail
---
|
Multimodal-Fatima/VizWiz_train | ---
dataset_info:
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: filename
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_type
dtype: string
- name: answerable
dtype: int32
- name: id_image
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
splits:
- name: train
num_bytes: 9906518637.0
num_examples: 20523
download_size: 9880125036
dataset_size: 9906518637.0
---
# Dataset Card for "VizWiz_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arieg/bw_spec_cls_4_14_s_200 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1197'
'1': '1270'
'2': '1276'
'3': '1277'
splits:
- name: train
num_bytes: 43731623.0
num_examples: 800
- name: test
num_bytes: 1102972.0
num_examples: 20
download_size: 37991761
dataset_size: 44834595.0
---
# Dataset Card for "bw_spec_cls_4_14_s_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maghwa/OpenHermes-2-AR-10K-41-850k-860k | ---
dataset_info:
features:
- name: topic
dtype: 'null'
- name: conversations
dtype: string
- name: source
dtype: string
- name: category
dtype: 'null'
- name: title
dtype: 'null'
- name: idx
dtype: 'null'
- name: language
dtype: 'null'
- name: custom_instruction
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: model_name
dtype: 'null'
- name: model
dtype: 'null'
- name: hash
dtype: 'null'
- name: views
dtype: float64
- name: id
dtype: 'null'
- name: system_prompt
dtype: 'null'
- name: skip_prompt_formatting
dtype: 'null'
splits:
- name: train
num_bytes: 26796849
num_examples: 10001
download_size: 11296502
dataset_size: 26796849
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
one-sec-cv12/chunk_228 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 22202359632.125
num_examples: 231159
download_size: 18820040745
dataset_size: 22202359632.125
---
# Dataset Card for "chunk_228"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
abhinand/tamil-alpaca | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 287556653
num_examples: 51876
download_size: 0
dataset_size: 287556653
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: gpl-3.0
task_categories:
- text-generation
language:
- ta
pretty_name: tamil-alpaca
size_categories:
- 10K<n<100K
---
# Dataset Card for "tamil-alpaca"
This repository includes a Tamil-translated version of the [Alpaca dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned).
This dataset is part of the release of Tamil LLaMA family of models – an important step in advancing LLMs for the Tamil language. To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP) ]() that outlines our journey and the model's potential impact.
**GitHub Repository:** [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama)
## Models trained using this dataset
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-instruct-v0.1) |
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama datasets in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
pinzhenchen/alpaca-cleaned-zh | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
- question-answering
language:
- zh
tags:
- instruction tuning
size_categories:
- 10K<n<100K
---
### Data Description
This HF data repository contains the Chinese Alpaca dataset used in our study of monolingual versus multilingual instruction tuning.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Creation
* Machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) into Chinese.
#### Usage
* This data is intended to be used for Chinese instruction tuning.
* The dataset has roughly 52K instances in the JSON format.
* Each instance has an instruction, an output, and an optional input. An example is shown below:
```
{
"instruction": "三种主要颜色是什么?",
"input": "",
"output": "三种主要颜色是红色, 蓝色和黄色。 这些颜色被称为主要颜色, 因为它们不能通过混合其他颜色来创建, 所有其他颜色都可以通过以各种比例组合来创建。 在添加色系中, 用于光线的主要颜色是红色, 绿色和蓝色 (R GB) 。"
}
```
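To turn an instance into a training prompt, a common approach is the standard Alpaca template, branching on whether the optional `input` field is present. A minimal sketch (the template wording is the usual Alpaca phrasing, an assumption here, not part of this dataset):

```python
def build_prompt(example: dict) -> str:
    """Format an instruction/input/output instance into an Alpaca-style prompt."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    # No-input variant: drop the Input section entirely
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )


print(build_prompt({"instruction": "三种主要颜色是什么?", "input": "", "output": "红色, 蓝色和黄色。"}))
```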
#### Known issues
* The machine translation process might have corrupted data containing code, cross-lingual tasks, grammatical error correction tasks, etc.
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
``` |
KSmart/chinese_traditional_chengyu | ---
license: apache-2.0
---
|
marup/PhoebeTonkinRVC | ---
license: openrail
---
|
davidberenstein1957/ultra-feedback-dutch-cleaned-hq-with-responses | ---
dataset_info:
features:
- name: input
dtype: string
- name: generations
sequence: string
splits:
- name: train
num_bytes: 55243593
num_examples: 21577
- name: test
num_bytes: 2917623
num_examples: 1136
download_size: 34331801
dataset_size: 58161216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
IdoAi/FypDatasetWithSplitsRgb | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1997747885.8
num_examples: 10700
- name: validation
num_bytes: 204033799.13
num_examples: 1094
- name: test
num_bytes: 68700437.0
num_examples: 365
download_size: 2263896820
dataset_size: 2270482121.93
---
# Dataset Card for "FypDatasetWithSplitsRgb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pesc101/spyder-ide-respository-raw-chunks | ---
dataset_info:
features:
- name: code
dtype: string
- name: meta_data.file_name
dtype: string
- name: meta_data.module
dtype: string
- name: meta_data.contains_class
dtype: bool
- name: meta_data.contains_function
dtype: bool
- name: meta_data.file_imports
sequence: string
- name: meta_data.start_line
dtype: int64
- name: meta_data.end_line
dtype: int64
splits:
- name: train
num_bytes: 17221590
num_examples: 7943
download_size: 3531423
dataset_size: 17221590
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlo0ollm/cj_ko_words | ---
license: openrail
---
|
SyedAunZaidi/cv-corpus-16.0-ur | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: client_id
dtype: string
- name: path
dtype: string
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accents
dtype: string
- name: variant
dtype: float64
- name: locale
dtype: string
- name: segment
dtype: float64
- name: config
dtype: string
splits:
- name: train
num_bytes: 134956314.16
num_examples: 5368
- name: test
num_bytes: 101379458.192
num_examples: 4014
- name: validation
num_bytes: 101379458.192
num_examples: 4014
download_size: 330546792
dataset_size: 337715230.544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
boapps/alpaca-cleaned-gemini-hun-ratings | ---
license: apache-2.0
language:
- hu
---
This dataset was created by running an LLM-assisted rating pass over [Bazsalanszky/alpaca-cleaned-gemini-hun](https://huggingface.co/datasets/Bazsalanszky/alpaca-cleaned-gemini-hun).
The rating model was gemini-pro (the free tier). The code used is a modification of AlpaGasus: https://github.com/boapps/alpagasus-hu
vitorsonic/emi | ---
license: openrail
---
|
dim/kinopoisk_raw | ---
dataset_info:
features:
- name: content
dtype: string
- name: title
dtype: string
- name: grade3
dtype: string
- name: movie_name
dtype: string
- name: part
dtype: string
- name: review_id
dtype: string
- name: author
dtype: string
- name: date
dtype: string
- name: grade10
dtype: string
- name: Idx
dtype: int32
splits:
- name: train
num_bytes: 138684842
num_examples: 36591
download_size: 70387577
dataset_size: 138684842
---
# Dataset Card for "kinopoisk_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LFBMS/class_dataset_real2_donut | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_h
'1': bilanz_v
'2': guv
'3': kontennachweis_bilanz
'4': kontennachweis_guv
'5': other
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 340313532.0
num_examples: 1117
- name: test
num_bytes: 87116926.0
num_examples: 280
download_size: 400625159
dataset_size: 427430458.0
---
# Dataset Card for "class_dataset_real2_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lewtun/bulk-superb-s3p-superb-49606 | ---
benchmark: superb
task: asr
type: prediction
---
# Batch job
model_id: lewtun/superb-s3prl-osanseviero__hubert_base-asr-cbcd177a
dataset_name: superb
dataset_config: asr
dataset_split: test
dataset_column: file |
dipudl/hc3-and-gpt-wiki-intro-with-perplexity-and-128-window | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: source
dtype: string
- name: label
dtype: int64
- name: perplexity
dtype: float64
splits:
- name: train
num_bytes: 396594042.354058
num_examples: 330344
- name: test
num_bytes: 20925699.0
num_examples: 17387
download_size: 251966356
dataset_size: 417519741.354058
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Vinomaly/1k-sample-comex | ---
task_categories:
- feature-extraction
- text-generation
language:
- es
size_categories:
- 1K<n<10K
--- |
Elatar/Elatar | ---
dataset_info:
features:
- name: func_code_string
dtype: string
- name: func_documentation_string
dtype: string
splits:
- name: train
num_bytes: 41208091
num_examples: 48791
- name: test
num_bytes: 1920701
num_examples: 2279
- name: validation
num_bytes: 1711210
num_examples: 2209
download_size: 16729518
dataset_size: 44840002
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
vietgpt/arxiv | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: timestamp
dtype: timestamp[s]
- name: yymm
dtype: string
- name: arxiv_id
dtype: string
- name: language
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 89337072771
num_examples: 1558306
download_size: 40941434576
dataset_size: 89337072771
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
somosnlp/Reglamento_aeronautico_Colombiano_QA_RAC1_FULL | ---
dataset_info:
features:
- name: pagina
dtype: int64
- name: id
dtype: int64
- name: pregunta
dtype: string
- name: respuesta
dtype: string
splits:
- name: train
num_bytes: 488315
num_examples: 2205
download_size: 155260
dataset_size: 488315
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- question-answering
language:
- es
tags:
- legal
size_categories:
- 1K<n<10K
---
## Colombian Aeronautical Regulations (Reglamento Aeronáutico Colombiano) QA

## General description
This dataset was built from the Colombian Aeronautical Regulations (Reglamento Aeronáutico Colombiano).
Starting from the original text of the regulations, artificial intelligence was used to extract relevant information and create a set of questions and answers.
This process turns the regulations, which can be long and complex, into a more accessible and understandable format, making it easier for people interested in Colombian aviation (sector professionals, students, or enthusiasts)
to learn and review the rules. Using AI for this purpose not only makes the generation of educational material more efficient,
it also helps ensure that the questions and answers are accurate and aligned with the content and spirit of the regulations.
## Objectives
The main goal of this project is to create a high-quality dataset from the Colombian Aeronautical Regulations through a two-stage process,
combining artificial intelligence and human review. In the first stage, an AI extracts relevant data from the regulations and generates a preliminary set of questions and answers.
This automated approach efficiently covers a broad span of the material, identifying key topics and generating pertinent questions that reflect the content and structure of the regulations.
In the second stage, the data is reviewed by human annotators.
This team of reviewers validates the answers generated by the AI, making corrections and adjustments as needed to ensure the accuracy and relevance of the content.
This validation step is crucial to guarantee the quality of the dataset, since it incorporates human understanding and precise interpretation of the rules, something that even an advanced AI may not achieve perfectly.
The final, validated and refined dataset is used to train a smaller, more task-specific AI model.
This model is designed to carry out concrete tasks related to the Colombian Aeronautical Regulations, possibly including automated queries, assistance in interpreting the rules,
and support in training and evaluating personnel in the aeronautical sector. Training on high-quality data ensures that the model is effective, reliable, and precise in its designated tasks, reflecting the commitment to excellence and safety that characterizes the aeronautical sector.
## Model to fine-tune
The base model was previously trained on the 'OpenAssistant/oasst2' dataset, which covers more than 15 languages, after a data filtering step.
```
https://huggingface.co/NickyNicky/gemma-2b-it_oasst2_all_chatML_Unsloth_V1
```
## Basic template example
This is a sample template for training gemma-2b.
Its purpose is to teach the model to generalize over the Colombian aeronautical regulations.
```
<bos><start_of_turn>system
You are a helpful AI assistant.
Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>
<start_of_turn>user
¿Qué aspectos se tratan en el CAPÍTULO II del RAC 1?<end_of_turn>
<start_of_turn>model
En el CAPÍTULO II del RAC 1 se tratan las expresiones de uso aeronáutico y su significado.<end_of_turn>
```
## Example as a Python variable
```py
# Strip the interrogation marks so incomplete ones are normalized before re-adding them.
question = "Qué aspectos se tratan en el CAPÍTULO II del RAC 1?".replace("¿","").replace("?","")
text = f"""<bos><start_of_turn>system
You are a helpful AI assistant.
Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>
<start_of_turn>user
¿{question}?<end_of_turn>
<start_of_turn>model
"""
```
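The snippet above can be wrapped in a small helper. This is a minimal sketch under the assumption that every question should end up with exactly one matched pair of interrogation marks; the function name `build_prompt` is illustrative, not part of the dataset:

```python
def build_prompt(question: str) -> str:
    """Build a gemma-2b chat prompt with normalized interrogation marks."""
    # Strip any existing marks, then re-add a single matched pair.
    question = question.replace("¿", "").replace("?", "").strip()
    return (
        "<bos><start_of_turn>system\n"
        "You are a helpful AI assistant.\n"
        "Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>\n"
        "<start_of_turn>user\n"
        f"¿{question}?<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt("Qué aspectos se tratan en el CAPÍTULO II del RAC 1?")
```

This way the same normalization is applied whether the raw question arrives with no marks, only a trailing `?`, or a full `¿...?` pair.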
## Possible model names
```
name 1: AeroReg_Col_AI
name 2: AeroReg_Cop_AI
name 3: AeroReg_AI
```
## Training code
```
En Kamino...
```

|
xu3kev/BIRD-SQL-data | ---
dataset_info:
features:
- name: db_id
dtype: string
- name: question
dtype: string
- name: evidence
dtype: string
- name: SQL
dtype: string
- name: schema
dtype: string
splits:
- name: train
num_bytes: 1039491
num_examples: 200
download_size: 98914
dataset_size: 1039491
---
# Dataset Card for "BIRD-SQL-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
health360/Healix-V1 | ---
license: odc-by
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 427613608
num_examples: 796239
download_size: 213902701
dataset_size: 427613608
language:
- en
tags:
- biology
- medical
size_categories:
- 100K<n<1M
---
# Healix-V1 Dataset
## Description
Healix-V1 is a rich and diverse dataset consisting of 809k question-answer pairs in the medical domain. It has been meticulously curated to fuel research initiatives in medical language understanding, medical dialogue systems, and knowledge extraction. Healix-V1 serves as a valuable resource for developing and improving machine learning models for healthcare applications, enabling them to understand and generate human-like responses in a medical context.

The dataset follows the format used in ALPACA model fine-tuning:
```plaintext
### Input:
Question
### Response:
Answer
```
## Data Sources
The dataset has been compiled from a variety of valuable and authoritative sources, each contributing different kinds of medical question-answer pairs:
1. **Medical books**: 426,241 QA pairs - These pairs are derived from an array of reputable medical books. The questions were extracted and provided as prompts to GPT-3.5, which in turn generated the corresponding answers.
2. **[jianghc/medical_chatbot](URL)**: 46,867 QA pairs - This is a dataset derived from a medical chatbot project.
3. **The Medical Question and Answering dataset(MQuAD)**: 23,802 QA pairs - MQuAD is a medical dataset specifically designed for the task of question answering.
4. **PubMed**: 1,000 QA pairs - These are pairs extracted from the extensive library of medical articles on PubMed.
5. **GenMedGPT**: 5,000 QA pairs - Derived from the GenMedGPT project aimed at generating medical language.
6. **iCliniq**: 7,321 QA pairs - iCliniq is a platform where users ask health-related questions which are answered by certified doctors.
7. **HealthCareMagic**: 100,000 QA pairs - HealthCareMagic is an interactive health platform with a vast amount of user-generated medical QAs.
8. **medical_meadow_wikidoc**: 10,000 QA pairs - These pairs are extracted from WikiDoc, a free medical textbook.
9. **medical_meadow_wikidoc_medical_flashcards**: 33,955 QA pairs - Medical flashcards provide concise medical information in a Q&A format.
10. **MedQA-USMLE-4-options**: 10,178 QA pairs - These are QAs similar to the format of the USMLE exam for medical licensing in the U.S.
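As a minimal sketch, the ALPACA-style layout described above can be rendered with a one-line helper; the function name `format_alpaca` and the sample QA pair are illustrative assumptions, not part of the dataset:

```python
def format_alpaca(question: str, answer: str) -> str:
    """Render a QA pair in the ALPACA-style layout used by Healix-V1."""
    return f"### Input:\n{question}\n### Response:\n{answer}"

# Hypothetical example pair, formatted for fine-tuning.
sample = format_alpaca(
    "What is hypertension?",
    "Hypertension is persistently elevated arterial blood pressure.",
)
```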
## Potential Applications
Healix-V1 can serve a multitude of purposes such as:
- Training AI models for medical chatbots
- Developing advanced search engines for medical databases
- Creating tutoring systems for medical students
- Enhancing automated patient assistance systems
- Helping in developing systems for medical examination preparation
## Data Length Distribution
- (0.0, 256.0]: 96.724181%
- (256.0, 512.0]: 2.903792%
- (512.0, 768.0]: 0.299476%
- (768.0, 1024.0]: 0.050675%
- (1024.0, 2048.0]: 0.018910%
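A distribution like the one above can be recomputed from the `text` column with pandas. This sketch uses synthetic lengths purely for illustration; the bucket edges are taken from the list above:

```python
import pandas as pd

# Hypothetical example lengths; in practice these would be computed
# from the dataset's `text` column, e.g. texts.str.len().
lengths = pd.Series([100, 200, 300, 600, 900, 1500, 50, 80])

# Bucket edges matching the distribution reported above.
bins = [0, 256, 512, 768, 1024, 2048]
distribution = pd.cut(lengths, bins=bins).value_counts(normalize=True).sort_index()
print((distribution * 100).round(6))
```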
## Metadata
- **License:** ODC-BY
- **Language:** English
- **Tags:** Biology, Medical
- **Size Categories:** 100K<n<1M
## Dataset Info
- **Features:**
- name: text
- dtype: string
- **Splits:**
- name: train
- num_bytes: 419605911
- num_examples: 798902
- **Download Size:** 209261302 bytes
- **Dataset Size:** 419605911 bytes |
liuyanchen1015/MULTI_VALUE_mrpc_chaining_main_verbs | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 7187
num_examples: 28
- name: train
num_bytes: 12715
num_examples: 50
- name: validation
num_bytes: 1695
num_examples: 6
download_size: 25439
dataset_size: 21597
---
# Dataset Card for "MULTI_VALUE_mrpc_chaining_main_verbs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlixCF/sample | ---
license: cc
---
|
distilled-from-one-sec-cv12/chunk_222 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1292622500
num_examples: 251875
download_size: 1321159341
dataset_size: 1292622500
---
# Dataset Card for "chunk_222"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
noxneural/synthetic_beard_styles | ---
license: cc-by-4.0
---
|
zolak/twitter_dataset_79_1713214792 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 1437799
num_examples: 3478
download_size: 723323
dataset_size: 1437799
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
scirik/forecasts | ---
license: unknown
---
|
d0rj/oasst1_pairwise_rlhf_reward-ru | ---
dataset_info:
features:
- name: lang
dtype: string
- name: parent_id
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 67126933.0
num_examples: 17966
- name: validation
num_bytes: 3526794.0
num_examples: 952
download_size: 32509550
dataset_size: 70653727.0
---
# Dataset Card for "oasst1_pairwise_rlhf_reward-ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yangyz1230/H3 | ---
dataset_info:
features:
- name: name
dtype: string
- name: sequence
dtype: string
- name: chrom
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: strand
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 307324
num_examples: 545
- name: test
num_bytes: 34159
num_examples: 61
download_size: 171279
dataset_size: 341483
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
SAMControlNet/sam-controlnet-sprint-larg-v1 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: conditioning_image
dtype: image
- name: overlaid
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 915499786.747
num_examples: 2047
download_size: 920626486
dataset_size: 915499786.747
---
# Dataset Card for "sam-controlnet-sprint-larg-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AV3RT/DATASETS | ---
license: openrail
--- |
longhoang06/Vi-GSM8K | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 5450234
num_examples: 8792
download_size: 2753130
dataset_size: 5450234
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Vi-GSM8K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/speeddating | ---
language:
- en
tags:
- speeddating
- tabular_classification
- binary_classification
pretty_name: Speed dating
size_categories:
- 1K<n<10K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- dating
---
# Speed dating
The [Speed dating dataset](https://www.openml.org/search?type=data&sort=nr_of_likes&status=active&id=40536) from OpenML.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| dating | Binary classification | Will the two date? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/speeddating")["train"]
```
# Features
|**Features** |**Type** |
|---------------------------------------------------|---------|
|`is_dater_male` |`int8` |
|`dater_age` |`int8` |
|`dated_age` |`int8` |
|`age_difference` |`int8` |
|`dater_race` |`string` |
|`dated_race` |`string` |
|`are_same_race` |`int8` |
|`same_race_importance_for_dater` |`float64`|
|`same_religion_importance_for_dater` |`float64`|
|`attractiveness_importance_for_dated` |`float64`|
|`sincerity_importance_for_dated` |`float64`|
|`intelligence_importance_for_dated` |`float64`|
|`humor_importance_for_dated` |`float64`|
|`ambition_importance_for_dated` |`float64`|
|`shared_interests_importance_for_dated` |`float64`|
|`attractiveness_score_of_dater_from_dated` |`float64`|
|`sincerity_score_of_dater_from_dated` |`float64`|
|`intelligence_score_of_dater_from_dated` |`float64`|
|`humor_score_of_dater_from_dated` |`float64`|
|`ambition_score_of_dater_from_dated` |`float64`|
|`shared_interests_score_of_dater_from_dated` |`float64`|
|`attractiveness_importance_for_dater` |`float64`|
|`sincerity_importance_for_dater` |`float64`|
|`intelligence_importance_for_dater` |`float64`|
|`humor_importance_for_dater` |`float64`|
|`ambition_importance_for_dater` |`float64`|
|`shared_interests_importance_for_dater` |`float64`|
|`self_reported_attractiveness_of_dater` |`float64`|
|`self_reported_sincerity_of_dater` |`float64`|
|`self_reported_intelligence_of_dater` |`float64`|
|`self_reported_humor_of_dater` |`float64`|
|`self_reported_ambition_of_dater` |`float64`|
|`reported_attractiveness_of_dated_from_dater` |`float64`|
|`reported_sincerity_of_dated_from_dater` |`float64`|
|`reported_intelligence_of_dated_from_dater` |`float64`|
|`reported_humor_of_dated_from_dater` |`float64`|
|`reported_ambition_of_dated_from_dater` |`float64`|
|`reported_shared_interests_of_dated_from_dater` |`float64`|
|`dater_interest_in_sports` |`float64`|
|`dater_interest_in_tvsports` |`float64`|
|`dater_interest_in_exercise` |`float64`|
|`dater_interest_in_dining` |`float64`|
|`dater_interest_in_museums` |`float64`|
|`dater_interest_in_art` |`float64`|
|`dater_interest_in_hiking` |`float64`|
|`dater_interest_in_gaming` |`float64`|
|`dater_interest_in_clubbing` |`float64`|
|`dater_interest_in_reading` |`float64`|
|`dater_interest_in_tv` |`float64`|
|`dater_interest_in_theater` |`float64`|
|`dater_interest_in_movies` |`float64`|
|`dater_interest_in_concerts` |`float64`|
|`dater_interest_in_music` |`float64`|
|`dater_interest_in_shopping` |`float64`|
|`dater_interest_in_yoga` |`float64`|
|`interests_correlation` |`float64`|
|`expected_satisfaction_of_dater` |`float64`|
|`expected_number_of_likes_of_dater_from_20_people` |`int8` |
|`expected_number_of_dates_for_dater` |`int8` |
|`dater_liked_dated` |`float64`|
|`probability_dated_wants_to_date` |`float64`|
|`already_met_before` |`int8` |
|`dater_wants_to_date` |`int8` |
|`dated_wants_to_date` |`int8` |
|
open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ | ---
pretty_name: Evaluation run of TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-07T17:01:57.084059](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ_public/blob/main/results_2023-11-07T17-01-57.084059.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.008284395973154363,\n\
\ \"em_stderr\": 0.0009282472025612514,\n \"f1\": 0.0820406879194631,\n\
\ \"f1_stderr\": 0.0018086518070639704,\n \"acc\": 0.40702937397863653,\n\
\ \"acc_stderr\": 0.009614901402107493\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.008284395973154363,\n \"em_stderr\": 0.0009282472025612514,\n\
\ \"f1\": 0.0820406879194631,\n \"f1_stderr\": 0.0018086518070639704\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06899166034874905,\n \
\ \"acc_stderr\": 0.006980995834838566\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.745067087608524,\n \"acc_stderr\": 0.012248806969376422\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_11_05T09_19_09.913548
path:
- '**/details_harness|drop|3_2023-11-05T09-19-09.913548.parquet'
- split: 2023_11_07T17_01_57.084059
path:
- '**/details_harness|drop|3_2023-11-07T17-01-57.084059.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-07T17-01-57.084059.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_05T09_19_09.913548
path:
- '**/details_harness|gsm8k|5_2023-11-05T09-19-09.913548.parquet'
- split: 2023_11_07T17_01_57.084059
path:
- '**/details_harness|gsm8k|5_2023-11-07T17-01-57.084059.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-07T17-01-57.084059.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_05T09_19_09.913548
path:
- '**/details_harness|winogrande|5_2023-11-05T09-19-09.913548.parquet'
- split: 2023_11_07T17_01_57.084059
path:
- '**/details_harness|winogrande|5_2023-11-07T17-01-57.084059.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-07T17-01-57.084059.parquet'
- config_name: results
data_files:
- split: 2023_11_05T09_19_09.913548
path:
- results_2023-11-05T09-19-09.913548.parquet
- split: 2023_11_07T17_01_57.084059
path:
- results_2023-11-07T17-01-57.084059.parquet
- split: latest
path:
- results_2023-11-07T17-01-57.084059.parquet
---
# Dataset Card for Evaluation run of TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ](https://huggingface.co/TheBloke/chronos-wizardlm-uc-scot-st-13B-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-07T17:01:57.084059](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__chronos-wizardlm-uc-scot-st-13B-GPTQ_public/blob/main/results_2023-11-07T17-01-57.084059.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.008284395973154363,
"em_stderr": 0.0009282472025612514,
"f1": 0.0820406879194631,
"f1_stderr": 0.0018086518070639704,
"acc": 0.40702937397863653,
"acc_stderr": 0.009614901402107493
},
"harness|drop|3": {
"em": 0.008284395973154363,
"em_stderr": 0.0009282472025612514,
"f1": 0.0820406879194631,
"f1_stderr": 0.0018086518070639704
},
"harness|gsm8k|5": {
"acc": 0.06899166034874905,
"acc_stderr": 0.006980995834838566
},
"harness|winogrande|5": {
"acc": 0.745067087608524,
"acc_stderr": 0.012248806969376422
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
shidowake/cosmopedia-japanese-subset_from_aixsatoshi_filtered-sharegpt-format-no-system-prompt_split_4 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 19834076.0
num_examples: 2495
download_size: 11956113
dataset_size: 19834076.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955855 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-350m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
Rakshit122/1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 46270
num_examples: 226
download_size: 16707
dataset_size: 46270
---
# Dataset Card for "1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChuckMcSneed/guides | ---
license: wtfpl
---
[LOCAL LLM SPEEDRUN GUIDE](LOCAL%20LLM%20SPEEDRUN%20GUIDE.pdf)
- Guide for quick local LLM setup |
enelpe/MorSpra_all | ---
dataset_info:
features:
- name: Sentences
sequence: string
- name: Labels
sequence: int64
splits:
- name: train
num_bytes: 7587838
num_examples: 23196
download_size: 0
dataset_size: 7587838
---
# Dataset Card for "MorSpra_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rubentito/mp-docvqa | ---
pretty_name: MP-DocVQA (Multipage Document Visual Question Answering)
license: mit
task_categories:
- question-answering
- document-question-answering
- document-visual-question-answering
language:
- en
multilinguality:
- monolingual
source_datasets:
- Single Page Document Visual Question Answering
---
# Dataset Card for Multipage Document Visual Question Answering (MP-DocVQA)
## Dataset Description
- **Homepage: [Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=introduction)**
- **Repository: [Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=downloads)**
- **Paper: [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935.pdf])**
- **Leaderboard: [Task 4 of DocVQA on the Robust Reading Competition Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4)**
### Dataset Summary
The dataset is designed for Visual Question Answering on multi-page industry scanned documents. The questions and answers are reused from the Single Page DocVQA (SP-DocVQA) dataset. The images also correspond to those in the original dataset, extended with the preceding and following pages, up to a limit of 20 pages per document.
### Download the Dataset
The dataset is not integrated with Huggingface yet. But you can download it from the [DocVQA Challenge](https://rrc.cvc.uab.es/?ch=17) in the RRC Portal, [Downloads section](https://rrc.cvc.uab.es/?ch=17&com=downloads).
### Leaderboard
You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| | Train | Validation | Test | Total |
|----------|:-----:|:-----------:|:------:|:-------:|
|**Questions** |36230 | 5187 |5019 | 46436 |
|**Documents** |5131 | 927 |959 | 5929 |
|**Pages / Images** |37269 | 6510 |6223 | 47952 |
Note that some documents might appear in both the validation and test sets, but they are never seen during training.
### Citation Information
```tex
@article{tito2022hierarchical,
title={Hierarchical multimodal transformers for Multi-Page DocVQA},
author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
journal={arXiv preprint arXiv:2212.05935},
year={2022}
}
```
|
Carlosgg14/gojovoicemakers | ---
license: openrail
---
|
irds/mmarco_v2_zh | ---
pretty_name: '`mmarco/v2/zh`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `mmarco/v2/zh`
The `mmarco/v2/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/zh).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=8,841,823
This dataset is used by: [`mmarco_v2_zh_dev`](https://huggingface.co/datasets/irds/mmarco_v2_zh_dev), [`mmarco_v2_zh_train`](https://huggingface.co/datasets/irds/mmarco_v2_zh_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/mmarco_v2_zh', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Bonifacio2021MMarco,
title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
journal={arXiv:2108.13897}
}
```
|
autoevaluate/autoeval-eval-phpthinh__exampletx-toxic-7252ee-1708159806 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- phpthinh/exampletx
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-7b1
metrics: []
dataset_name: phpthinh/exampletx
dataset_config: toxic
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: phpthinh/exampletx
* Config: toxic
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@phpthinh](https://huggingface.co/phpthinh) for evaluating this model. |
DZN222/taspio | ---
license: openrail
---
|
abhinand/argilla-dpo-mix-7k-singleturn | ---
dataset_info:
features:
- name: dataset
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_rating
dtype: float64
- name: rejected_rating
dtype: float64
- name: prompt
dtype: string
- name: system
dtype: string
splits:
- name: train
num_bytes: 14543208
num_examples: 4901
download_size: 8237623
dataset_size: 14543208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/lana_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of lana (Fire Emblem)
This is the dataset of lana (Fire Emblem), containing 22 images and their tags.
The core tags of this character are `short_hair, brown_eyes, orange_hair, brown_hair, blonde_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 22 | 20.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lana_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 22 | 14.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lana_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 41 | 26.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lana_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 22 | 20.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lana_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 41 | 33.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lana_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/lana_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------|
| 0 | 22 |  |  |  |  |  | 1girl, smile, solo, open_mouth, blush, dress, staff |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | open_mouth | blush | dress | staff |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-------------|:--------|:--------|:--------|
| 0 | 22 |  |  |  |  |  | X | X | X | X | X | X | X |
|
dumyy/test_dummy | ---
license: openrail
dataset_info:
features:
- name: pokemon
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 43
num_examples: 2
download_size: 1219
dataset_size: 43
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
introspector/meta-coq-utils | ---
license: mit
---
|
Tverous/misinfo-clusters3 | ---
dataset_info:
features:
- name: cluster_id
dtype: string
- name: doc_id
dtype: string
- name: main_text
dtype: string
- name: image
dtype: image
- name: video
dtype: string
- name: audio
dtype: string
- name: kg_embedding
sequence:
sequence: float32
splits:
- name: train
num_bytes: 198061.0
num_examples: 1
download_size: 177682
dataset_size: 198061.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "misinfo-clusters3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Giacinta/djy | ---
license: apache-2.0
task_categories:
- text-classification
language:
- zh
tags:
- medical
pretty_name: djy
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: "60_percent_data.csv"
- split: test
path: "part1.csv"
- split: eval
path: "part2.csv"
--- |
CerebralAI/ActionRoutes_Phi2_ZeroShot | ---
dataset_info:
features:
- name: texts
dtype: string
splits:
- name: train
num_bytes: 7101466
num_examples: 5020
download_size: 1044489
dataset_size: 7101466
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chrisociepa/wikipedia-pl-20230401 | ---
license: cc-by-sa-3.0
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2883878741
num_examples: 1562327
download_size: 1761971402
dataset_size: 2883878741
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
language:
- pl
pretty_name: Polish Wikipedia 2023-04-01
size_categories:
- 1M<n<10M
tags:
- pretraining
- language modelling
- wikipedia
- web
---
# Dataset Card for April 2023 Polish Wikipedia
Wikipedia dataset containing cleaned articles of Polish language.
The dataset has been built from the Wikipedia dump (https://dumps.wikimedia.org/)
using the [OLM Project](https://github.com/huggingface/olm-datasets).
Each example contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
``` |
mertllc/f | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 10350476.0
num_examples: 500
download_size: 10292806
dataset_size: 10350476.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
as-cle-bert/breastcancer-semantic-segmentation | ---
license: cc
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 48963186.0
num_examples: 40
download_size: 9355520
dataset_size: 48963186.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
grasshoff/lhc_sents | ---
license: bsd
---
|
Saaddazhhar/predictiveswotanalysis | ---
license: cc0-1.0
---
|
BangumiBase/sailormoon1990s | ---
license: mit
tags:
- art
size_categories:
- 10K<n<100K
---
# Bangumi Image Base of Sailor Moon (1990s)
This is the image base of the bangumi Sailor Moon (1990s). We detected 132 characters and 14,684 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be fully cleaned; they may contain noise.** If you intend to manually train models on this dataset, we recommend performing the necessary preprocessing on the downloaded data to eliminate potentially noisy samples (roughly 1% of images).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:----------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|
| 0 | 3008 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 94 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 696 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 49 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 29 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 176 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 95 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 72 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 180 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 75 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 108 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 113 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 32 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 42 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 47 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 602 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 1066 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 395 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 208 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 79 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 86 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 62 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 50 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 53 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 76 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 141 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 67 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 45 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 750 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 103 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 34 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 42 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 20 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 67 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 79 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 40 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 45 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 118 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 41 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 62 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 93 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 79 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 920 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 55 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 75 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 36 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 15 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 126 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 41 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 46 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 100 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 121 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 36 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 102 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 50 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 105 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 47 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 60 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 26 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 47 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 79 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 74 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 11 | [Download](62/dataset.zip) |  |  |  |  |  |  |  |  |
| 63 | 73 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 30 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 32 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 102 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 17 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 49 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 24 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 28 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 38 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 96 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 52 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 747 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 50 | [Download](75/dataset.zip) |  |  |  |  |  |  |  |  |
| 76 | 43 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 21 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 22 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 23 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 38 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 20 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 44 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 19 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 19 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 19 | [Download](85/dataset.zip) |  |  |  |  |  |  |  |  |
| 86 | 11 | [Download](86/dataset.zip) |  |  |  |  |  |  |  |  |
| 87 | 48 | [Download](87/dataset.zip) |  |  |  |  |  |  |  |  |
| 88 | 18 | [Download](88/dataset.zip) |  |  |  |  |  |  |  |  |
| 89 | 14 | [Download](89/dataset.zip) |  |  |  |  |  |  |  |  |
| 90 | 24 | [Download](90/dataset.zip) |  |  |  |  |  |  |  |  |
| 91 | 19 | [Download](91/dataset.zip) |  |  |  |  |  |  |  |  |
| 92 | 10 | [Download](92/dataset.zip) |  |  |  |  |  |  |  |  |
| 93 | 10 | [Download](93/dataset.zip) |  |  |  |  |  |  |  |  |
| 94 | 33 | [Download](94/dataset.zip) |  |  |  |  |  |  |  |  |
| 95 | 28 | [Download](95/dataset.zip) |  |  |  |  |  |  |  |  |
| 96 | 58 | [Download](96/dataset.zip) |  |  |  |  |  |  |  |  |
| 97 | 13 | [Download](97/dataset.zip) |  |  |  |  |  |  |  |  |
| 98 | 29 | [Download](98/dataset.zip) |  |  |  |  |  |  |  |  |
| 99 | 17 | [Download](99/dataset.zip) |  |  |  |  |  |  |  |  |
| 100 | 32 | [Download](100/dataset.zip) |  |  |  |  |  |  |  |  |
| 101 | 21 | [Download](101/dataset.zip) |  |  |  |  |  |  |  |  |
| 102 | 27 | [Download](102/dataset.zip) |  |  |  |  |  |  |  |  |
| 103 | 22 | [Download](103/dataset.zip) |  |  |  |  |  |  |  |  |
| 104 | 11 | [Download](104/dataset.zip) |  |  |  |  |  |  |  |  |
| 105 | 7 | [Download](105/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 106 | 12 | [Download](106/dataset.zip) |  |  |  |  |  |  |  |  |
| 107 | 14 | [Download](107/dataset.zip) |  |  |  |  |  |  |  |  |
| 108 | 22 | [Download](108/dataset.zip) |  |  |  |  |  |  |  |  |
| 109 | 21 | [Download](109/dataset.zip) |  |  |  |  |  |  |  |  |
| 110 | 25 | [Download](110/dataset.zip) |  |  |  |  |  |  |  |  |
| 111 | 45 | [Download](111/dataset.zip) |  |  |  |  |  |  |  |  |
| 112 | 11 | [Download](112/dataset.zip) |  |  |  |  |  |  |  |  |
| 113 | 23 | [Download](113/dataset.zip) |  |  |  |  |  |  |  |  |
| 114 | 14 | [Download](114/dataset.zip) |  |  |  |  |  |  |  |  |
| 115 | 39 | [Download](115/dataset.zip) |  |  |  |  |  |  |  |  |
| 116 | 17 | [Download](116/dataset.zip) |  |  |  |  |  |  |  |  |
| 117 | 27 | [Download](117/dataset.zip) |  |  |  |  |  |  |  |  |
| 118 | 56 | [Download](118/dataset.zip) |  |  |  |  |  |  |  |  |
| 119 | 19 | [Download](119/dataset.zip) |  |  |  |  |  |  |  |  |
| 120 | 17 | [Download](120/dataset.zip) |  |  |  |  |  |  |  |  |
| 121 | 14 | [Download](121/dataset.zip) |  |  |  |  |  |  |  |  |
| 122 | 12 | [Download](122/dataset.zip) |  |  |  |  |  |  |  |  |
| 123 | 103 | [Download](123/dataset.zip) |  |  |  |  |  |  |  |  |
| 124 | 39 | [Download](124/dataset.zip) |  |  |  |  |  |  |  |  |
| 125 | 15 | [Download](125/dataset.zip) |  |  |  |  |  |  |  |  |
| 126 | 19 | [Download](126/dataset.zip) |  |  |  |  |  |  |  |  |
| 127 | 11 | [Download](127/dataset.zip) |  |  |  |  |  |  |  |  |
| 128 | 15 | [Download](128/dataset.zip) |  |  |  |  |  |  |  |  |
| 129 | 8 | [Download](129/dataset.zip) |  |  |  |  |  |  |  |  |
| 130 | 9 | [Download](130/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 528 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
tyzhu/squad_title_v4_train_30_eval_10_permute5 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 625263.266025641
num_examples: 399
- name: validation
num_bytes: 50807
num_examples: 50
download_size: 144382
dataset_size: 676070.266025641
---
# Dataset Card for "squad_title_v4_train_30_eval_10_permute5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/find_second_sent_train_50_eval_40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 180317
num_examples: 140
- name: validation
num_bytes: 39419
num_examples: 40
download_size: 0
dataset_size: 219736
---
# Dataset Card for "find_second_sent_train_50_eval_40"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/tatari_kogasa_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tatari_kogasa/祟小傘 (Touhou)
This is the dataset of tatari_kogasa/祟小傘 (Touhou), containing 27 images and their tags.
The core tags of this character are `blue_hair, red_eyes, blue_eyes, heterochromia, breasts, short_hair, medium_breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 27 | 25.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 27 | 16.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 52 | 30.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 27 | 22.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 52 | 40.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tatari_kogasa_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, solo, nipples, blush, karakasa_obake, purple_umbrella, tongue, navel, panties, nude, open_clothes, pussy, shirt |
| 1 | 6 |  |  |  |  |  | 1girl, alternate_hair_length, long_hair, solo, dress, smile, aged_up, cleavage |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | nipples | blush | karakasa_obake | purple_umbrella | tongue | navel | panties | nude | open_clothes | pussy | shirt | alternate_hair_length | long_hair | dress | smile | aged_up | cleavage |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------|:--------|:-----------------|:------------------|:---------|:--------|:----------|:-------|:---------------|:--------|:--------|:------------------------|:------------|:--------|:--------|:----------|:-----------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | | | | | | | | | | | | X | X | X | X | X | X |
|
tyzhu/find_word_train_10000_eval_100 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1441035
num_examples: 20100
- name: eval_find_word
num_bytes: 5323
num_examples: 100
download_size: 0
dataset_size: 1446358
---
# Dataset Card for "find_word_train_10000_eval_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
links-ads/mmflood | ---
license: mit
task_categories:
- image-segmentation
language:
- en
tags:
- semantic segmentation
- remote sensing
- sentinel-1
- flood
pretty_name: MMFlood
size_categories:
- 1K<n<10K
---
# MMFlood
A Multimodal Dataset for Flood Delineation from Satellite Imagery.

## Download
The dataset has been split into zipped tar parts; you will need to recompose it before working with it:
```bash
# clone the repository
$ git clone git@hf.co:datasets/links-ads/mmflood
# rebuild and extract the files
$ cat activations.tar.*.gz.part > activations.tar.gz
$ tar -xvzf activations.tar.gz
```
## Structure
The dataset is organized in directories, with a JSON file providing metadata and other information such as the split configuration we selected. Its internal structure is as follows:
```
activations/
├─ EMSR107-1/
├─ .../
├─ EMSR548-0/
│ ├─ DEM/
│ │ ├─ EMSR548-0-0.tif
│ │ ├─ EMSR548-0-1.tif
│ │ ├─ ...
│ ├─ hydro/
│ │ ├─ EMSR548-0-0.tif
│ │ ├─ EMSR548-0-1.tif
│ │ ├─ ...
│ ├─ mask/
│ │ ├─ EMSR548-0-0.tif
│ │ ├─ EMSR548-0-1.tif
│ │ ├─ ...
│ ├─ s1_raw/
│ │ ├─ EMSR548-0-0.tif
│ │ ├─ EMSR548-0-1.tif
│ │ ├─ ...
activations.json
```
Each folder is named after the Copernicus EMS code it refers to. Since most of them actually contain more than one area, an incremental counter is added to the name, e.g., `EMSR458-0`, `EMSR458-1` and so on.
Inside each EMSR folder there are four subfolders containing every available modality and the ground truth, in GeoTIFF format:
- DEM: contains the Digital Elevation Model
- hydro: contains the hydrography map for that region, if present
- s1_raw: contains the Sentinel-1 image in VV-VH format
- mask: contains the flood map, rasterized from EMS polygons
Every EMSR subregion contains a variable number of tiles. However, for the same area, each modality always contains the same number of files with the same names. Names follow the format `<emsr_code>-<emsr_region>_<tile_count>`. For practical reasons (retrieval, storage), areas larger than 2500x2500 pixels were divided into large tiles.
> **Note:** Every modality is guaranteed to contain at least one image, except for the hydrography that may be missing.
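The naming scheme above can be parsed mechanically. A minimal sketch, assuming the hyphen-separated form seen in the directory tree (e.g. `EMSR548-0-1.tif`); the helper name `parse_tile_name` is ours, not part of the dataset tooling:

```python
import re

# Matches names like "EMSR548-0-1.tif": EMSR code, sub-region index, tile counter.
# The separator between region and tile is allowed to be "-" or "_" to cover both
# spellings of the format.
TILE_RE = re.compile(r"^(EMSR\d+)-(\d+)[-_](\d+)\.tif$")

def parse_tile_name(filename: str) -> dict:
    m = TILE_RE.match(filename)
    if m is None:
        raise ValueError(f"unrecognized tile name: {filename}")
    return {
        "emsr_code": m.group(1),
        "region": int(m.group(2)),
        "tile": int(m.group(3)),
    }

print(parse_tile_name("EMSR548-0-1.tif"))
```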
Finally, `activations.json` contains information about each EMS activation, extracted from the Copernicus Rapid Mapping site, for example:
```json
{
"EMSR107": {
...
},
"EMSR548": {
"title": "Flood in Eastern Sicily, Italy",
"type": "Flood",
"country": "Italy",
"start": "2021-10-27T11:31:00",
"end": "2021-10-28T12:35:19",
"lat": 37.435056244442684,
"lon": 14.954437192250033,
"subset": "test",
"delineations": [
"EMSR548_AOI01_DEL_PRODUCT_r1_VECTORS_v1_vector.zip"
]
},
}
```
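Since the split assignment lives in the `subset` field of each activation entry, recovering the train/val/test partition is a small stdlib exercise. A sketch (the function name `codes_by_subset` is ours; in practice you would `json.load` the real `activations.json`):

```python
import json
from collections import defaultdict

def codes_by_subset(metadata: dict) -> dict:
    """Group EMSR activation codes by their declared subset (train/val/test)."""
    groups = defaultdict(list)
    for code, info in metadata.items():
        groups[info.get("subset", "unknown")].append(code)
    return dict(groups)

# Normally: metadata = json.load(open("activations.json"))
# Here, a fragment mirroring the example above:
metadata = {
    "EMSR548": {"title": "Flood in Eastern Sicily, Italy", "subset": "test"},
}
print(codes_by_subset(metadata))
```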
## Data specifications
| Image | Description | Format | Bands |
|:---------|:-------------------------------|:--------|:----------------------|
| S1 raw | Sentinel-1 (IW GRD) | GeoTIFF | Float32, 0: VV, 1: VH |
| DEM | MapZen Digital Elevation Model | GeoTIFF | Float32, 0: elevation |
| Hydrogr. | Permanent water basins, OSM | GeoTIFF | Uint8, 0: hydro |
| Mask | Ground truth label, CEMS | GeoTIFF | Uint8, 0: gt |
### Image metadata
Every image also contains the following contextual information, as GDAL metadata tags:
```xml
<GDALMetadata>
<Item name="acquisition_date">2021-10-31T16:56:28</Item>
<Item name="code">EMSR548-0</Item>
<Item name="country">Italy</Item>
<Item name="event_date">2021-10-27T11:31:00</Item>
</GDALMetadata>
```
- `acquisition_date` refers to the acquisition timestamp of the Sentinel-1 image
- `event_date` refers to the official event start date reported by Copernicus EMS
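In practice these tags are exposed by GDAL/rasterio as ordinary key-value metadata, but the XML fragment above can also be parsed directly with the standard library. A minimal sketch (the helper name `parse_gdal_metadata` is ours):

```python
import xml.etree.ElementTree as ET

# Sample metadata block, copied from the example above.
GDAL_XML = """<GDALMetadata>
  <Item name="acquisition_date">2021-10-31T16:56:28</Item>
  <Item name="code">EMSR548-0</Item>
  <Item name="country">Italy</Item>
  <Item name="event_date">2021-10-27T11:31:00</Item>
</GDALMetadata>"""

def parse_gdal_metadata(xml_text: str) -> dict:
    """Turn a <GDALMetadata> block into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {item.get("name"): item.text for item in root.findall("Item")}

tags = parse_gdal_metadata(GDAL_XML)
print(tags["acquisition_date"])
```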
## Run experiments
You can find the associated code in the following repository:
```console
git clone git@github.com:edornd/mmflood.git && cd mmflood
python3 -m venv .venv
source .venv/bin/activate  # activate the venv so deps install into it
pip install -r requirements.txt
```
Everything goes through the `run` command. Run `python run.py --help` for more information about commands and their arguments.
### Data preparation
To prepare the raw data by tiling and preprocessing, you can run: `python run.py prepare --data-source [PATH_TO_ACTIVATIONS] --data-processed [DESTINATION]`
### Training
Training uses Hugging Face Accelerate to provide single-GPU and multi-GPU support. To launch on a single GPU:
```console
CUDA_VISIBLE_DEVICES=... python run.py train [ARGS]
```
You can find an example script with parameters in the scripts folder.
### Testing
Testing is run on non-tiled images (the preprocessing will format them without tiling). You can run the test on a single GPU using the test command. At the very least, you need to point the script to the output directory. If no checkpoint is provided, the best one (according to the monitored metric) will be selected automatically. You can also avoid storing outputs with `--no-store-predictions`.
```console
CUDA_VISIBLE_DEVICES=... python run.py test --data-root [PATH_TO_OUTPUT_DIR] [--checkpoint-path [PATH]]
```
## Data Attribution and Licenses
For the realization of this project, the following data sources were used:
- Copernicus EMS
- Copernicus Sentinel-1
- MapZen/TileZen Elevation
- OpenStreetMap water layers
|
TaiyouIllusion/wiki40b_binidx | ---
license: other
---
|