| datasetId | card |
|---|---|
awettig/Pile-Wikipedia-0.5B-8K-opt | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 4836195702
num_examples: 61035
- name: test
num_bytes: 64969880
num_examples: 610
download_size: 1264066847
dataset_size: 4901165582
---
# Dataset Card for "Pile-Wikipedia-0.5B-8K-opt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pawan2411/kdf_train1 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: relation
dtype: string
splits:
- name: train
num_bytes: 6582592.170553064
num_examples: 20049
- name: test
num_bytes: 6894.829446935725
num_examples: 21
download_size: 3123795
dataset_size: 6589487.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Kamaljp/earnings_3000 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: id
dtype: int64
- name: fiscal_end
dtype: string
- name: consensus_eps_forecast
dtype: float64
- name: high_eps_forecast
dtype: float64
- name: low_eps_forecast
dtype: float64
- name: no_of_estimates
dtype: int64
- name: up
dtype: int64
- name: down
dtype: int64
splits:
- name: train
num_bytes: 267825
num_examples: 3000
download_size: 26980
dataset_size: 267825
---
# Dataset Card for "earnings_3000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
argilla/twitter-genderbias | ---
language:
- es
license:
- unknown
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-analysis
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 573508
num_examples: 1914
download_size: 373847
dataset_size: 573508
---
# Dataset Card for "twitter-genderbias"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/kevinmorgado/gender-bias-spanish
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
This dataset contains more than 1900 Spanish tweets, each labeled as either biased or non-biased. It was created for a hackathon aimed at reducing gender bias on the internet.
- contents: Text
- label:
- biased
- non-biased
### Languages
Spanish
### Citation Information
https://www.kaggle.com/datasets/kevinmorgado/gender-bias-spanish
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. |
ajmangus/qm_alice_hard_4_mixture_1.0e | ---
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: charlie_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 20101703.0
num_examples: 166263
- name: validation
num_bytes: 2026424.3333333333
num_examples: 16758
- name: test
num_bytes: 2012512.6666666667
num_examples: 16650
download_size: 6344627
dataset_size: 24140640.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
hdparmar/irish-traditional-tunes | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3322131399.86
num_examples: 9604
download_size: 3282715107
dataset_size: 3322131399.86
license: mit
task_categories:
- text-to-image
- text-to-audio
language:
- en
tags:
- music
pretty_name: Mel-Spectrograms for Irish Traditional Music
size_categories:
- 1K<n<10K
---
# Dataset Card for "irish-traditional-tunes"
## 1. Dataset Description
This dataset is used for the following project:
- **Homepage:** [Trad-fusion](https://github.com/hdparmar/Tradi-fusion)
### 1.1 Dataset Summary
This dataset contains 9604 Mel spectrograms that represent Traditional Irish Music.
This dataset is smaller than [hdparmar/irish-tunes-spectrogram](https://huggingface.co/datasets/hdparmar/irish-tunes-spectrograms), which reduces training time and makes it feasible to train for more steps per batch.
Each spectrogram image is a 5-second split of audio, rendered at 512x512 with 3 channels (mimicking RGB), since most text-to-image models are trained on 3-channel inputs.
Some publications suggest that using 3 channels for a Mel spectrogram can improve generalisation, even though the other two channels are just copies of the first.
The simple trick used here is to convert the grayscale spectrogram to RGB with cv2.
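As a minimal sketch of this channel replication (using NumPy here rather than cv2, which achieves the same result via `cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)`; the array below is a dummy stand-in for a real spectrogram):

```python
import numpy as np

def gray_to_rgb(spectrogram: np.ndarray) -> np.ndarray:
    """Replicate a single-channel mel spectrogram into 3 identical
    channels, mimicking cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)."""
    return np.repeat(spectrogram[..., np.newaxis], 3, axis=-1)

# Dummy 512x512 grayscale spectrogram in place of a real one.
gray = np.random.rand(512, 512).astype(np.float32)
rgb = gray_to_rgb(gray)
print(rgb.shape)  # (512, 512, 3)
```

All three output channels carry identical values, so no information is added; the conversion only matches the input shape that 3-channel text-to-image models expect.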
The primary objective of this dataset is to serve as an abundant resource for those venturing into the fields of music analysis, machine learning, and artificial intelligence.
### 1.2 Languages
The dataset's metadata and documentation are all in English, ensuring accessibility and comprehension.
## 2. Dataset Structure
### 2.1 Data Instances
Each data instance in this dataset is composed of two main elements: an image and a text caption.
The image is a mel spectrogram that reflects a snippet of a traditional Irish tune. Accompanying it is a text field that serves as its caption.
#### Example:
The metadata.csv file of the dataset is in this format:
```
{"file_name": "path/to/the/image.png",
"text": "An Irish Traditional Tune"}
```
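For illustration only (the exact tooling used to build the metadata file is not documented here, and the file paths below are hypothetical), entries in this format can be generated with the standard library:

```python
import json

# Hypothetical image paths; the real dataset uses whatever its
# spectrogram files are named.
files = ["spectrograms/tune_0001.png", "spectrograms/tune_0002.png"]

# Every image shares the same uniform caption used throughout the dataset.
entries = [{"file_name": f, "text": "An Irish Traditional Tune"} for f in files]

# One JSON object per line, matching the example above.
lines = [json.dumps(e) for e in entries]
print(lines[0])
```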
### 2.2 Data Fields
- **file_name**: This is the field that contains the path leading to the image file. It's the specific location where you can find each piece of the dataset.
- **text**: This is the caption accompanying each image. For the sake of uniformity and ease, the caption for every image is "An Irish Traditional Tune."
### 2.3 Data Splits
As of the current version, the dataset consists solely of a training split. Additional data splits like validation or testing may be introduced in future iterations of the dataset.
### 2.4 Uniform Captions: A Special Note
All the spectrograms in this dataset come labeled with a uniform caption: "An Irish Traditional Tune."
This consistency can be advantageous, especially in text-to-image tasks that focus primarily on image-based features, with the caption acting as a generalized label.
## NOTE
Further information to follow. The same caption is used for all mel spectrograms for ease of producing the dataset. |
Amirkid/redditjokes | ---
license: creativeml-openrail-m
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 103220535
num_examples: 578634
download_size: 67652707
dataset_size: 103220535
---
|
AdapterOcean/med_alpaca_standardized_cluster_60_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 17272033
num_examples: 47643
download_size: 8772326
dataset_size: 17272033
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_60_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mayflowergmbh/wiki_qa_de | ---
task_categories:
- text-generation
language:
- de
---
A German translation of the [wiki_qa](https://huggingface.co/datasets/wiki_qa) dataset.
Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k).
Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️.
Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory). |
whu9/word_net_synset_lemma | ---
dataset_info:
features:
- name: entity1
dtype: string
- name: entity2
dtype: string
splits:
- name: train
num_bytes: 3035746.7327058916
num_examples: 109462
download_size: 1859404
dataset_size: 3035746.7327058916
---
# Dataset Card for "word_net_synset_lemma"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_TeeZee__GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k | ---
pretty_name: Evaluation run of TeeZee/GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TeeZee/GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k](https://huggingface.co/TeeZee/GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TeeZee__GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-31T06:53:39.615413](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k/blob/main/results_2024-03-31T06-53-39.615413.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6476001236347543,\n\
\ \"acc_stderr\": 0.031649890137086564,\n \"acc_norm\": 0.659414865893038,\n\
\ \"acc_norm_stderr\": 0.032504129449161145,\n \"mc1\": 0.37454100367197063,\n\
\ \"mc1_stderr\": 0.01694353512840533,\n \"mc2\": 0.534598735977796,\n\
\ \"mc2_stderr\": 0.01466419006488303\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6160409556313993,\n \"acc_stderr\": 0.01421244498065189,\n\
\ \"acc_norm\": 0.6527303754266212,\n \"acc_norm_stderr\": 0.013913034529620446\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.663114917347142,\n\
\ \"acc_stderr\": 0.00471679287443321,\n \"acc_norm\": 0.8562039434375622,\n\
\ \"acc_norm_stderr\": 0.0035016571073867068\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7302631578947368,\n \"acc_stderr\": 0.03611780560284898,\n\
\ \"acc_norm\": 0.7302631578947368,\n \"acc_norm_stderr\": 0.03611780560284898\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.69,\n\
\ \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6981132075471698,\n \"acc_stderr\": 0.028254200344438665,\n\
\ \"acc_norm\": 0.6981132075471698,\n \"acc_norm_stderr\": 0.028254200344438665\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"\
acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.048786087144669955,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.048786087144669955\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n\
\ \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.040824829046386284,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.040824829046386284\n \
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4523809523809524,\n \"acc_stderr\": 0.02563425811555495,\n \"\
acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.02563425811555495\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04426266681379909,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n\
\ \"acc_stderr\": 0.023157879349083525,\n \"acc_norm\": 0.7903225806451613,\n\
\ \"acc_norm_stderr\": 0.023157879349083525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n\
\ \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n\
\ \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8585858585858586,\n \"acc_stderr\": 0.02482590979334334,\n \"\
acc_norm\": 0.8585858585858586,\n \"acc_norm_stderr\": 0.02482590979334334\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9222797927461139,\n \"acc_stderr\": 0.019321805557223157,\n\
\ \"acc_norm\": 0.9222797927461139,\n \"acc_norm_stderr\": 0.019321805557223157\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635474,\n\
\ \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635474\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3814814814814815,\n \"acc_stderr\": 0.0296167189274976,\n \
\ \"acc_norm\": 0.3814814814814815,\n \"acc_norm_stderr\": 0.0296167189274976\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.03017680828897434,\n \
\ \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.03017680828897434\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3841059602649007,\n \"acc_stderr\": 0.03971301814719197,\n \"\
acc_norm\": 0.3841059602649007,\n \"acc_norm_stderr\": 0.03971301814719197\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8422018348623853,\n \"acc_stderr\": 0.015630022970092437,\n \"\
acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.015630022970092437\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6064814814814815,\n \"acc_stderr\": 0.03331747876370312,\n \"\
acc_norm\": 0.6064814814814815,\n \"acc_norm_stderr\": 0.03331747876370312\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8627450980392157,\n \"acc_stderr\": 0.02415222596280158,\n \"\
acc_norm\": 0.8627450980392157,\n \"acc_norm_stderr\": 0.02415222596280158\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8227848101265823,\n \"acc_stderr\": 0.024856364184503214,\n \
\ \"acc_norm\": 0.8227848101265823,\n \"acc_norm_stderr\": 0.024856364184503214\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7040358744394619,\n\
\ \"acc_stderr\": 0.030636591348699813,\n \"acc_norm\": 0.7040358744394619,\n\
\ \"acc_norm_stderr\": 0.030636591348699813\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7099236641221374,\n \"acc_stderr\": 0.03980066246467765,\n\
\ \"acc_norm\": 0.7099236641221374,\n \"acc_norm_stderr\": 0.03980066246467765\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.036401182719909456,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.036401182719909456\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.03957835471980981,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.03957835471980981\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8301404853128991,\n\
\ \"acc_stderr\": 0.013428186370608303,\n \"acc_norm\": 0.8301404853128991,\n\
\ \"acc_norm_stderr\": 0.013428186370608303\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.02402774515526502,\n\
\ \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.02402774515526502\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32849162011173183,\n\
\ \"acc_stderr\": 0.01570793539849645,\n \"acc_norm\": 0.32849162011173183,\n\
\ \"acc_norm_stderr\": 0.01570793539849645\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.025553169991826517,\n\
\ \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.025553169991826517\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n\
\ \"acc_stderr\": 0.02592237178881876,\n \"acc_norm\": 0.7041800643086816,\n\
\ \"acc_norm_stderr\": 0.02592237178881876\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7716049382716049,\n \"acc_stderr\": 0.023358211840626267,\n\
\ \"acc_norm\": 0.7716049382716049,\n \"acc_norm_stderr\": 0.023358211840626267\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5106382978723404,\n \"acc_stderr\": 0.02982074719142244,\n \
\ \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.02982074719142244\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.49934810951760106,\n\
\ \"acc_stderr\": 0.012770225252255548,\n \"acc_norm\": 0.49934810951760106,\n\
\ \"acc_norm_stderr\": 0.012770225252255548\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7279411764705882,\n \"acc_stderr\": 0.02703304115168146,\n\
\ \"acc_norm\": 0.7279411764705882,\n \"acc_norm_stderr\": 0.02703304115168146\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6895424836601307,\n \"acc_stderr\": 0.01871806705262323,\n \
\ \"acc_norm\": 0.6895424836601307,\n \"acc_norm_stderr\": 0.01871806705262323\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128445,\n\
\ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128445\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n\
\ \"acc_stderr\": 0.02519692987482705,\n \"acc_norm\": 0.8507462686567164,\n\
\ \"acc_norm_stderr\": 0.02519692987482705\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \
\ \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.03891364495835816,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.03891364495835816\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.02954774168764004,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.02954774168764004\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.37454100367197063,\n\
\ \"mc1_stderr\": 0.01694353512840533,\n \"mc2\": 0.534598735977796,\n\
\ \"mc2_stderr\": 0.01466419006488303\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8271507498026835,\n \"acc_stderr\": 0.01062696452997186\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.000758150113722517,\n \
\ \"acc_stderr\": 0.0007581501137225419\n }\n}\n```"
repo_url: https://huggingface.co/TeeZee/GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|arc:challenge|25_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|arc:challenge|25_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|gsm8k|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|gsm8k|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hellaswag|10_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hellaswag|10_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T06-52-06.364927.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T06-53-39.615413.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-31T06-53-39.615413.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- '**/details_harness|winogrande|5_2024-03-31T06-52-06.364927.parquet'
- split: 2024_03_31T06_53_39.615413
path:
- '**/details_harness|winogrande|5_2024-03-31T06-53-39.615413.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-31T06-53-39.615413.parquet'
- config_name: results
data_files:
- split: 2024_03_31T06_52_06.364927
path:
- results_2024-03-31T06-52-06.364927.parquet
- split: 2024_03_31T06_53_39.615413
path:
- results_2024-03-31T06-53-39.615413.parquet
- split: latest
path:
- results_2024-03-31T06-53-39.615413.parquet
---
# Dataset Card for Evaluation run of TeeZee/GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [TeeZee/GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k](https://huggingface.co/TeeZee/GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TeeZee__GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-31T06:53:39.615413](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__GALAXY_v03_slimorca_1_epoch_50k_DPO_1_epoch_30k/blob/main/results_2024-03-31T06-53-39.615413.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6476001236347543,
"acc_stderr": 0.031649890137086564,
"acc_norm": 0.659414865893038,
"acc_norm_stderr": 0.032504129449161145,
"mc1": 0.37454100367197063,
"mc1_stderr": 0.01694353512840533,
"mc2": 0.534598735977796,
"mc2_stderr": 0.01466419006488303
},
"harness|arc:challenge|25": {
"acc": 0.6160409556313993,
"acc_stderr": 0.01421244498065189,
"acc_norm": 0.6527303754266212,
"acc_norm_stderr": 0.013913034529620446
},
"harness|hellaswag|10": {
"acc": 0.663114917347142,
"acc_stderr": 0.00471679287443321,
"acc_norm": 0.8562039434375622,
"acc_norm_stderr": 0.0035016571073867068
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7302631578947368,
"acc_stderr": 0.03611780560284898,
"acc_norm": 0.7302631578947368,
"acc_norm_stderr": 0.03611780560284898
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6981132075471698,
"acc_stderr": 0.028254200344438665,
"acc_norm": 0.6981132075471698,
"acc_norm_stderr": 0.028254200344438665
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.048786087144669955,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.048786087144669955
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6,
"acc_stderr": 0.040824829046386284,
"acc_norm": 0.6,
"acc_norm_stderr": 0.040824829046386284
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.02563425811555495,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.02563425811555495
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083525,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4876847290640394,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.4876847290640394,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.806060606060606,
"acc_stderr": 0.03087414513656209,
"acc_norm": 0.806060606060606,
"acc_norm_stderr": 0.03087414513656209
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8585858585858586,
"acc_stderr": 0.02482590979334334,
"acc_norm": 0.8585858585858586,
"acc_norm_stderr": 0.02482590979334334
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9222797927461139,
"acc_stderr": 0.019321805557223157,
"acc_norm": 0.9222797927461139,
"acc_norm_stderr": 0.019321805557223157
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6564102564102564,
"acc_stderr": 0.024078696580635474,
"acc_norm": 0.6564102564102564,
"acc_norm_stderr": 0.024078696580635474
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3814814814814815,
"acc_stderr": 0.0296167189274976,
"acc_norm": 0.3814814814814815,
"acc_norm_stderr": 0.0296167189274976
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.03017680828897434,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.03017680828897434
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3841059602649007,
"acc_stderr": 0.03971301814719197,
"acc_norm": 0.3841059602649007,
"acc_norm_stderr": 0.03971301814719197
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8422018348623853,
"acc_stderr": 0.015630022970092437,
"acc_norm": 0.8422018348623853,
"acc_norm_stderr": 0.015630022970092437
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6064814814814815,
"acc_stderr": 0.03331747876370312,
"acc_norm": 0.6064814814814815,
"acc_norm_stderr": 0.03331747876370312
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8627450980392157,
"acc_stderr": 0.02415222596280158,
"acc_norm": 0.8627450980392157,
"acc_norm_stderr": 0.02415222596280158
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8227848101265823,
"acc_stderr": 0.024856364184503214,
"acc_norm": 0.8227848101265823,
"acc_norm_stderr": 0.024856364184503214
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7040358744394619,
"acc_stderr": 0.030636591348699813,
"acc_norm": 0.7040358744394619,
"acc_norm_stderr": 0.030636591348699813
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7099236641221374,
"acc_stderr": 0.03980066246467765,
"acc_norm": 0.7099236641221374,
"acc_norm_stderr": 0.03980066246467765
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.036401182719909456,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.036401182719909456
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.03957835471980981,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.03957835471980981
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092375,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092375
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8301404853128991,
"acc_stderr": 0.013428186370608303,
"acc_norm": 0.8301404853128991,
"acc_norm_stderr": 0.013428186370608303
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.02402774515526502,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.02402774515526502
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.32849162011173183,
"acc_stderr": 0.01570793539849645,
"acc_norm": 0.32849162011173183,
"acc_norm_stderr": 0.01570793539849645
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.025553169991826517,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.025553169991826517
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7041800643086816,
"acc_stderr": 0.02592237178881876,
"acc_norm": 0.7041800643086816,
"acc_norm_stderr": 0.02592237178881876
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7716049382716049,
"acc_stderr": 0.023358211840626267,
"acc_norm": 0.7716049382716049,
"acc_norm_stderr": 0.023358211840626267
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5106382978723404,
"acc_stderr": 0.02982074719142244,
"acc_norm": 0.5106382978723404,
"acc_norm_stderr": 0.02982074719142244
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.49934810951760106,
"acc_stderr": 0.012770225252255548,
"acc_norm": 0.49934810951760106,
"acc_norm_stderr": 0.012770225252255548
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7279411764705882,
"acc_stderr": 0.02703304115168146,
"acc_norm": 0.7279411764705882,
"acc_norm_stderr": 0.02703304115168146
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6895424836601307,
"acc_stderr": 0.01871806705262323,
"acc_norm": 0.6895424836601307,
"acc_norm_stderr": 0.01871806705262323
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128445,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128445
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.02519692987482705,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.02519692987482705
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835816,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835816
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.02954774168764004,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.02954774168764004
},
"harness|truthfulqa:mc|0": {
"mc1": 0.37454100367197063,
"mc1_stderr": 0.01694353512840533,
"mc2": 0.534598735977796,
"mc2_stderr": 0.01466419006488303
},
"harness|winogrande|5": {
"acc": 0.8271507498026835,
"acc_stderr": 0.01062696452997186
},
"harness|gsm8k|5": {
"acc": 0.000758150113722517,
"acc_stderr": 0.0007581501137225419
}
}
```
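Two quick sanity checks on these numbers can be sketched in a few lines (the per-task values below are copied from the JSON above; the GSM8K test-set size of 1319 problems is an assumption, not stated in this card):

```python
import math

# 1) The top-level "all" block is a macro-average over per-task metrics.
#    Averaging a few of the per-task accuracies above illustrates the idea
#    (the real aggregate spans every task in the run):
task_acc = {
    "arc:challenge|25": 0.6160409556313993,
    "hellaswag|10": 0.663114917347142,
    "winogrande|5": 0.8271507498026835,
}
macro_avg = sum(task_acc.values()) / len(task_acc)
print(f"macro-average acc over {len(task_acc)} tasks: {macro_avg:.4f}")

# 2) The reported stderrs are consistent with the sample standard error of a
#    proportion, sqrt(p * (1 - p) / (n - 1)). For GSM8K, acc = 1/1319 (one
#    correct answer out of an assumed 1319 test problems), which makes the
#    stderr come out equal to the accuracy itself:
p, n = 0.000758150113722517, 1319
stderr = math.sqrt(p * (1 - p) / (n - 1))
print(f"gsm8k stderr: {stderr:.16f}")  # ~0.000758..., matching acc_stderr above
```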
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
hugfaceguy0001/LightNovels150kto200k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 162861301
num_examples: 347
download_size: 102412724
dataset_size: 162861301
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/niiya_serina_alicegearaegisexpansion | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Niiya Serina
This is the dataset of Niiya Serina, containing 27 images and their tags.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 27 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 68 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 70 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 27 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 27 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 27 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 68 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 68 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 65 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 70 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 70 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-110000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 643754
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Robson264/slaaa | ---
license: openrail
---
|
Multimodal-Fatima/VQAv2_test_no_image_split_5 | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_wo_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_with_openai
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
splits:
- name: test
num_bytes: 2150328389
num_examples: 44779
download_size: 551567211
dataset_size: 2150328389
---
# Dataset Card for "VQAv2_test_no_image_split_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigscience-data/roots_indic-gu_wikipedia | ---
language: gu
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-gu_wikipedia
# wikipedia
- Dataset uid: `wikipedia`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 3.2299 % of total
- 4.2071 % of en
- 5.6773 % of ar
- 3.3416 % of fr
- 5.2815 % of es
- 12.4852 % of ca
- 0.4288 % of zh
- 0.4286 % of zh
- 5.4743 % of indic-bn
- 8.9062 % of indic-ta
- 21.3313 % of indic-te
- 4.4845 % of pt
- 4.0493 % of indic-hi
- 11.3163 % of indic-ml
- 22.5300 % of indic-ur
- 4.4902 % of vi
- 16.9916 % of indic-kn
- 24.7820 % of eu
- 11.6241 % of indic-mr
- 9.8749 % of id
- 9.3489 % of indic-pa
- 9.4767 % of indic-gu
- 24.1132 % of indic-as
- 5.3309 % of indic-or
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
#### Filters applied to: zh
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: pt
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: id
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
#### Filters applied to: indic-or
- filter_wiki_user_titles
- dedup_document
- filter_remove_empty_docs
|
RAVIKUMAR/ddpm-butterflies-128 | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/HuggingFace7/ddpm-butterflies-128/tensorboard?#scalars)
|
distilled-one-sec-cv12-each-chunk-uniq/chunk_222 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1292596840.0
num_examples: 251870
download_size: 1325398891
dataset_size: 1292596840.0
---
# Dataset Card for "chunk_222"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shangrilar/ko_text2sql | ---
configs:
- config_name: origin
data_files:
- split: train
path: "origin/train.csv"
- split: test
path: "test.csv"
- config_name: clean
data_files:
- split: train
path: "clean/train.csv"
- split: test
path: "test.csv"
license: cc-by-4.0
---
|
zetavg/wikipedia_random_page_summaries_zh_tw_10k | ---
dataset_info:
features:
- name: page_title
dtype: string
- name: page_summary
dtype: string
splits:
- name: train
num_bytes: 3985664
num_examples: 9997
download_size: 2934142
dataset_size: 3985664
---
# Dataset Card for "wikipedia_random_page_summaries_zh_tw_10k"
`page_title` is the original Wikipedia page title, so it may be in Simplified Chinese. `page_summary` is always the Taiwan Traditional Chinese (zh-TW) version.
[vinta/pangu](https://github.com/vinta/pangu.js) was used to ensure spaces are inserted between Chinese and English text.
Generated by https://github.com/zetavg/LLM-Research/blob/3b79836/Wikipedia_Random_Page_Summaries_Dataset_Generator.ipynb. |
drwngwn/anime_conditioning_4000 | ---
dataset_info:
features:
- name: input_image
dtype: image
- name: reference_image
dtype: image
- name: target_image
dtype: image
splits:
- name: train
num_bytes: 1549678365.0
num_examples: 4000
download_size: 1480171875
dataset_size: 1549678365.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Thrawn/Jukebox_Thrawns_Rave_Collection | ---
license: mit
language:
- en
- de
tags:
- music
- audio
--- |
CCRss/qqp-Quora_Question_Pairs-kz | ---
license: mit
task_categories:
- text2text-generation
language:
- kk
size_categories:
- 100K<n<1M
---
## Kazakh Question Paraphrasing Dataset
This dataset, designed for paraphrasing tasks in the Kazakh language, is a valuable resource for natural language processing applications. It aids in the development and evaluation of models capable of understanding and generating paraphrased content while preserving the original meaning.
### Source and Translation Process
The dataset was sourced from the Quora Question Pairs and has been expertly translated into Kazakh. This translation process involved initial machine translation followed by thorough revision by native Kazakh speakers, ensuring the nuances and contextual integrity of the language were maintained.
### Usage and Application
This dataset is primarily intended for researchers and developers in computational linguistics, focusing on the Kazakh language. It's an excellent tool for creating and fine-tuning paraphrasing algorithms, enhancing language models' understanding of semantic similarity and variation in Kazakh.
### Dataset Summary
The dataset "CCRss/qqp-Quora_Question_Pairs-kz" is a rich collection of question pairs translated into Kazakh, suitable for training and evaluating natural language processing models. Each entry in the dataset contains a 'src' (source question) and 'trg' (target or paraphrased question), providing a comprehensive resource for understanding the nuances of question paraphrasing in Kazakh.
### Acknowledgments and References
We extend our gratitude to the original dataset providers at [https://www.kaggle.com/competitions/quora-question-pairs/data?select=test.csv.zip] and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language.
|
Aehus/optimus | ---
dataset_info:
features:
- name: new_input
dtype: string
- name: new_output
dtype: string
- name: new_instruction
dtype: string
splits:
- name: train
num_bytes: 9154
num_examples: 10
download_size: 11488
dataset_size: 9154
---
# Dataset Card for "optimus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
communityai/apt-openchat-micro-dataset-llm-v2-714k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: source
dtype: string
- name: system
dtype: string
- name: items
list:
- name: content
dtype: string
- name: role
dtype: string
- name: weight
dtype: int64
splits:
- name: train
num_bytes: 1726941274.2272484
num_examples: 713591
- name: test
num_bytes: 1210035.7727516522
num_examples: 500
download_size: 873623460
dataset_size: 1728151310.0
---
# Dataset Card for "apt-openchat-micro-dataset-llm-v2-714k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Narya-ai/summarization-dataset-update | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1694231
num_examples: 267
download_size: 864149
dataset_size: 1694231
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "summarization-dataset-update"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
neuralspace/NSME-COM | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
- text2text-generation
- other
- translation
- conversational
task_ids:
- extractive-qa
- closed-domain-qa
- utterance-retrieval
- document-retrieval
- closed-domain-qa
- open-book-qa
- closed-book-qa
pretty_name: Massive E-commerce Dataset for Retail and Insurance domain.
train-eval-index:
- config: nsds
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
sentence: text
label: target
metrics:
- type: nsme-com
name: NSME-COM
config:
nsds
tags:
- chatbots
- e-commerce
- retail
- insurance
- consumer
- consumer goods
configs:
- nsds
---
# Dataset Card for NSME-COM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
### Dataset Description
- **Homepage**: [NeuralSpace Homepage](https://huggingface.co/neuralspace)
- **Repository:** [NSME-COM Dataset](https://huggingface.co/datasets/neuralspace/NSME-COM)
- **Point of Contact:** [Ankur Saxena](mailto:ankursaxena@neuralspace.ai)
- **Point of Contact:** [Ayushman Dash](mailto:ayushman@neuralspace.ai)
- **Size of downloaded dataset files:** 10.86 KB
### Dataset Summary
In this digital age, the E-Commerce industry has increasingly become a vital component of business strategy and development. To streamline, enhance and take the customer experience to the highest level, NLP can help create surprisingly massive value in the E-Commerce industry.
One of the most popular NLP use-cases is a chatbot. With a chatbot you can automate your customer engagement saving yourself time and other resources. Offering an enhanced and simplified customer experience you can increase your sales and also offer your website visitors personalized recommendations.
The NSME-COM dataset (NeuralSpace Massive E-Comm) is a manually curated dataset by data engineers at [NeuralSpace](https://www.neuralspace.ai/) for the insurance and retail domain. The dataset contains intents (the action users want to execute) and examples (anything that a user sends to the chatbot) that can be used to build a chatbot. The files in this dataset are available in JSON format.
### Supported Tasks
#### nsme-com
### Languages
The language data in NSME-COM is in English (BCP-47 `en`)
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 10.86 KB
An example of 'test' looks as follows.
```
{
"text": "is it good to add roadside assistance?",
"intent": "Add",
"type": "Test"
}
```
An example of 'train' looks as follows.
```
{
"text": "how can I add my spouse as a nominee?",
"intent": "Add",
"type": "Train"
}
```
### Data Fields
The data fields are the same among all splits.
#### nsme-com
- `text`: a `string` feature.
- `intent`: a `string` feature.
- `type`: a classification label, with possible values including `train` or `test`.
### Data Splits
#### nsme-com
| |train|test|
|----|----:|---:|
|nsme-com| 1725| 406|
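Since each record carries its split in the `type` field, separating the raw JSON into train and test sets can be sketched as follows. This is a minimal illustration using the card's own two example records (the capitalized `Train`/`Test` values follow those examples):

```python
# Sketch: split NSME-COM records by their "type" field.
# The two records below are the example instances shown on this card.
records = [
    {"text": "how can I add my spouse as a nominee?", "intent": "Add", "type": "Train"},
    {"text": "is it good to add roadside assistance?", "intent": "Add", "type": "Test"},
]

train = [r for r in records if r["type"] == "Train"]
test = [r for r in records if r["type"] == "Test"]

print(len(train), len(test))  # 1 1
```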
### Contributions
Ankur Saxena (ankursaxena@neuralspace.ai) |
AbishekPalle/train | ---
license: openrail
---
|
ashokpoudel/personal | ---
license: unknown
---
|
zhangshuoming/final_c_x86_O0_exebench_non_numeric_full_20k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 19697117
num_examples: 20000
download_size: 5855242
dataset_size: 19697117
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imoxto/prompt_injection_hackaprompt_gpt35 | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 271856355
num_examples: 227042
download_size: 35972535
dataset_size: 271856355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "prompt_injection_hackaprompt_gpt35"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlexFierro9/imagenet-1k_test | ---
license: bsd-2-clause
---
|
valentinwerner/cameo_news | ---
task_categories:
- text-classification
- question-answering
- conversational
language:
- en
size_categories:
- 1K<n<10K
---
Dataset used in my thesis (https://github.com/valentinwerner1/Thesis_RelationExtraction_PoliticsNews)
Reformatted for training with LLMs, to experiment with whether they can improve performance. |
alexandreduplessis/LatexCorrection | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 133468.8510638298
num_examples: 93
- name: test
num_bytes: 320
num_examples: 1
download_size: 87916
dataset_size: 133788.8510638298
---
# Dataset Card for "LatexCorrection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SauravMaheshkar/tags-math-sx | ---
license: unknown
task_categories:
- graph-ml
tags:
- chemistry
configs:
- config_name: transductive
data_files:
- split: train
path: "processed/transductive/train_df.csv"
- split: valid
path: "processed/transductive/val_df.csv"
- split: test
path: "processed/transductive/test_df.csv"
- config_name: inductive
data_files:
- split: train
path: "processed/inductive/train_df.csv"
- split: valid
path: "processed/inductive/val_df.csv"
- split: test
path: "processed/inductive/test_df.csv"
- config_name: raw
data_files: "raw/*.txt"
---
Source Paper: https://arxiv.org/abs/1802.06916
### Usage
```python
from torch_geometric.datasets.cornell import CornellTemporalHyperGraphDataset
dataset = CornellTemporalHyperGraphDataset(root = "./", name="tags-math-sx", split="train")
```
### Citation
```bibtex
@article{Benson-2018-simplicial,
author = {Benson, Austin R. and Abebe, Rediet and Schaub, Michael T. and Jadbabaie, Ali and Kleinberg, Jon},
title = {Simplicial closure and higher-order link prediction},
year = {2018},
doi = {10.1073/pnas.1800683115},
publisher = {National Academy of Sciences},
issn = {0027-8424},
journal = {Proceedings of the National Academy of Sciences}
}
``` |
Wxlisson/vozzz | ---
license: openrail
---
|
infoslack/mistral-7b-arxiv-paper-chunked | ---
license: mit
language:
- en
---
This dataset contains chunked extracts from the [Mistral 7B research paper](https://arxiv.org/abs/2310.06825). |
kaleemWaheed/twitter_dataset_1713072456 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 14421
num_examples: 33
download_size: 9931
dataset_size: 14421
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yzhuang/metatree_BNG_sonar_ | ---
dataset_info:
features:
- name: id
dtype: int64
- name: X
sequence: float64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 349985000
num_examples: 699970
- name: validation
num_bytes: 150015000
num_examples: 300030
download_size: 568705383
dataset_size: 500000000
---
# Dataset Card for "metatree_BNG_sonar_"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
freshpearYoon/train_free_25 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 9604560968
num_examples: 10000
download_size: 1367816261
dataset_size: 9604560968
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tner/conll2003 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: CoNLL-2003
---
# Dataset Card for "tner/conll2003"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Dataset:** CoNLL 2003
- **Domain:** News
- **Number of Entities:** 4
### Dataset Summary
CoNLL-2003 NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `ORG`, `PER`, `LOC`, `MISC`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
  'tokens': ['SOCCER', '-', 'JAPAN', 'GET', 'LUCKY', 'WIN', ',', 'CHINA', 'IN', 'SURPRISE', 'DEFEAT', '.'],
  'tags': [0, 0, 5, 0, 0, 0, 0, 3, 0, 0, 0, 0]
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/conll2003/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-ORG": 1,
"B-MISC": 2,
"B-PER": 3,
"I-PER": 4,
"B-LOC": 5,
"I-ORG": 6,
"I-MISC": 7,
"I-LOC": 8
}
```
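For illustration, the mapping above can be inverted to decode integer tags back into label strings. This is a minimal sketch using the tag sequence from the `train` example shown earlier on this card:

```python
# Invert the card's label2id mapping and decode the example's integer tags.
label2id = {
    "O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4,
    "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8,
}
id2label = {i: label for label, i in label2id.items()}

tokens = ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",",
          "CHINA", "IN", "SURPRISE", "DEFEAT", "."]
tag_ids = [0, 0, 5, 0, 0, 0, 0, 3, 0, 0, 0, 0]

labels = [id2label[i] for i in tag_ids]
print(labels[2], labels[7])  # B-LOC B-PER
```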
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|
### Licensing Information
From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
### Citation Information
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
``` |
Multimodal-Fatima/OxfordFlowers_test_facebook_opt_1.3b_Visclues_ns_6149_random | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_1_bs_16
num_bytes: 270233527.375
num_examples: 6149
- name: fewshot_3_bs_16
num_bytes: 274949398.375
num_examples: 6149
download_size: 534137349
dataset_size: 545182925.75
---
# Dataset Card for "OxfordFlowers_test_facebook_opt_1.3b_Visclues_ns_6149_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
one-sec-cv12/chunk_263 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 18295895376.125
num_examples: 190487
download_size: 16168462092
dataset_size: 18295895376.125
---
# Dataset Card for "chunk_263"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-one-sec-cv12/chunk_221 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1157834236
num_examples: 227383
download_size: 1182633818
dataset_size: 1157834236
---
# Dataset Card for "chunk_221"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chintagunta85/bc2gm_test | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: null
pretty_name: Bc2GmCorpus
---
# Dataset Card for bc2gm_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Repository:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no gene mentioned, `1` signals the first token of a gene mention and `2` the subsequent gene-mention tokens.
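Decoding such token-level tags into entity spans can be sketched as below. Note that the label names (`B-GENE`/`I-GENE`) are an assumption for illustration only — the card specifies just the integer scheme:

```python
def bio_to_spans(tokens, tags, id2label):
    """Convert token-level BIO integer tags into (entity_text, entity_type) spans."""
    spans = []
    current_tokens, current_type = [], None
    for token, tag_id in zip(tokens, tags):
        label = id2label[tag_id]
        if label.startswith("B-"):
            if current_tokens:
                spans.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [token], label[2:]
        elif label.startswith("I-") and current_tokens:
            current_tokens.append(token)
        else:  # "O" tag, or a stray "I-" with no open entity
            if current_tokens:
                spans.append((" ".join(current_tokens), current_type))
            current_tokens, current_type = [], None
    if current_tokens:
        spans.append((" ".join(current_tokens), current_type))
    return spans

# Hypothetical label names; the card only defines 0 = outside, 1 = first token, 2 = inside.
id2label = {0: "O", 1: "B-GENE", 2: "I-GENE"}
tokens = ["Comparison", "with", "alkaline", "phosphatases", "and", "5", "-", "nucleotidase"]
tags = [0, 0, 1, 2, 0, 1, 2, 2]
print(bio_to_spans(tokens, tags, id2label))
# [('alkaline phosphatases', 'GENE'), ('5 - nucleotidase', 'GENE')]
```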
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@mahajandiwakar](https://github.com/mahajandiwakar) for adding this dataset.
|
hackathon-pln-es/MESD | ---
license: cc-by-4.0
---
# Dataset Card for MESD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://data.mendeley.com/datasets/cy34mh68j9/5
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Contains the data from the MESD database, processed for fine-tuning a Wav2Vec model during the hackathon organized by Somos NLP.
Reference example:
https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/audio_classification.ipynb
We accessed the MESD database to obtain examples.
Brief description from the authors of the MESD database:
"The Mexican Emotional Speech Database (MESD) provides single-word utterances for the affective prosodies of anger, disgust, fear, happiness, neutral and sadness with Mexican cultural shaping. The MESD has been uttered by non-professional adult and child actors: 3 female, 2 male and 6 child voices are available. The words in the emotional and neutral utterances come from two corpora: (corpus A) composed of nouns and adjectives that are repeated across emotional prosodies and voice types (female, male, child), and (corpus B) consisting of words controlled for age of acquisition, frequency of use, familiarity, concreteness, valence, arousal and ratings of discrete-emotion dimensionality.
The audio recordings were made in a professional studio with the following materials: (1) a Sennheiser e835 microphone with a flat frequency response (100 Hz to 10 kHz), (2) a Focusrite Scarlett 2i4 audio interface connected to the microphone with an XLR cable and to the computer, and (3) the digital audio workstation REAPER (Rapid Environment for Audio Production, Engineering, and Recording). The audio files were stored as 24-bit sequences with a sampling rate of 48000 Hz. The amplitude of the acoustic waveforms was rescaled between -1 and 1.
Two versions with reduced speaker naturalness were created from human emotional expressions for female voices of corpus B. Specifically, naturalness was progressively reduced from the human voices to level 1 and then to level 2. In particular, duration and mean pitch were edited on stressed syllables to reduce the difference between stressed and unstressed syllables. In full utterances, the F2/F1 and F3/F1 ratios were reduced by editing the F2 and F3 frequencies. The intensity of harmonics 1 and 4 was also reduced."
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Spanish
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `Origen`: text indicating whether the instance comes from the original MESD dataset or from the 'Speaker-embedded naturalness-reduced female voices' cases, where the authors synthetically generated new data by transforming some of the original audio instances.
- `Palabra`: text of the word that was read.
- `Emoción`: text of the emotion represented. Values: 'Enojo' (anger), 'Felicidad' (happiness), 'Miedo' (fear), 'Neutral', 'Disgusto' (disgust), 'Tristeza' (sadness).
- `InfoActor`: text indicating whether the voice is 'Niño' (child), 'Hombre' (man), or 'Mujer' (woman).
- `AudioArray`: audio array, resampled to 16 kHz.
### Data Splits
Train: 891 examples, a mix of MESD cases and 'Speaker-embedded naturalness-reduced female voices'.
Validation: 130 examples, all MESD cases.
Test: 129 examples, all MESD cases.
## Dataset Creation
### Curation Rationale
Merge the three data subsets and process them for the fine-tuning task, in line with the input expected by the Wav2Vec model.
### Source Data
#### Initial Data Collection and Normalization
Access to the raw data:
https://data.mendeley.com/datasets/cy34mh68j9/5
Conversion to audio array and resampling to 16 kHz.
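As an illustration of the resampling step: since 48000 is an exact multiple of 16000, a naive sketch is simple decimation by 3. Real pipelines use a proper resampler with anti-alias filtering (e.g. `librosa`, `torchaudio`, or `datasets.Audio(sampling_rate=16000)`), so this is only a conceptual stand-in:

```python
import numpy as np

sr_in, sr_out = 48_000, 16_000   # studio recording rate -> model input rate
factor = sr_in // sr_out          # exactly 3

one_second = np.random.randn(sr_in).astype(np.float32)  # stand-in waveform
resampled = one_second[::factor]  # naive decimation, no anti-alias filter

print(resampled.shape)  # (16000,)
```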
#### Who are the source language producers?
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons, [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
```
|
patruff/toxicForMistral | ---
dataset_info:
features:
- name: original
dtype: string
- name: chucklebot
dtype: string
splits:
- name: train
num_bytes: 16314446
num_examples: 5492
- name: test
num_bytes: 4091000
num_examples: 1374
download_size: 11348542
dataset_size: 20405446
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mncai/Fake_or_Real_Competition_Dataset | ---
license: apache-2.0
task_categories:
- image-classification
language:
- en
pretty_name: aiconnect_fake_or_real
---

2023 Fake or Real: AI-generated Image Discrimination Competition dataset is now available on Hugging Face!
---
Hello🖐️
We are excited to announce the release of the dataset for the 2023 Fake or Real: AI-generated Image Discrimination Competition. The competition was held on AI CONNECT(https://aiconnect.kr/) from June 26th to July 6th, 2023, with 768 participants.
If you're interested in evaluating the performance of your model on the test dataset, we encourage you to visit the [competition page](https://aiconnect.kr/competition/detail/227/task/295/taskInfo) on AI CONNECT and submit your results. Please note that it currently supports only Korean. Of course, we data scientists can always use Chrome translation, or even better, translation models🥳. Plus, a multilingual service will be provided in the (hopefully near) future, so please stay tuned!
# Background
As the advancement of generative AI technology has enabled the easy creation of indistinguishable fake information from genuine content, concerns regarding its misuse have surfaced. Image generation AI, in particular, has raised significant alarm due to its potential risks such as identity theft, revenge porn, and political manipulation. In response, it has become imperative to develop technologies that can effectively discern between real and AI-generated fake images.
The training dataset consists of diffusiondb (https://huggingface.co/datasets/poloclub/diffusiondb) and Flickr images, with the inclusion of some low-quality fake images. For the test dataset, we took measures to construct it in a manner that closely resembles real-world scenarios involving image misuse. We utilized multiple generative AI models, fine-tuned on diverse photorealistic datasets, and applied negative prompt keywords like 'cartoon' and 'too many fingers' to generate realistic images.
We hope this dataset encourages the development of robust solutions and stimulates discussions on tackling the challenges associated with AI-generated fake images.
Best Regards,
AI CONNECT |
pvduy/dpo_data_ultra | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: train_prefs
path: data/train_prefs-*
- split: test_prefs
path: data/test_prefs-*
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 160175363
num_examples: 38037
- name: test
num_bytes: 8556760
num_examples: 1964
- name: train_prefs
num_bytes: 160175363
num_examples: 38037
- name: test_prefs
num_bytes: 8556760
num_examples: 1964
download_size: 189460772
dataset_size: 337464246
---
# Dataset Card for "dpo_data_ultra"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sharathhebbar24/Evol-Instruct-Code-80k-v1 | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 123241726
num_examples: 78264
download_size: 52294178
dataset_size: 123241726
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- en
tags:
- code
pretty_name: code
size_categories:
- 10K<n<100K
---
# Evol-Instruct-Code-80k-v1
This is a cleansed version of [nickrosh/Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1)
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("Sharathhebbar24/Evol-Instruct-Code-80k-v1", split="train")
``` |
CarPeAs/first_dataset_iabd | ---
size_categories:
- 1K<n<10K
---
Extracted from <https://github.com/anthony-wang/BestPractices/tree/master/data>.
Fields:
* Formula (`string`)
* T (`float64`): Temperature (K)
* CP (`float64`): Heat capacity (J/mol K)
|
zh-tw-llm-dv/zh-tw-pythia-ta8000-v1-e1-tr_wiki_sg-001-c1024 | ---
dataset_info:
dataset_size: 1639035396.6266758
download_size: 549430210
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- dtype: string
name: preview
- dtype: int64
name: length
- dtype: int64
name: messages_count
splits:
- name: train
num_bytes: 1637688841.0831976
num_examples: 305956
- name: test
num_bytes: 1346555.543478261
num_examples: 225
---
# zh-tw-pythia-ta8000-v1-e1-tr_wiki_sg-001-c1024
This dataset is a part of the `zh-tw-llm` project.
* Tokenizer: `zh-tw-pythia-tokenizer-a8000-v1`
* Built with: `translations`, `wikipedia`, `sharegpt`
* Rows: `train` `305956`, `test` `225`
* Max length: `1024`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "sharegpt"], "preview_length": 128, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "use_template": "random", "rows_limit": 200000, "test_size": 100, "test_split_seed": 42}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.4}, "zh_Hant"], "rows_limit": 8000, "test_size": 0.02, "test_split_seed": 42, "test_rows_limit": 100}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia", "exclude": [{"content_length_longer_than": 1024}, {"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": ",是中國", "in": "markdown", "in_range": [0, 20]}, {"match": "中華人民共和國", "in": "markdown", "in_range": [0, 20]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 40]}], "rows_limit": 100000, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 30}}
``` |
quangcodecode/mbs-data-demo-RAG | ---
license: gpl
---
|
scribe-project/nbtale3 | ---
dataset_info:
features:
- name: speaker_id
dtype: string
- name: gender
dtype: string
- name: utterance_id
dtype: string
- name: language
dtype: string
- name: raw_text
dtype: string
- name: full_audio_file
dtype: string
- name: original_data_split
dtype: string
- name: region
dtype: string
- name: duration
dtype: float64
- name: start
dtype: float64
- name: end
dtype: float64
- name: utterance_audio_file
dtype: audio
- name: standardized_text
dtype: string
splits:
- name: train
num_bytes: 1233495883.99
num_examples: 8033
download_size: 1287266972
dataset_size: 1233495883.99
---
# Dataset Card for NB Tale, module 3 (< 15 sec. segments)
## Dataset Description
- **Homepage:**
- **Repository:** <https://github.com/scribe-project/nodalida_2023_combined_training>
- **Paper:**
```
@inproceedings{
solberg2023improving,
title={Improving Generalization of Norwegian {ASR} with Limited Linguistic Resources},
author={Per Erik Solberg and Pablo Ortiz and Phoebe Parsons and Torbj{\o}rn Svendsen and Giampiero Salvi},
booktitle={The 24th Nordic Conference on Computational Linguistics},
year={2023}
}
```
- **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)
### Dataset Summary
This is the version of the Bokmål segments of module 3 of NB Tale used for testing the models
in the paper *Improving Generalization of Norwegian ASR with Limited Linguistic Resources* presented at NoDaLiDa 2023.
It only contains segments of a length < 15 sec. This dataset contains both native and non-native speakers.
Speakers with `region` set to `foreign` were filtered out [when analyzing the data in the paper](https://github.com/scribe-project/nodalida_2023_combined_training/blob/main/analysis/analysis.ipynb).
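As an illustration, that filtering step amounts to dropping rows whose `region` field equals `foreign`. Below is a minimal sketch on made-up rows (the speaker IDs and other region values are hypothetical), not the actual analysis code:

```python
# Toy rows carrying the same `region` field as this dataset (values made up)
rows = [
    {"speaker_id": "spk01", "region": "oest"},
    {"speaker_id": "spk02", "region": "foreign"},
    {"speaker_id": "spk03", "region": "vest"},
]

# Keep only native speakers, as done in the paper's analysis
native_rows = [r for r in rows if r["region"] != "foreign"]
print([r["speaker_id"] for r in native_rows])
```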
### Languages
Norwegian Bokmål
## Dataset Creation
### Source Data
The full version of this dataset is found in [the repository of the Norwegian Language Bank](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-31/)
#### Initial Data Collection and Normalization
The data was retrieved using the [Spraakbanken downloader](https://pypi.org/project/spraakbanken-downloader/) and standardized
using the [combined dataset standardization scripts](https://github.com/scribe-project/asr-standardized-combined). Bokmål segments with a duration < 15 seconds were
extracted using [this code](https://github.com/scribe-project/nodalida_2023_combined_training/blob/main/make_datasets/make_nbtale_csvs.ipynb).
## Licensing Information
[CC0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{
solberg2023improving,
title={Improving Generalization of Norwegian {ASR} with Limited Linguistic Resources},
author={Per Erik Solberg and Pablo Ortiz and Phoebe Parsons and Torbj{\o}rn Svendsen and Giampiero Salvi},
booktitle={The 24th Nordic Conference on Computational Linguistics},
year={2023}
}
``` |
rdmpage/autotrain-data-pagex | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: pagex
## Dataset Description
This dataset has been automatically processed by AutoTrain for project pagex.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<235x313 RGB PIL image>",
"target": 1
},
{
"image": "<235x313 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['content', 'end', 'start'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 117 |
| valid | 30 |
|
batterydata/battery-device-data-qa | ---
language:
- en
license:
- apache-2.0
task_categories:
- question-answering
pretty_name: 'Battery Device Question Answering Dataset'
---
# Battery Device QA Data
Battery device records, including anode, cathode, and electrolyte.
Examples of the question answering evaluation dataset:
\{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight.', 'start index': 645\}
\{'question': 'What is the anode?', 'answer': 'Cu foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight. Finally, the obtained electrodes were cut into desired shapes on demand. It should be noted that the electrode mass ratio of cathode/anode is set to about 4, thus achieving the battery balance.', 'start index': 673\}
\{'question': 'What is the cathode?', 'answer': 'SiC/RGO nanocomposite', 'context': 'In conclusion, the SiC/RGO nanocomposite, integrating the synergistic effect of SiC flakes and RGO, was synthesized by an in situ gas–solid fabrication method. Taking advantage of the enhanced photogenerated charge separation, large CO2 adsorption, and numerous exposed active sites, SiC/RGO nanocomposite served as the cathode material for the photo-assisted Li–CO2 battery.', 'start index': 284\}
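The `start index` field is the character offset of the answer span within its context, so the answer can be recovered by slicing. A minimal sanity check with a made-up miniature context (not a record from the dataset):

```python
context = "The blended slurry was then cast onto a clean current collector (Al foil for the cathode)."
answer = "Al foil"

# 'start index' in the dataset is the character offset of the answer in the context
start = context.find(answer)

# Recover the answer span by slicing the context at that offset
assert context[start:start + len(answer)] == answer
print(start)
```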
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("batterydata/battery-device-data-qa")
```
Note: in the original BatteryBERT paper, 272 data records were used for evaluation after removing redundant records as well as paragraphs with character length >= 1500. Code is shown below:
```python
import json

with open("answers.json", "r", encoding='utf-8') as f:
    data = json.load(f)

evaluation = []
for point in data['data']:
    paragraphs = point['paragraphs'][0]['context']
    # Keep only paragraphs shorter than 1500 characters
    if len(paragraphs) < 1500:
        qas = point['paragraphs'][0]['qas']
        for indiv in qas:
            try:
                question = indiv['question']
                answer = indiv['answers'][0]['text']
                pairs = (paragraphs, question, answer)
                evaluation.append(pairs)
            except (KeyError, IndexError):
                # Skip records without a usable question/answer pair
                continue
```
# Citation
```
@article{huang2022batterybert,
title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
author={Huang, Shu and Cole, Jacqueline M},
journal={J. Chem. Inf. Model.},
year={2022},
doi={10.1021/acs.jcim.2c00035},
url={DOI:10.1021/acs.jcim.2c00035},
pages={DOI: 10.1021/acs.jcim.2c00035},
publisher={ACS Publications}
}
``` |
tyzhu/find_sent_after_sent_train_200_eval_40_recite | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 2328090
num_examples: 1263
- name: validation
num_bytes: 398145
num_examples: 203
download_size: 534849
dataset_size: 2726235
---
# Dataset Card for "find_sent_after_sent_train_200_eval_40_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cj-mills/hagrid-classification-512p-no-gesture-150k | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': call
'1': dislike
'2': fist
'3': four
'4': like
'5': mute
'6': no_gesture
'7': ok
'8': one
'9': palm
'10': peace
'11': peace_inverted
'12': rock
'13': stop
'14': stop_inverted
'15': three
'16': three2
'17': two_up
'18': two_up_inverted
splits:
- name: train
num_bytes: 3805782529
num_examples: 153735
download_size: 3808743954
dataset_size: 3805782529
license: cc-by-sa-4.0
language:
- en
pretty_name: HaGRID Classification 512p no_gesture 150k
size_categories:
- 100K<n<1M
---
# Dataset Card for "hagrid-classification-512p-no-gesture-150k"
This dataset contains 153,735 training images from [HaGRID](https://github.com/hukenovs/hagrid) (HAnd Gesture Recognition Image Dataset) modified for image classification instead of object detection. The original dataset is 716GB. I created this sample for a tutorial so readers can use the dataset in the free tiers of Google Colab and Kaggle Notebooks.
### Original Authors:
* [Alexander Kapitanov](https://www.linkedin.com/in/hukenovs)
* [Andrey Makhlyarchuk](https://www.linkedin.com/in/makhliarchuk)
* [Karina Kvanchiani](https://www.linkedin.com/in/kvanchiani)
### Original Dataset Links
* [GitHub](https://github.com/hukenovs/hagrid)
* [Kaggle Datasets Page](https://www.kaggle.com/datasets/kapitanov/hagrid)
|
vwxyzjn/ultrachat_200k_filtered_1707920811 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: query_reference_response
list:
- name: content
dtype: string
- name: role
dtype: string
- name: query_reference_response_token
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: query
list:
- name: content
dtype: string
- name: role
dtype: string
- name: query_token
sequence: int64
- name: query_token_len
dtype: int64
- name: reference_response
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: reference_response_token
sequence: int64
- name: reference_response_token_len
dtype: int64
splits:
- name: test_gen
num_bytes: 30484069
num_examples: 1000
- name: test_sft
num_bytes: 39592502
num_examples: 1000
- name: train_gen
num_bytes: 29613744
num_examples: 1000
- name: train_sft
num_bytes: 39521233
num_examples: 1000
download_size: 50859072
dataset_size: 139211548
---
# Args
```python
{'base_model': 'mistralai/Mistral-7B-v0.1',
'check_length_correctness': True,
'debug': True,
'hf_entity': 'vwxyzjn',
'params': TaskQueryHParams(length=3000,
format_str='SUBREDDIT: r/{subreddit}\n'
'\n'
'TITLE: {title}\n'
'\n'
'POST: {post}\n'
'\n'
'TL;DR:',
truncate_field='post',
truncate_text='\n',
padding='pad_token',
pad_token=[32000],
pad_side='left',
max_sft_response_length=1500,
max_sft_query_response_length=4500,
max_rm_response_length=169,
max_rm_query_response_length=638),
'push_to_hub': True}
```
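For reference, the `format_str` in the parameters above is filled per example with Python string formatting. A minimal illustration with made-up field values (the subreddit, title, and post below are hypothetical, not taken from the dataset):

```python
# The query template from TaskQueryHParams above
format_str = (
    "SUBREDDIT: r/{subreddit}\n\n"
    "TITLE: {title}\n\n"
    "POST: {post}\n\n"
    "TL;DR:"
)

# Hypothetical field values, for illustration only
query = format_str.format(
    subreddit="AskScience",
    title="Why is the sky blue?",
    post="Serious question.",
)
print(query)
```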
|
Malvinan/bloom_shuffled_language_modeling | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: language
dtype: string
- name: image_list
sequence: string
- name: annotations
sequence: string
- name: input_token_ids
sequence:
sequence: int64
- name: output_token_ids
sequence:
sequence: int64
splits:
- name: train
num_bytes: 45003135433
num_examples: 2448313
- name: validation
num_bytes: 192416778
num_examples: 10941
download_size: 5761079059
dataset_size: 45195552211
---
# Dataset Card for "bloom_shuffled_language_modeling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wanyu/IteraTeR_human_doc | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR-human-doc
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
|
Vinnyyw/Belinda | ---
license: openrail
---
|
coref-data/phrase_detectives_raw | ---
license: other
configs:
- config_name: conll
data_files:
- split: train
path: conll/train-*
- split: validation
path: conll/validation-*
- config_name: conll_singletons
data_files:
- split: train
path: conll_singletons/train-*
- split: validation
path: conll_singletons/validation-*
- config_name: masxml
data_files:
- split: train
path: masxml/train-*
- split: validation
path: masxml/validation-*
---
# Phrase Detectives Version 3
- Project: https://github.com/dali-ambiguity/Phrase-Detectives-Corpus-3.0
- Data source: https://drive.google.com/file/d/1R72bY6gHyC3amy9VxLjKrougJUxhY_HK/view?usp=sharing
## Details
The Phrase Detectives Corpus v3 is publicly distributed. License: LDC User Agreement for Non-Members (v1 and v2).
## Citation
```
@inproceedings{yu-etal-2023-aggregating,
title = "Aggregating Crowdsourced and Automatic Judgments to Scale Up a Corpus of Anaphoric Reference for Fiction and {W}ikipedia Texts",
author = "Yu, Juntao and
Paun, Silviu and
Camilleri, Maris and
Garcia, Paloma and
Chamberlain, Jon and
Kruschwitz, Udo and
Poesio, Massimo",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.eacl-main.54",
doi = "10.18653/v1/2023.eacl-main.54",
pages = "767--781",
abstract = "Although several datasets annotated for anaphoric reference / coreference exist, even the largest such datasets have limitations in term of size, range of domains, coverage of anaphoric phenomena, and size of documents included. Yet, the approaches proposed to scale up anaphoric annotation haven{'}t so far resulted in datasets overcoming these limitations. In this paper, we introduce a new release of a corpus for anaphoric reference labelled via a game-with-a-purpose. This new release is comparable in size to the largest existing corpora for anaphoric reference due in part to substantial activity by the players, in part thanks to the use of a new resolve-and-aggregate paradigm to {`}complete{'} markable annotations through the combination of an anaphoric resolver and an aggregation method for anaphoric reference. The proposed method could be adopted to greatly speed up annotation time in other projects involving games-with-a-purpose. In addition, the corpus covers genres for which no comparable size datasets exist (Fiction and Wikipedia); it covers singletons and non-referring expressions; and it includes a substantial number of long documents ( 2K in length).",
}
``` |
autoevaluate/autoeval-staging-eval-project-cestwc__cnn_dailymail-test50-b9fb5faf-11395515 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cestwc/cnn_dailymail-test50
eval_info:
task: summarization
model: facebook/bart-large-cnn
metrics: []
dataset_name: cestwc/cnn_dailymail-test50
dataset_config: cestwc--cnn_dailymail-test50
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: cestwc/cnn_dailymail-test50
* Config: cestwc--cnn_dailymail-test50
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Buckeyes2019](https://huggingface.co/Buckeyes2019) for evaluating this model. |
haisonle001/cmc_dedup | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8266460422
num_examples: 429350
download_size: 2814231645
dataset_size: 8266460422
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mike0307/simclue-zh-tw | ---
dataset_info:
features:
- name: text1
dtype: string
- name: text2
dtype: string
- name: label
dtype: int64
- name: similarity
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 262368640
num_examples: 1307687
- name: test
num_bytes: 29409397
num_examples: 147115
- name: validate
num_bytes: 36060224
num_examples: 179807
download_size: 244981269
dataset_size: 327838261
---
# Dataset Card for "simclue-zh-tw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
skeskinen/TinyStories-GPT3.5 | ---
dataset_info:
features:
- name: story
dtype: string
- name: summary
dtype: string
- name: source
dtype: string
- name: prompt
dtype: string
- name: words
sequence: string
- name: features
sequence: string
splits:
- name: train
num_bytes: 2837432460
num_examples: 2222513
download_size: 1125071371
dataset_size: 2837432460
---
# Dataset Card for "TinyStories-GPT3.5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
howard-hou/COCO-Text | ---
dataset_info:
features:
- name: image
dtype: image
- name: coco_file_name
dtype: string
- name: image_id
dtype: string
- name: caption
sequence: string
- name: ocr_tokens
sequence: string
- name: ocr_info
list:
- name: word
dtype: string
- name: bounding_box
struct:
- name: width
dtype: float64
- name: height
dtype: float64
- name: top_left_x
dtype: float64
- name: top_left_y
dtype: float64
- name: image_width
dtype: int64
- name: image_height
dtype: int64
splits:
- name: train
num_bytes: 2230879987.67
num_examples: 13097
- name: validation
num_bytes: 526583286.88
num_examples: 3074
download_size: 259904361
dataset_size: 2757463274.55
---
# Dataset Card for "COCO-Text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DewiBrynJones/banc-trawsgrifiadau-bangor-translations | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
splits:
- name: test
num_bytes: 392450804.0
num_examples: 500
download_size: 381222474
dataset_size: 392450804.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
CyberHarem/kitashirakawa_chiyuri_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kitashirakawa_chiyuri/北白河ちゆり (Touhou)
This is the dataset of kitashirakawa_chiyuri/北白河ちゆり (Touhou), containing 151 images and their tags.
The core tags of this character are `blonde_hair, twintails, hat, sailor_hat, yellow_eyes, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 151 | 130.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 151 | 86.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 299 | 171.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 151 | 118.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 299 | 225.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kitashirakawa_chiyuri_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, blue_sailor_collar, solo, white_shorts, midriff, navel, smile, open_mouth |
| 1 | 7 |  |  |  |  |  | 2girls, blue_sailor_collar, midriff, red_hair, short_hair, shorts, navel, folding_chair, smile |
| 2 | 7 |  |  |  |  |  | 1girl, blue_sailor_collar, medium_hair, sailor_shirt, solo, white_shirt, bangs, blue_neckerchief, blush, upper_body, looking_at_viewer, simple_background, anchor_symbol, happy, white_background, closed_mouth, grin, puffy_short_sleeves |
| 3 | 7 |  |  |  |  |  | 1girl, blue_sailor_collar, midriff, open_mouth, puffy_short_sleeves, sailor_shirt, solo, white_shirt, white_shorts, anchor_symbol, medium_hair, blue_neckerchief, navel, smile, stomach, blush, folding_chair, happy, looking_at_viewer |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_sailor_collar | solo | white_shorts | midriff | navel | smile | open_mouth | 2girls | red_hair | short_hair | shorts | folding_chair | medium_hair | sailor_shirt | white_shirt | bangs | blue_neckerchief | blush | upper_body | looking_at_viewer | simple_background | anchor_symbol | happy | white_background | closed_mouth | grin | puffy_short_sleeves | stomach |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------------|:-------|:---------------|:----------|:--------|:--------|:-------------|:---------|:-----------|:-------------|:---------|:----------------|:--------------|:---------------|:--------------|:--------|:-------------------|:--------|:-------------|:--------------------|:--------------------|:----------------|:--------|:-------------------|:---------------|:-------|:----------------------|:----------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | | X | | | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | |
| 3 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | X | X | X | X | | X | X | | X | | X | X | | | | X | X |
|
dmrau/cqudubstack-programmers | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 63785
num_examples: 876
- name: corpus
num_bytes: 32727262
num_examples: 32176
download_size: 19360000
dataset_size: 32791047
---
# Dataset Card for "cqudubstack-programmers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pixel-coping/pubmed_derived | ---
configs:
- config_name: default
data_files:
- split: pubmed
path: data/pubmed-*
- split: nonbiomedical
path: data/nonbiomedical-*
- split: counterfactual
path: data/counterfactual-*
- split: casual
path: data/casual-*
- split: rap
path: data/rap-*
dataset_info:
features:
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
splits:
- name: pubmed
num_bytes: 1166668
num_examples: 1000
- name: nonbiomedical
num_bytes: 1141909
num_examples: 1000
- name: counterfactual
num_bytes: 1179347
num_examples: 991
- name: casual
num_bytes: 1205949
num_examples: 1000
- name: rap
num_bytes: 1252260
num_examples: 1000
download_size: 3357032
dataset_size: 5946133
language:
- en
---
# A corpus of rewritten pubmed abstracts
This corpus contains a 1k example subset from the [pubmed](https://huggingface.co/datasets/pubmed) corpus and various rewritten versions. Each rewritten version changes one aspect of the original text and keeps other aspects unchanged as much as possible.
- **Paper:** [Dissecting learning and forgetting in language model finetuning](link pending)
Another corpus of rewritten general text is provided here: [c4_derived](https://huggingface.co/datasets/pixel-coping/c4_derived)
### Data Splits
- pubmed: a 1k example subset from the original pubmed corpus
- nonbiomedical: main topic of the text changed to a nonbiomedical topic
- counterfactual: factual knowledge in the text replaced by incorrect facts
- casual: style of the text changed to a casual style
- rap: style of the text changed to a rap style
## Dataset Creation
Text is generated by ChatGPT with corresponding prompts. Refer to the paper for the instructions used to generate text in each derived subset.
Please check the terms and conditions of pubmed data [here](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).
### Citation Information
```
pending
``` |
SEACrowd/id_hatespeech | ---
license: unknown
tags:
- sentiment-analysis
language:
- ind
---
# id_hatespeech
The ID Hatespeech dataset is a collection of 713 tweets related to a political event, the Jakarta Governor Election 2017,
designed for the hate speech detection NLP task. The dataset was crawled from Twitter, then filtered
and annotated manually. Each tweet is labelled with one of two classes: HS if the tweet contains hate speech and Non_HS otherwise.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{inproceedings,
author = {Alfina, Ika and Mulia, Rio and Fanany, Mohamad Ivan and Ekanata, Yudo},
year = {2017},
month = {10},
pages = {},
title = {Hate Speech Detection in the Indonesian Language: A Dataset and Preliminary Study},
doi = {10.1109/ICACSIS.2017.8355039}
}
```
## License
Unknown
## Homepage
[https://www.researchgate.net/publication/320131169_Hate_Speech_Detection_in_the_Indonesian_Language_A_Dataset_and_Preliminary_Study](https://www.researchgate.net/publication/320131169_Hate_Speech_Detection_in_the_Indonesian_Language_A_Dataset_and_Preliminary_Study)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
quarel | ---
language:
- en
paperswithcode_id: quarel
pretty_name: QuaRel
dataset_info:
features:
- name: id
dtype: string
- name: answer_index
dtype: int32
- name: logical_forms
sequence: string
- name: logical_form_pretty
dtype: string
- name: world_literals
sequence:
- name: world1
dtype: string
- name: world2
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 1072874
num_examples: 1941
- name: test
num_bytes: 307588
num_examples: 552
- name: validation
num_bytes: 154308
num_examples: 278
download_size: 631370
dataset_size: 1534770
---
# Dataset Card for "quarel"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/quarel](https://allenai.org/data/quarel)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.63 MB
- **Size of the generated dataset:** 1.53 MB
- **Total amount of disk used:** 2.17 MB
### Dataset Summary
QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.63 MB
- **Size of the generated dataset:** 1.53 MB
- **Total amount of disk used:** 2.17 MB
An example of 'train' looks as follows.
```
{
"answer_index": 0,
"id": "QuaRel_V1_B5_1403",
"logical_form_pretty": "qrel(time, lower, world1) -> qrel(distance, higher, world2) ; qrel(distance, higher, world1)",
"logical_forms": ["(infer (time lower world1) (distance higher world2) (distance higher world1))", "(infer (time lower world2) (distance higher world1) (distance higher world2))"],
"question": "John and Rita are going for a run. Rita gets tired and takes a break on the park bench. After twenty minutes in the park, who has run farther? (A) John (B) Rita",
"world_literals": {
"world1": ["Rita"],
"world2": ["John"]
}
}
```
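For instance, the answer options embedded in the `question` string can be pulled out with a small regular expression and indexed by `answer_index`. This is an illustrative helper, not part of the dataset loader:

```python
import re

# The 'train' example shown above
question = (
    "John and Rita are going for a run. Rita gets tired and takes a break "
    "on the park bench. After twenty minutes in the park, who has run "
    "farther? (A) John (B) Rita"
)
answer_index = 0

# Extract the texts of options (A) and (B) from the question string
options = re.findall(r"\([AB]\)\s*([^()]+?)(?=\s*\([AB]\)|$)", question)
print(options)
print(options[answer_index])
```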
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `answer_index`: a `int32` feature.
- `logical_forms`: a `list` of `string` features.
- `logical_form_pretty`: a `string` feature.
- `world_literals`: a dictionary feature containing:
- `world1`: a `string` feature.
- `world2`: a `string` feature.
- `question`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 1941| 278| 552|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{quarel_v1,
title={QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships},
author={Oyvind Tafjord and Peter Clark and Matt Gardner and Wen-tau Yih and Ashish Sabharwal},
year={2018},
journal={arXiv:1805.05377v1}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
MikkelONielsen/neuro_patents_bds | ---
license: mit
dataset_info:
features:
- name: appln_id
dtype: int64
- name: appln_filing_date
dtype: string
- name: docdb_family_id
dtype: int64
- name: granted
dtype: string
- name: appln_abstract
dtype: string
- name: appln_abstract_lg
dtype: string
- name: appln_title
dtype: string
- name: applt_coun
dtype: string
- name: invt_coun
dtype: string
- name: cpc
dtype: string
- name: ipc
sequence: string
- name: __index_level_0__
dtype: int64
- name: input
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 13256.4
num_examples: 6
download_size: 31103
dataset_size: 13256.4
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FutureMa/realhouse | ---
license: apache-2.0
---
|
yejeekang/ko_legal_instruction | ---
license: afl-3.0
---
|
xianbao/test-dataset-1 | ---
license: apache-2.0
---
|
CyberHarem/hayashio_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hayashio (Kantai Collection)
This is the dataset of hayashio (Kantai Collection), containing 180 images and their tags.
The core tags of this character are `black_hair, long_hair, brown_eyes, mole, mole_under_eye, blue_ribbon, ribbon, neck_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 180 | 166.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayashio_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 180 | 107.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayashio_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 418 | 224.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayashio_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 180 | 153.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayashio_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 418 | 298.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hayashio_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hayashio_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 16 |  |  |  |  |  | 1girl, black_vest, short_sleeves, solo, white_shirt, black_skirt, pleated_skirt, school_uniform, simple_background, white_background, white_gloves, looking_at_viewer, cowboy_shot, smile, blush, collared_shirt, red_eyes |
| 1 | 10 |  |  |  |  |  | 1girl, black_skirt, black_vest, kneehighs, pleated_skirt, school_uniform, short_sleeves, white_shirt, brown_footwear, loafers, white_gloves, black_socks, full_body, solo, red_eyes, smile, cannon, collared_shirt, simple_background, standing, machinery, turret, white_background |
| 2 | 7 |  |  |  |  |  | 1girl, black_vest, looking_at_viewer, solo, upper_body, white_shirt, short_sleeves, orange_eyes, blush, grin, school_uniform, dress_shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_vest | short_sleeves | solo | white_shirt | black_skirt | pleated_skirt | school_uniform | simple_background | white_background | white_gloves | looking_at_viewer | cowboy_shot | smile | blush | collared_shirt | red_eyes | kneehighs | brown_footwear | loafers | black_socks | full_body | cannon | standing | machinery | turret | upper_body | orange_eyes | grin | dress_shirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:----------------|:-------|:--------------|:--------------|:----------------|:-----------------|:--------------------|:-------------------|:---------------|:--------------------|:--------------|:--------|:--------|:-----------------|:-----------|:------------|:-----------------|:----------|:--------------|:------------|:---------|:-----------|:------------|:---------|:-------------|:--------------|:-------|:--------------|
| 0 | 16 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | X | | | X | | | | X | | | X | | | | | | | | | | | | X | X | X | X |
|
inkoziev/paraphrases | ---
license: cc-by-nc-4.0
language:
- ru
language_creators:
- expert-generated
task_categories:
- sentence-similarity
- text2text-generation
task_ids:
- semantic-similarity-classification
---
# Dataset of paraphrases of short phrases (chitchat + poetry)
The dataset contains correct and incorrect paraphrases of short dialogue utterances ([dialogue system project](https://github.com/Koziev/chatbot))
and of poem fragments ([generative poetry project](https://github.com/Koziev/verslibre)).
The dataset is a list of sample tuples. Each sample consists of two lists:
```paraphrases``` - examples of correct paraphrases
```distractors``` - examples of incorrect paraphrases
The dataset is used to build the [sbert_synonymy paraphrase detector](https://huggingface.co/inkoziev/sbert_synonymy)
and the [generative poetic paraphraser](https://huggingface.co/inkoziev/paraphraser) models.
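The two lists in each sample can be turned into labeled training pairs for a paraphrase detector. A minimal sketch — the sample below is illustrative, not a record copied from the dataset:

```python
from itertools import combinations

# Illustrative sample in the dataset's format: correct and incorrect paraphrases.
sample = {
    "paraphrases": ["Помолчи", "Дружище, не говори ни слова!"],
    "distractors": ["Кричи громче!"],
}

# Positive pairs: every combination of two correct paraphrases.
pairs = [(a, b, 1) for a, b in combinations(sample["paraphrases"], 2)]
# Negative pairs: each correct paraphrase against each distractor.
pairs += [(p, d, 0) for p in sample["paraphrases"] for d in sample["distractors"]]
print(len(pairs))  # -> 3
```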
## Disclaimer
The paraphrases in this dataset are deliberately allowed to be semantically non-conservative within certain limits.
For example, the pair "_Помолчи_" ("Be quiet") and "_Дружище, не говори ни слова!_" ("Buddy, don't say a word!") counts as a correct paraphrase. Since the paraphraser
is used in the generative poetry project to build datasets, the data contains a number of metaphorical
and fairly loose paraphrases. These properties may make the dataset, and models
built on it, unsuitable for your projects.
## Other paraphrase datasets
When training models, you can combine this dataset with data from other paraphrase datasets, for example [tapaco](https://huggingface.co/datasets/tapaco).
|
Eloquent/Voight-Kampff | ---
license: cc-by-nc-sa-4.0
---
|
johannes-garstenauer/embeddings_from_distilbert_masking_heaps_and_eval_part0 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
- name: pred
dtype: int64
- name: cls_layer_6
sequence: float32
- name: cls_layer_5
sequence: float32
- name: cls_layer_4
sequence: float32
splits:
- name: train
num_bytes: 1282993344
num_examples: 134592
download_size: 1493342036
dataset_size: 1282993344
---
# Dataset Card for "embeddings_from_distilbert_masking_heaps_and_eval_part0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wecover/OPUS_Tatoeba | ---
configs:
- config_name: default
data_files:
- split: train
path: '*/*/train.parquet'
- split: valid
path: '*/*/valid.parquet'
- config_name: af
data_files:
- split: train
path: '*/*af*/train.parquet'
- split: valid
path: '*/*af*/valid.parquet'
- config_name: ar
data_files:
- split: train
path: '*/*ar*/train.parquet'
- split: valid
path: '*/*ar*/valid.parquet'
- config_name: ca
data_files:
- split: train
path: '*/*ca*/train.parquet'
- split: valid
path: '*/*ca*/valid.parquet'
- config_name: cs
data_files:
- split: train
path: '*/*cs*/train.parquet'
- split: valid
path: '*/*cs*/valid.parquet'
- config_name: de
data_files:
- split: train
path: '*/*de*/train.parquet'
- split: valid
path: '*/*de*/valid.parquet'
- config_name: en
data_files:
- split: train
path: '*/*en*/train.parquet'
- split: valid
path: '*/*en*/valid.parquet'
- config_name: eo
data_files:
- split: train
path: '*/*eo*/train.parquet'
- split: valid
path: '*/*eo*/valid.parquet'
- config_name: es
data_files:
- split: train
path: '*/*es*/train.parquet'
- split: valid
path: '*/*es*/valid.parquet'
- config_name: fi
data_files:
- split: train
path: '*/*fi*/train.parquet'
- split: valid
path: '*/*fi*/valid.parquet'
- config_name: fr
data_files:
- split: train
path: '*/*fr*/train.parquet'
- split: valid
path: '*/*fr*/valid.parquet'
- config_name: ga
data_files:
- split: train
path: '*/*ga*/train.parquet'
- split: valid
path: '*/*ga*/valid.parquet'
- config_name: it
data_files:
- split: train
path: '*/*it*/train.parquet'
- split: valid
path: '*/*it*/valid.parquet'
- config_name: ja
data_files:
- split: train
path: '*/*ja*/train.parquet'
- split: valid
path: '*/*ja*/valid.parquet'
- config_name: la
data_files:
- split: train
path: '*/*la*/train.parquet'
- split: valid
path: '*/*la*/valid.parquet'
- config_name: nl
data_files:
- split: train
path: '*/*nl*/train.parquet'
- split: valid
path: '*/*nl*/valid.parquet'
- config_name: pl
data_files:
- split: train
path: '*/*pl*/train.parquet'
- split: valid
path: '*/*pl*/valid.parquet'
- config_name: pt
data_files:
- split: train
path: '*/*pt*/train.parquet'
- split: valid
path: '*/*pt*/valid.parquet'
- config_name: ro
data_files:
- split: train
path: '*/*ro*/train.parquet'
- split: valid
path: '*/*ro*/valid.parquet'
- config_name: ru
data_files:
- split: train
path: '*/*ru*/train.parquet'
- split: valid
path: '*/*ru*/valid.parquet'
- config_name: sv
data_files:
- split: train
path: '*/*sv*/train.parquet'
- split: valid
path: '*/*sv*/valid.parquet'
- config_name: tr
data_files:
- split: train
path: '*/*tr*/train.parquet'
- split: valid
path: '*/*tr*/valid.parquet'
- config_name: uk
data_files:
- split: train
path: '*/*uk*/train.parquet'
- split: valid
path: '*/*uk*/valid.parquet'
- config_name: xh
data_files:
- split: train
path: '*/*xh*/train.parquet'
- split: valid
path: '*/*xh*/valid.parquet'
- config_name: yi
data_files:
- split: train
path: '*/*yi*/train.parquet'
- split: valid
path: '*/*yi*/valid.parquet'
- config_name: am
data_files:
- split: train
path: '*/*am*/train.parquet'
- split: valid
path: '*/*am*/valid.parquet'
- config_name: bg
data_files:
- split: train
path: '*/*bg*/train.parquet'
- split: valid
path: '*/*bg*/valid.parquet'
- config_name: da
data_files:
- split: train
path: '*/*da*/train.parquet'
- split: valid
path: '*/*da*/valid.parquet'
- config_name: el
data_files:
- split: train
path: '*/*el*/train.parquet'
- split: valid
path: '*/*el*/valid.parquet'
- config_name: he
data_files:
- split: train
path: '*/*he*/train.parquet'
- split: valid
path: '*/*he*/valid.parquet'
- config_name: hu
data_files:
- split: train
path: '*/*hu*/train.parquet'
- split: valid
path: '*/*hu*/valid.parquet'
- config_name: ko
data_files:
- split: train
path: '*/*ko*/train.parquet'
- split: valid
path: '*/*ko*/valid.parquet'
- config_name: ku
data_files:
- split: train
path: '*/*ku*/train.parquet'
- split: valid
path: '*/*ku*/valid.parquet'
- config_name: lt
data_files:
- split: train
path: '*/*lt*/train.parquet'
- split: valid
path: '*/*lt*/valid.parquet'
- config_name: mk
data_files:
- split: train
path: '*/*mk*/train.parquet'
- split: valid
path: '*/*mk*/valid.parquet'
- config_name: ug
data_files:
- split: train
path: '*/*ug*/train.parquet'
- split: valid
path: '*/*ug*/valid.parquet'
- config_name: ur
data_files:
- split: train
path: '*/*ur*/train.parquet'
- split: valid
path: '*/*ur*/valid.parquet'
- config_name: as
data_files:
- split: train
path: '*/*as*/train.parquet'
- split: valid
path: '*/*as*/valid.parquet'
- config_name: bn
data_files:
- split: train
path: '*/*bn*/train.parquet'
- split: valid
path: '*/*bn*/valid.parquet'
- config_name: hi
data_files:
- split: train
path: '*/*hi*/train.parquet'
- split: valid
path: '*/*hi*/valid.parquet'
- config_name: az
data_files:
- split: train
path: '*/*az*/train.parquet'
- split: valid
path: '*/*az*/valid.parquet'
- config_name: kk
data_files:
- split: train
path: '*/*kk*/train.parquet'
- split: valid
path: '*/*kk*/valid.parquet'
- config_name: be
data_files:
- split: train
path: '*/*be*/train.parquet'
- split: valid
path: '*/*be*/valid.parquet'
- config_name: et
data_files:
- split: train
path: '*/*et*/train.parquet'
- split: valid
path: '*/*et*/valid.parquet'
- config_name: sl
data_files:
- split: train
path: '*/*sl*/train.parquet'
- split: valid
path: '*/*sl*/valid.parquet'
- config_name: sr
data_files:
- split: train
path: '*/*sr*/train.parquet'
- split: valid
path: '*/*sr*/valid.parquet'
- config_name: vi
data_files:
- split: train
path: '*/*vi*/train.parquet'
- split: valid
path: '*/*vi*/valid.parquet'
- config_name: id
data_files:
- split: train
path: '*/*id*/train.parquet'
- split: valid
path: '*/*id*/valid.parquet'
- config_name: br
data_files:
- split: train
path: '*/*br*/train.parquet'
- split: valid
path: '*/*br*/valid.parquet'
- config_name: bs
data_files:
- split: train
path: '*/*bs*/train.parquet'
- split: valid
path: '*/*bs*/valid.parquet'
- config_name: hr
data_files:
- split: train
path: '*/*hr*/train.parquet'
- split: valid
path: '*/*hr*/valid.parquet'
- config_name: gl
data_files:
- split: train
path: '*/*gl*/train.parquet'
- split: valid
path: '*/*gl*/valid.parquet'
- config_name: fy
data_files:
- split: train
path: '*/*fy*/train.parquet'
- split: valid
path: '*/*fy*/valid.parquet'
- config_name: ka
data_files:
- split: train
path: '*/*ka*/train.parquet'
- split: valid
path: '*/*ka*/valid.parquet'
- config_name: tl
data_files:
- split: train
path: '*/*tl*/train.parquet'
- split: valid
path: '*/*tl*/valid.parquet'
- config_name: cy
data_files:
- split: train
path: '*/*cy*/train.parquet'
- split: valid
path: '*/*cy*/valid.parquet'
- config_name: is
data_files:
- split: train
path: '*/*is*/train.parquet'
- split: valid
path: '*/*is*/valid.parquet'
- config_name: eu
data_files:
- split: train
path: '*/*eu*/train.parquet'
- split: valid
path: '*/*eu*/valid.parquet'
- config_name: gd
data_files:
- split: train
path: '*/*gd*/train.parquet'
- split: valid
path: '*/*gd*/valid.parquet'
- config_name: ha
data_files:
- split: train
path: '*/*ha*/train.parquet'
- split: valid
path: '*/*ha*/valid.parquet'
- config_name: hy
data_files:
- split: train
path: '*/*hy*/train.parquet'
- split: valid
path: '*/*hy*/valid.parquet'
- config_name: km
data_files:
- split: train
path: '*/*km*/train.parquet'
- split: valid
path: '*/*km*/valid.parquet'
- config_name: ky
data_files:
- split: train
path: '*/*ky*/train.parquet'
- split: valid
path: '*/*ky*/valid.parquet'
- config_name: mn
data_files:
- split: train
path: '*/*mn*/train.parquet'
- split: valid
path: '*/*mn*/valid.parquet'
- config_name: mr
data_files:
- split: train
path: '*/*mr*/train.parquet'
- split: valid
path: '*/*mr*/valid.parquet'
- config_name: my
data_files:
- split: train
path: '*/*my*/train.parquet'
- split: valid
path: '*/*my*/valid.parquet'
- config_name: th
data_files:
- split: train
path: '*/*th*/train.parquet'
- split: valid
path: '*/*th*/valid.parquet'
- config_name: uz
data_files:
- split: train
path: '*/*uz*/train.parquet'
- split: valid
path: '*/*uz*/valid.parquet'
- config_name: jv
data_files:
- split: train
path: '*/*jv*/train.parquet'
- split: valid
path: '*/*jv*/valid.parquet'
- config_name: kn
data_files:
- split: train
path: '*/*kn*/train.parquet'
- split: valid
path: '*/*kn*/valid.parquet'
- config_name: lo
data_files:
- split: train
path: '*/*lo*/train.parquet'
- split: valid
path: '*/*lo*/valid.parquet'
- config_name: mg
data_files:
- split: train
path: '*/*mg*/train.parquet'
- split: valid
path: '*/*mg*/valid.parquet'
- config_name: ml
data_files:
- split: train
path: '*/*ml*/train.parquet'
- split: valid
path: '*/*ml*/valid.parquet'
- config_name: or
data_files:
- split: train
path: '*/*or*/train.parquet'
- split: valid
path: '*/*or*/valid.parquet'
- config_name: pa
data_files:
- split: train
path: '*/*pa*/train.parquet'
- split: valid
path: '*/*pa*/valid.parquet'
- config_name: ps
data_files:
- split: train
path: '*/*ps*/train.parquet'
- split: valid
path: '*/*ps*/valid.parquet'
- config_name: sa
data_files:
- split: train
path: '*/*sa*/train.parquet'
- split: valid
path: '*/*sa*/valid.parquet'
- config_name: sd
data_files:
- split: train
path: '*/*sd*/train.parquet'
- config_name: si
data_files:
- split: train
path: '*/*si*/train.parquet'
- split: valid
path: '*/*si*/valid.parquet'
- config_name: so
data_files:
- split: train
path: '*/*so*/train.parquet'
- split: valid
path: '*/*so*/valid.parquet'
- config_name: sq
data_files:
- split: train
path: '*/*sq*/train.parquet'
- split: valid
path: '*/*sq*/valid.parquet'
- config_name: su
data_files:
- split: train
path: '*/*su*/train.parquet'
- split: valid
path: '*/*su*/valid.parquet'
- config_name: ta
data_files:
- split: train
path: '*/*ta*/train.parquet'
- split: valid
path: '*/*ta*/valid.parquet'
- config_name: te
data_files:
- split: train
path: '*/*te*/train.parquet'
- split: valid
path: '*/*te*/valid.parquet'
---
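Each `config_name` above selects parquet files through a glob over the repository layout. A sketch of how such a pattern matches — the paths below are illustrative, not the repository's actual directory names, and Python's `fnmatch` is used only as an approximation (its `*` can also cross `/`, unlike some glob implementations):

```python
from fnmatch import fnmatch

# Illustrative file paths in a '<corpus>/<lang-pair>/<split>.parquet' layout.
paths = [
    "tatoeba/en-fr/train.parquet",
    "tatoeba/de-en/train.parquet",
    "tatoeba/en-fr/valid.parquet",
]

# The 'fr' config uses the pattern '*/*fr*/train.parquet' for its train split.
fr_train = [p for p in paths if fnmatch(p, "*/*fr*/train.parquet")]
print(fr_train)  # -> ['tatoeba/en-fr/train.parquet']
```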
|
jlbaker361/actstu-gsdf-counterfeit-50 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: seed
dtype: int64
- name: steps
dtype: int64
splits:
- name: train
num_bytes: 11797749.0
num_examples: 28
download_size: 11799351
dataset_size: 11797749.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bhavnicksm/PokemonCardsPlus | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: card_image
dtype: string
- name: pokemon_image
dtype: string
- name: caption
dtype: string
- name: pokemon_intro
dtype: string
- name: pokedex_text
dtype: string
- name: hp
dtype: int64
- name: set_name
dtype: string
- name: blip_caption
dtype: string
splits:
- name: train
num_bytes: 39075923
num_examples: 13139
download_size: 8210056
dataset_size: 39075923
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "PokemonCardsPlus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/augmentatio-standardized_cluster_6 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 27911818
num_examples: 2753
download_size: 7524422
dataset_size: 27911818
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "augmentatio-standardized_cluster_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maxidl/Capybara-de | ---
dataset_info:
features:
- name: source
dtype: string
- name: messages_en
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages_de
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 156495658
num_examples: 15991
download_size: 80194829
dataset_size: 156495658
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- de
- en
size_categories:
- 10K<n<100K
---
German version of [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara). Translated using DeepL (informal style).
|lang|#chars|
|---|---|
|en|71_102_832|
|de|81_422_005| |
herisan/mental_health_counseling_conversations | ---
dataset_info:
features:
- name: Context
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 4643156
num_examples: 3512
download_size: 2451127
dataset_size: 4643156
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nexdata/5147_Images_Japanese_Handwriting_OCR_data | ---
license: cc-by-nc-nd-4.0
---
## Description
5,147 Images Japanese Handwriting OCR Data. The text carriers are A4 paper, lined paper, quadrille paper, etc. The capture device is a cellphone, and the collection angle is eye-level. The dataset content includes Japanese compositions, poetry, prose, news, stories, etc. For annotation, line-level quadrilateral bounding boxes and transcriptions of the texts were annotated. The dataset can be used for tasks such as Japanese handwriting OCR.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1296?source=Huggingface
## Data size
5,147 images
## Population distribution
gender distribution: 244 males, 304 females; age distribution: 2 people under 18 years old, 494 people aged from 18 to 45 years old, 50 people aged from 46 to 60, 2 people over 60 years old; nationality distribution: Japan
## Collecting environment
A4 paper, lined paper, quadrille paper, etc.
## Device
cellphone
## Photographic angle
eye-level angle
## Data format
the image data format is .jpg; the annotation file format is .json
## Data content
including Japanese composition, poetry, prose, news, stories, etc.
## Annotation content
line-level quadrilateral bounding box annotation and transcription for the texts
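As a sketch of what consuming such annotations might look like, one could convert each quadrilateral to an axis-aligned bounding box. The field names below are hypothetical, since the actual .json schema is not documented here:

```python
import json

# Hypothetical annotation record; the real schema of the .json files may differ.
ann_json = '''
{"lines": [{"points": [[10, 20], [200, 22], [198, 60], [8, 58]],
            "transcription": "今日はいい天気です"}]}
'''
ann = json.loads(ann_json)

for line in ann["lines"]:
    xs = [x for x, _ in line["points"]]
    ys = [y for _, y in line["points"]]
    # Axis-aligned envelope (xmin, ymin, xmax, ymax) of the quadrilateral.
    bbox = (min(xs), min(ys), max(xs), max(ys))
    print(bbox, line["transcription"])  # -> (8, 20, 200, 60) ...
```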
## Accuracy
the collection content accuracy is not less than 97%; the text transcription accuracy is not less than 97%
# Licensing Information
Commercial License
|
FunDialogues/customer-service-grocery-cashier | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- fictitious dialogues
- prototyping
- customer service
pretty_name: customer-service-grocery-cashier
size_categories:
- n<1K
---
# This Dialogue
This dataset is composed of fictitious dialogues between a customer at a grocery store and the cashier. Check out the example below:
```
"id": 1,
"description": "Price inquiry",
"dialogue": "Customer: Excuse me, could you tell me the price of the apples per pound? Cashier: Certainly! The price for the apples is $1.99 per pound."
```
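The `dialogue` field stores all turns in a single string. A minimal sketch of splitting it into (speaker, utterance) turns, using the speaker labels from the example above:

```python
import re

dialogue = ("Customer: Excuse me, could you tell me the price of the apples per pound? "
            "Cashier: Certainly! The price for the apples is $1.99 per pound.")

# Split on the "Customer:" / "Cashier:" speaker labels while keeping them.
parts = re.split(r"(Customer:|Cashier:)", dialogue)
turns = [(parts[i].rstrip(":"), parts[i + 1].strip()) for i in range(1, len(parts), 2)]
print(turns[0][0])  # -> Customer
```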
# How to Load Dialogues
Loading dialogues can be accomplished using the fun dialogues library or Hugging Face datasets library.
## Load using fun dialogues
1. Install fun dialogues package
`pip install fundialogues`
2. Use the loader utility to load the dataset as a pandas dataframe. Further processing might be required for use.
```
from fundialogues import dialoader
# load as pandas dataframe
grocery_cashier = dialoader("FunDialogues/customer-service-grocery-cashier")
```
## Loading using Hugging Face datasets
1. Install datasets package
2. Load using datasets
```
from datasets import load_dataset
dataset = load_dataset("FunDialogues/customer-service-grocery-cashier")
```
## How to Contribute
If you want to contribute to this project and make it better, your help is very welcome. Contributing is also a great way to learn more about social coding on Github, new technologies and their ecosystems, and how to make constructive, helpful bug reports, feature requests and the noblest of all contributions: a good, clean pull request.
### Contributing your own Dialogues
If you want to contribute to an existing dialogue or add a new dialogue, please open an issue and I will follow up with you ASAP!
### Implementing Patches and Bug Fixes
- Create a personal fork of the project on Github.
- Clone the fork on your local machine. Your remote repo on Github is called origin.
- Add the original repository as a remote called upstream.
- If you created your fork a while ago be sure to pull upstream changes into your local repository.
- Create a new branch to work on! Branch from develop if it exists, else from master.
- Implement/fix your feature, comment your code.
- Follow the code style of the project, including indentation.
- If the component has tests run them!
- Write or adapt tests as needed.
- Add or change the documentation as needed.
- Squash your commits into a single commit with git's interactive rebase. Create a new branch if necessary.
- Push your branch to your fork on Github, the remote origin.
- From your fork open a pull request in the correct branch. Target the project's develop branch if there is one, else go for master!
If the maintainer requests further changes just push them to your branch. The PR will be updated automatically.
Once the pull request is approved and merged you can pull the changes from upstream to your local repo and delete your extra branch(es).
And last but not least: Always write your commit messages in the present tense. Your commit message should describe what the commit, when applied, does to the code – not what you did to the code.
# Disclaimer
The dialogues contained in this repository are provided for experimental purposes only. It is important to note that these dialogues are assumed to be original work by a human and are entirely fictitious, despite the possibility of some examples including factually correct information. The primary intention behind these dialogues is to serve as a tool for language modeling experimentation and should not be used for designing real-world products beyond non-production prototyping.
Please be aware that the utilization of fictitious data in these datasets may increase the likelihood of language model artifacts, such as hallucinations or unrealistic responses. Therefore, it is essential to exercise caution and discretion when employing these datasets for any purpose.
It is crucial to emphasize that none of the scenarios described in the fun dialogues dataset should be relied upon to provide advice or guidance to humans. These scenarios are purely fictitious and are intended solely for demonstration purposes. Any resemblance to real-world situations or individuals is entirely coincidental.
The responsibility for the usage and application of these datasets rests solely with the individual or entity employing them. By accessing and utilizing these dialogues and all contents of the repository, you acknowledge that you have read and understood this disclaimer, and you agree to use them at your own discretion and risk. |
dvijay/guanaco-oa-formatted | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24308056
num_examples: 9846
download_size: 14243346
dataset_size: 24308056
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nikutka/L1_scraped_korpus_wzorcowy | ---
dataset_info:
features:
- name: content
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 4838134
num_examples: 29488
- name: test
num_bytes: 1207567
num_examples: 7372
download_size: 4332711
dataset_size: 6045701
---
# Dataset Card for "L1_scraped_korpus_wzorcowy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YunqiLI/test | ---
license: bigscience-openrail-m
language:
- en
tags:
- finance
--- |
abideen/lex-dpooo | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
sequence: string
- name: generation_prompt
sequence: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
splits:
- name: train
num_bytes: 156338514
num_examples: 20000
download_size: 77283552
dataset_size: 156338514
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ej94/dataset_repository_name | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
amilmshaji/hp_sql | ---
license: mit
---
|
acloudfan/newsgroups-mini | ---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-classification
- sentence-similarity
pretty_name: scikit_20newsgroups
tags:
- 20newsgroups
- scikit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: class
dtype: string
splits:
- name: train
num_bytes: 493413
num_examples: 450
download_size: 300272
dataset_size: 493413
---
The data in this dataset is a subset of the scikit-learn 20newsgroups dataset:
https://scikit-learn.org/0.19/modules/generated/sklearn.datasets.fetch_20newsgroups.html#sklearn.datasets.fetch_20newsgroups
---
license: mit
dataset_info:
pretty_name: 'SciKit newsgroup20 subset'
features:
- name: index
dtype: int64
- name: Text
dtype: string
- name: Label
dtype: int32
- name: Class Name
dtype: string
task_categories:
- text-classification
- sentence-similarity
tags:
- text-classification
- sentence-similarity
splits:
- name: train
num_bytes: 799164
num_examples: 750
download_size: 477299
dataset_size: 799164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- |
itamarcard/dataset | ---
license: openrail
---
|
ziq/RSNA-ATD2023 | ---
annotations_creators:
- other
language:
- en
language_creators:
- found
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: RSNA-ATD2023
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
tags: []
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
# 📁 Dataset
This dataset comprises only 205 series of CT scans, stored as `.png` files with raw images and raw masks.
Data source: [Kaggle RSNA 2023 Abdominal Trauma Detection](https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/data)
# 🚀 Setup
```bash
pip install datasets
```
# 🤩 Feel the Magic
### Load Dataset
```python
from datasets import load_dataset
data = load_dataset('ziq/RSNA-ATD2023')
print(data)
```
```bash
DatasetDict({
train: Dataset({
features: ['patient_id', 'series_id', 'frame_id', 'image', 'mask'],
num_rows: 70291
})
})
```
### Set Labels
```python
labels = ["background", "liver", "spleen", "right_kidney", "left_kidney", "bowel"]
```
### Train Test Split
```python
data = data['train'].train_test_split(test_size=0.2)
```
```python
train, test = data['train'], data['test']
# train[0]['patient_id']
# train[0]['image'] -> PIL Image
# train[0]['mask'] -> PIL Image
```
### Get Image & Segmentation Mask
```python
ids = 3
image = train[ids]['image']  # shape: (512, 512)
mask = train[ids]['mask']    # shape: (512, 512)
```
### Convert mask into np.ndarray
```python
import numpy as np

mask = np.array(mask)
```
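Since the integer values in the mask follow the `labels` list above, per-class pixel counts can be read off with `np.unique`; a short sketch with a hypothetical 3×3 mask (real masks are 512×512):

```python
import numpy as np

labels = ["background", "liver", "spleen", "right_kidney", "left_kidney", "bowel"]

# Hypothetical tiny mask for illustration only.
mask = np.array([
    [0, 0, 1],
    [2, 1, 0],
    [5, 0, 0],
])

values, counts = np.unique(mask, return_counts=True)
for v, c in zip(values, counts):
    print(f"{labels[v]}: {c} px")
```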
### Visualize Image & Mask
```python
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(16, 16))
ax1 = fig.add_subplot(131)
plt.axis('off')
ax1.imshow(image, cmap='gray')
ax2 = fig.add_subplot(132)
plt.axis('off')
ax2.imshow(mask, cmap='gray')
ax3 = fig.add_subplot(133)
ax3.imshow(image*np.where(mask>0,1,0), cmap='gray')
plt.axis('off')
plt.show()
```

### Write Custom Plotting Function
```python
from matplotlib.colors import ListedColormap, BoundaryNorm
colors = ['#02020e', '#520e6d', '#c13a50', '#f57d15', '#fac62c', '#f4f88e'] # inferno
bounds = range(0, len(colors) + 1)
# Define the boundaries for each class in the colormap
cmap, norm = ListedColormap(colors), BoundaryNorm(bounds, len(colors))
# Plot the segmentation mask with the custom colormap
def plot_mask(mask, alpha=1.0):
_, ax = plt.subplots()
cax = ax.imshow(mask, cmap=cmap, norm=norm, alpha=alpha)
cbar = plt.colorbar(cax, cmap=cmap, norm=norm, boundaries=bounds, ticks=bounds)
cbar.set_ticks([])
_labels = [""] + labels
for i in range(1, len(_labels)):
cbar.ax.text(2, -0.5 + i, _labels[i], ha='left', color=colors[i - 1], fontsize=8)
plt.axis('off')
plt.show()
```
### Custom Color
```python
plot_mask(mask)
```

### Plot only one class (e.g. liver)
```python
liver, spleen, right_kidney, left_kidney, bowel = [np.where(mask == i, 1, 0) * i for i in range(1, len(labels))]
plot_mask(liver)
```

|