id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
piyush23111991/Clinical_Trial_V2 | 2023-10-01T07:34:36.000Z | [
"region:us"
] | piyush23111991 | null | null | null | 0 | 3 | Entry not found |
codys12/MergeLlama | 2023-10-09T21:43:13.000Z | [
"license:cc-by-4.0",
"region:us"
] | codys12 | null | null | null | 1 | 3 | ---
license: cc-by-4.0
---
MergeLlama is a dataset of real-world merge conflicts paired with their resolutions. It was developed from the foundational dataset shared in "Anonymous. (2022). Data set for FSE 2022 Submission Program Merge Conflict Resolution via Neural Transformers". An entry may contain multiple conflicts, each followed by its respective resolution, making the dataset a rich resource for understanding merge conflicts and developing automated resolution strategies.
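The card does not document the exact serialization of each entry; assuming standard git conflict markers (`<<<<<<<` / `=======` / `>>>>>>>`), a minimal sketch for splitting an entry into (ours, theirs) pairs might look like:

```python
import re

# Matches one git-style conflict block and captures the two sides.
CONFLICT_RE = re.compile(
    r"<<<<<<<[^\n]*\n(.*?)\n=======\n(.*?)\n>>>>>>>[^\n]*",
    re.DOTALL,
)

def extract_conflicts(text):
    """Return (ours, theirs) pairs for every conflict block in `text`."""
    return CONFLICT_RE.findall(text)

sample = "<<<<<<< HEAD\nleft()\n=======\nright()\n>>>>>>> feature"
print(extract_conflicts(sample))  # [('left()', 'right()')]
```

This only covers single-level conflicts; real entries with nested or multiple blocks would yield one pair per block.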
For those using this dataset, please cite as follows:
"MergeLlama Dataset. (2023). Merge Conflicts Fused with Their Resolutions. Based on: Anonymous. (2022). Data set for FSE 2022 Submission Program Merge Conflict Resolution via Neural Transformers (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6366908".
|
anirudh-sub/debate_dataset_practice | 2023-09-30T00:10:48.000Z | [
"region:us"
] | anirudh-sub | null | null | null | 0 | 3 | Entry not found |
frncscp/patacon-730-redux | 2023-09-30T06:04:43.000Z | [
"region:us"
] | frncscp | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Patacon-False
'1': Patacon-True
- name: pca
sequence:
sequence: float64
- name: index
dtype: int64
splits:
- name: train
num_bytes: 2109516792.0
num_examples: 874
- name: validation
num_bytes: 345897375.0
num_examples: 143
- name: test
num_bytes: 1068105458.0
num_examples: 442
download_size: 2084100119
dataset_size: 3523519625.0
---
# Dataset Card for "patacon-730-redux"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marcus2000/HSE_project_VK_NLP | 2023-09-30T11:12:24.000Z | [
"region:us"
] | marcus2000 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: sentiment
dtype: string
splits:
- name: train
num_bytes: 425667.1102204409
num_examples: 848
- name: test
num_bytes: 75294.88977955912
num_examples: 150
download_size: 274658
dataset_size: 500962.0
---
# Dataset Card for "HSE_project_VK_NLP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dhanushreddy29/save_images | 2023-09-30T13:09:16.000Z | [
"region:us"
] | dhanushreddy29 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 13418427.0
num_examples: 47
download_size: 13419330
dataset_size: 13418427.0
---
# Dataset Card for "save_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hcho22/codealpaka_20k_filtered | 2023-09-30T14:47:51.000Z | [
"license:apache-2.0",
"region:us"
] | hcho22 | null | null | null | 0 | 3 | ---
license: apache-2.0
---
|
manu/theses_fr_2013_2023 | 2023-09-30T16:45:34.000Z | [
"region:us"
] | manu | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: title_fr
dtype: string
- name: abstract_fr
dtype: string
- name: title_en
dtype: string
- name: abstract_en
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 392127399
num_examples: 97320
download_size: 224948329
dataset_size: 392127399
---
# Dataset Card for "theses_fr_2013_2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
peterschmidt85/samsum | 2023-09-30T17:06:11.000Z | [
"region:us"
] | peterschmidt85 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10789305
num_examples: 14732
download_size: 5844166
dataset_size: 10789305
---
# Dataset Card for "samsum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
blockplacer4/hobby-dataset | 2023-09-30T19:09:47.000Z | [
"region:us"
] | blockplacer4 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Input
dtype: string
- name: Output
dtype: string
- name: Text
dtype: string
splits:
- name: train
num_bytes: 217380
num_examples: 512
download_size: 39563
dataset_size: 217380
annotations_creators:
- expert-generated
language:
- de
language_creators:
- expert-generated
- machine-generated
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Hobby-KI
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-generation
task_ids:
- dialogue-modeling
train-eval-index:
- col_mapping:
    labels: tags
    tokens: tokens
  config: default
  splits:
    eval_split: test
  task: token-classification
  task_id: entity_extraction
--- |
RealTimeData/wikitext_alltime | 2023-09-30T21:45:42.000Z | [
"license:cc-by-2.0",
"region:us"
] | RealTimeData | This dataset contains Wikipedia articles of 419 selected pages from 2017 to 2022. The articles are arranged by month. Access a specific month by passing "YYYY-MM" as the config, e.g. load_dataset("RealTimeData/wikitext_alltime", "2021-1"). | @misc{li2023estimating,
title={Estimating Contamination via Perplexity: Quantifying Memorisation in Language Model Evaluation},
author={Yucheng Li},
year={2023},
eprint={2309.10677},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 3 | ---
license: cc-by-2.0
---
# Wikipedia for All Times
This dataset provides the history of 419 selected Wikipedia pages for every month from 2017 to 2022.
Use the following to download the historical versions of the articles for a specific month:
```
import datasets

ds = datasets.load_dataset('RealTimeData/wikitext_alltime', '2017-8')
```
The config name follows the "YYYY-MM" format. |
geraldng01/guanaco-llama2-200 | 2023-10-01T12:19:20.000Z | [
"region:us"
] | geraldng01 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 338808
num_examples: 200
download_size: 0
dataset_size: 338808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Arabic-Clip/Arabic_dataset_1M_translated_jsonl_format | 2023-10-01T07:53:41.000Z | [
"region:us"
] | Arabic-Clip | null | null | null | 0 | 3 | Entry not found |
sitloboi2012/rvl_cdip_large_dataset | 2023-10-01T08:20:47.000Z | [
"region:us"
] | sitloboi2012 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validate
path: data/validate-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
splits:
- name: train
num_bytes: 3694582118.36
num_examples: 30400
- name: test
num_bytes: 388902596.88
num_examples: 3200
- name: validate
num_bytes: 388902596.88
num_examples: 3200
download_size: 4204560106
dataset_size: 4472387312.12
---
# Dataset Card for "rvl_cdip_large_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pphuc25/vlsp-2023-no-label | 2023-10-01T10:35:57.000Z | [
"region:us"
] | pphuc25 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 28620668433.8
num_examples: 284550
download_size: 34466395053
dataset_size: 28620668433.8
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vlsp-2023-no-label"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vishal24/small_data_2 | 2023-10-01T11:41:10.000Z | [
"region:us"
] | Vishal24 | null | null | null | 0 | 3 | Entry not found |
learn3r/SDG_math | 2023-10-01T11:46:13.000Z | [
"region:us"
] | learn3r | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: jargon
dtype: string
- name: definition
dtype: string
splits:
- name: train
num_bytes: 38022
num_examples: 200
download_size: 23657
dataset_size: 38022
---
# Dataset Card for "SDG_math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
learn3r/SDG_phy | 2023-10-01T11:46:26.000Z | [
"region:us"
] | learn3r | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: jargon
dtype: string
- name: definition
dtype: string
splits:
- name: train
num_bytes: 38449
num_examples: 200
download_size: 26322
dataset_size: 38449
---
# Dataset Card for "SDG_phy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
goldpulpy/Image-human-mask | 2023-10-01T16:03:13.000Z | [
"task_categories:image-to-image",
"size_categories:1K<n<10K",
"language:en",
"language:ru",
"license:odbl",
"mask",
"human",
"image",
"cv",
"region:us"
] | goldpulpy | null | null | null | 1 | 3 | ---
license: odbl
task_categories:
- image-to-image
language:
- en
- ru
tags:
- mask
- human
- image
- cv
pretty_name: Image human mask dataset
size_categories:
- 1K<n<10K
---

The dataset contains **500** by **500** pixel images with a green border. Each image is accompanied by a black-and-white mask: white marks the regions occupied by the person in the image, and black marks everything else.
The dataset supports research on detecting and segmenting people in images. The images have been selected and processed to have a uniform size and a green frame, which provides a convenient basis for developing and testing object detection algorithms.
Each image has a corresponding mask that identifies the pixels belonging to the person in the photograph. This is useful for object segmentation tasks, such as selecting the person-containing regions for further image analysis and processing. |
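For instance, once an image and its mask are loaded as arrays, the background can be zeroed out with a single broadcasted multiply. A sketch with a synthetic 4×4 image (the array names and shapes are illustrative, not taken from the dataset):

```python
import numpy as np

# Synthetic 4x4 RGB "image" and a binary person mask (1 = person, 0 = background).
image = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # pretend the person occupies the centre

# Broadcast the mask over the colour channels to keep only person pixels.
person_only = image * mask[:, :, None]
```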
hodgesz/covid_qa_llama2 | 2023-10-01T20:16:13.000Z | [
"license:apache-2.0",
"region:us"
] | hodgesz | null | null | null | 0 | 3 | ---
license: apache-2.0
---
|
SuodhanJ6/Query_Domain_Classification | 2023-10-01T20:41:37.000Z | [
"license:mit",
"region:us"
] | SuodhanJ6 | null | null | null | 0 | 3 | ---
license: mit
---
|
Dloring1/Chat-Orca-custom-400 | 2023-10-02T01:20:03.000Z | [
"region:us"
] | Dloring1 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 203192
num_examples: 606
download_size: 91415
dataset_size: 203192
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Chat-Orca-custom-400"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ShinDC/important_dataset | 2023-10-02T08:24:27.000Z | [
"region:us"
] | ShinDC | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 8618263476
num_examples: 16702061
- name: valid
num_bytes: 48072624
num_examples: 93164
download_size: 3804670316
dataset_size: 8666336100
---
# Dataset Card for "important_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jaredthejelly/daniel_dataset | 2023-10-02T11:52:53.000Z | [
"region:us"
] | jaredthejelly | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 148498407
num_examples: 36636
download_size: 70484621
dataset_size: 148498407
---
# Dataset Card for "daniel_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Prabhjot410/chatbot-dataset | 2023-10-02T12:44:33.000Z | [
"license:apache-2.0",
"region:us"
] | Prabhjot410 | null | null | null | 0 | 3 | ---
license: apache-2.0
---
|
BaorBaor/14k_data_multichoice | 2023-10-03T02:09:27.000Z | [
"region:us"
] | BaorBaor | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: token_type_ids
sequence:
sequence: int8
- name: attention_mask
sequence:
sequence: int8
- name: label
dtype: int64
splits:
- name: train
num_bytes: 412680494
num_examples: 14467
download_size: 66160105
dataset_size: 412680494
---
# Dataset Card for "14k_data_multichoice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PericlesSavio/test2 | 2023-10-02T14:56:08.000Z | [
"region:us"
] | PericlesSavio | null | null | null | 0 | 3 | Entry not found |
manu/code-20b | 2023-10-02T17:00:45.000Z | [
"region:us"
] | manu | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: dataset_id
dtype: string
splits:
- name: train
num_bytes: 66209111592
num_examples: 11692337
- name: test
num_bytes: 276152957
num_examples: 48689
download_size: 25204013393
dataset_size: 66485264549
---
# Dataset Card for "code_20b2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbitropy/ProcessedTextGen1 | 2023-10-02T21:32:43.000Z | [
"region:us"
] | arbitropy | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 515825625.7185176
num_examples: 2973192
download_size: 293360996
dataset_size: 515825625.7185176
---
# Dataset Card for "ProcessedTextGen1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nlplabtdtu/edu_eof | 2023-10-03T02:03:59.000Z | [
"region:us"
] | nlplabtdtu | null | null | null | 0 | 3 | Entry not found |
Haary/train_usk | 2023-10-03T04:07:11.000Z | [
"region:us"
] | Haary | null | null | null | 0 | 3 | Entry not found |
shivanikerai/review_prompts_9.0.1 | 2023-10-03T04:50:56.000Z | [
"region:us"
] | shivanikerai | null | null | null | 0 | 3 | Entry not found |
vishal0719/infogen-2 | 2023-10-03T07:13:14.000Z | [
"region:us"
] | vishal0719 | null | null | null | 0 | 3 | Entry not found |
Algoroxyolo/squadForLLM | 2023-10-03T17:37:23.000Z | [
"region:us"
] | Algoroxyolo | null | null | null | 0 | 3 | Entry not found |
csolheim/HealthBeautyClassifier | 2023-10-03T13:18:46.000Z | [
"region:us"
] | csolheim | null | null | null | 0 | 3 | Entry not found |
SebRincon/elm | 2023-10-03T13:25:31.000Z | [
"license:mit",
"region:us"
] | SebRincon | null | null | null | 0 | 3 | ---
license: mit
---
|
NAB1108/StockNews | 2023-10-03T21:38:07.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"region:us"
] | NAB1108 | null | null | null | 0 | 3 | ---
task_categories:
- text-classification
size_categories:
- n<1K
--- |
yejeekang/legal_instruction_token-1200 | 2023-10-03T16:28:35.000Z | [
"license:afl-3.0",
"region:us"
] | yejeekang | null | null | null | 0 | 3 | ---
license: afl-3.0
---
|
gorkaartola/ZS-train_S1-SDGdescriptions-AURORA05_S2-SDGdescriptions-SDGtitle_Negative_Sample_Filter-AURORA05 | 2023-10-03T21:04:24.000Z | [
"region:us"
] | gorkaartola | null | null | null | 0 | 3 | Entry not found |
warleagle/1t_chat_bot_data_v2 | 2023-10-03T23:07:16.000Z | [
"region:us"
] | warleagle | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 890558
num_examples: 2083
download_size: 398939
dataset_size: 890558
---
# Dataset Card for "1t_chat_bot_data_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Anis1123/guip-unfined | 2023-10-04T06:10:27.000Z | [
"region:us"
] | Anis1123 | null | null | null | 0 | 3 | Entry not found |
wozniakclub/compendio-anahuac | 2023-10-04T07:00:26.000Z | [
"region:us"
] | wozniakclub | null | null | null | 0 | 3 | Entry not found |
renumics/emodb-enrichment | 2023-10-04T07:14:23.000Z | [
"region:us"
] | renumics | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio.embedding
sequence: float32
length: 768
splits:
- name: train
num_bytes: 1643520
num_examples: 535
download_size: 2269156
dataset_size: 1643520
---
# Dataset Card for "emodb-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sai0720/Java_to_Go_dataset_new | 2023-10-04T07:14:34.000Z | [
"license:unknown",
"region:us"
] | Sai0720 | null | null | null | 0 | 3 | ---
license: unknown
---
|
priyash7/nypd-crime-complaint-data-historic-2006-2019 | 2023-10-04T10:15:34.000Z | [
"license:cc",
"region:us"
] | priyash7 | null | null | null | 0 | 3 | ---
license: cc
---
|
Falah/logo_prompts | 2023-10-04T09:57:32.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 271034
num_examples: 1000
download_size: 34969
dataset_size: 271034
---
# Dataset Card for "logo_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/verbalist_prompts | 2023-10-08T01:40:30.000Z | [
"arxiv:2305.11206",
"region:us"
] | dim | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: dim_oasst_en
path: data/dim_oasst_en-*
- split: dim_oasst_ru
path: data/dim_oasst_ru-*
- split: dim_lima
path: data/dim_lima-*
- split: dim_logic_tasks_ru
path: data/dim_logic_tasks_ru-*
- split: dim_wikihow_en
path: data/dim_wikihow_en-*
- split: dim_wikihow_ru
path: data/dim_wikihow_ru-*
- split: dim_essayforum_writing_prompts_6k
path: data/dim_essayforum_writing_prompts_6k-*
- split: dim_sharegpt_short_ru
path: data/dim_sharegpt_short_ru-*
- split: dim_openreview_prompts_65
path: data/dim_openreview_prompts_65-*
- split: dim_roleplay_instruct_v2_final
path: data/dim_roleplay_instruct_v2_final-*
- split: dim_kinomania_scripts
path: data/dim_kinomania_scripts-*
- split: dim_bugurt_thread_prompts
path: data/dim_bugurt_thread_prompts-*
- split: dim_russian_lyrics_prompts
path: data/dim_russian_lyrics_prompts-*
- split: dim_ru_instruct_gpt4
path: data/dim_ru_instruct_gpt4-*
- split: dim_gpt_roleplay_realm
path: data/dim_gpt_roleplay_realm-*
- split: dim_ultrachat_ru
path: data/dim_ultrachat_ru-*
- split: dim_scitldr
path: data/dim_scitldr-*
- split: dim_linux_man_pages_tldr_summarized
path: data/dim_linux_man_pages_tldr_summarized-*
- split: dim_dolphin_ru_3k
path: data/dim_dolphin_ru_3k-*
- split: dim_runne_prompts
path: data/dim_runne_prompts-*
- split: dim_lurk_prompts
path: data/dim_lurk_prompts-*
- split: dim_panorama_prompts_10k
path: data/dim_panorama_prompts_10k-*
- split: dim_resh_edu_short_prompts
path: data/dim_resh_edu_short_prompts-*
- split: dim_databricks_dolly_15k_ru
path: data/dim_databricks_dolly_15k_ru-*
- split: dim_databricks_dolly_15k_en
path: data/dim_databricks_dolly_15k_en-*
- split: dim_grammarly_coedit
path: data/dim_grammarly_coedit-*
- split: dim_kinopoisk_prompts
path: data/dim_kinopoisk_prompts-*
- split: dim_medical_qa_ru_prompts
path: data/dim_medical_qa_ru_prompts-*
- split: dim_joke_explaination_prompts
path: data/dim_joke_explaination_prompts-*
- split: dim_oa_stackexchange_200k
path: data/dim_oa_stackexchange_200k-*
- split: dim_scale_helpful_no_math
path: data/dim_scale_helpful_no_math-*
- split: dim_law_stackexchange_prompts
path: data/dim_law_stackexchange_prompts-*
- split: dim_ficbook_prompts_best_10k
path: data/dim_ficbook_prompts_best_10k-*
- split: dim_azbyka_logic_ru
path: data/dim_azbyka_logic_ru-*
- split: dim_povarenok
path: data/dim_povarenok-*
- split: dim_AO3_fandom_chatbot_1to1
path: data/dim_AO3_fandom_chatbot_1to1-*
- split: dim_habr_prompts_5k
path: data/dim_habr_prompts_5k-*
- split: dim_what_where_when_50k
path: data/dim_what_where_when_50k-*
- split: dim_competition_math
path: data/dim_competition_math-*
- split: dim_sharegpt_short_en_30k
path: data/dim_sharegpt_short_en_30k-*
- split: dim_ru_turbo_alpaca_evol_instruct
path: data/dim_ru_turbo_alpaca_evol_instruct-*
- split: dim_ru_turbo_saiga
path: data/dim_ru_turbo_saiga-*
- split: dim_bugurt_completion_prompts
path: data/dim_bugurt_completion_prompts-*
- split: dim_tldr_17_50k
path: data/dim_tldr_17_50k-*
- split: dim_grade_school_math_instructions
path: data/dim_grade_school_math_instructions-*
- split: dim_tldr_news
path: data/dim_tldr_news-*
- split: dim_grade_school_math_instructions_ru
path: data/dim_grade_school_math_instructions_ru-*
- split: dim_dialogsum
path: data/dim_dialogsum-*
- split: dim_HC3_ru
path: data/dim_HC3_ru-*
- split: dim_horoscopes_ru_10k
path: data/dim_horoscopes_ru_10k-*
- split: dim_yandex_q_200k
path: data/dim_yandex_q_200k-*
- split: dim_leetcodesolutions_en_2k
path: data/dim_leetcodesolutions_en_2k-*
- split: dim_forum_uristov_rf_prompts
path: data/dim_forum_uristov_rf_prompts-*
- split: dim_dialogsum_ru
path: data/dim_dialogsum_ru-*
- split: dim_huggingartists_prompts
path: data/dim_huggingartists_prompts-*
dataset_info:
features:
- name: conversation_text
sequence: string
splits:
- name: dim_oasst_en
num_bytes: 4335500
num_examples: 2289
- name: dim_oasst_ru
num_bytes: 6206378
num_examples: 2220
- name: dim_lima
num_bytes: 2892267
num_examples: 1030
- name: dim_logic_tasks_ru
num_bytes: 76915
num_examples: 86
- name: dim_wikihow_en
num_bytes: 16008199
num_examples: 1995
- name: dim_wikihow_ru
num_bytes: 24451573
num_examples: 2058
- name: dim_essayforum_writing_prompts_6k
num_bytes: 22326330
num_examples: 6361
- name: dim_sharegpt_short_ru
num_bytes: 808319
num_examples: 253
- name: dim_openreview_prompts_65
num_bytes: 6739952
num_examples: 150
- name: dim_roleplay_instruct_v2_final
num_bytes: 4389286
num_examples: 7188
- name: dim_kinomania_scripts
num_bytes: 238731
num_examples: 27
- name: dim_bugurt_thread_prompts
num_bytes: 302191
num_examples: 223
- name: dim_russian_lyrics_prompts
num_bytes: 18676
num_examples: 43
- name: dim_ru_instruct_gpt4
num_bytes: 18351658
num_examples: 14222
- name: dim_gpt_roleplay_realm
num_bytes: 20163429
num_examples: 8700
- name: dim_ultrachat_ru
num_bytes: 4495105
num_examples: 500
- name: dim_scitldr
num_bytes: 4049209
num_examples: 3229
- name: dim_linux_man_pages_tldr_summarized
num_bytes: 3006631
num_examples: 481
- name: dim_dolphin_ru_3k
num_bytes: 7976776
num_examples: 3000
- name: dim_runne_prompts
num_bytes: 2686148
num_examples: 537
- name: dim_lurk_prompts
num_bytes: 92012533
num_examples: 5671
- name: dim_panorama_prompts_10k
num_bytes: 28964132
num_examples: 11024
- name: dim_resh_edu_short_prompts
num_bytes: 12380000
num_examples: 2106
- name: dim_databricks_dolly_15k_ru
num_bytes: 21900617
num_examples: 14914
- name: dim_databricks_dolly_15k_en
num_bytes: 11973713
num_examples: 15011
- name: dim_grammarly_coedit
num_bytes: 18500223
num_examples: 82466
- name: dim_kinopoisk_prompts
num_bytes: 136323982
num_examples: 36591
- name: dim_medical_qa_ru_prompts
num_bytes: 75634717
num_examples: 80101
- name: dim_joke_explaination_prompts
num_bytes: 196224
num_examples: 364
- name: dim_oa_stackexchange_200k
num_bytes: 192535277
num_examples: 200000
- name: dim_scale_helpful_no_math
num_bytes: 85610911
num_examples: 17095
- name: dim_law_stackexchange_prompts
num_bytes: 64544963
num_examples: 24343
- name: dim_ficbook_prompts_best_10k
num_bytes: 75867114
num_examples: 10000
- name: dim_azbyka_logic_ru
num_bytes: 173101
num_examples: 480
- name: dim_povarenok
num_bytes: 93518909
num_examples: 46500
- name: dim_AO3_fandom_chatbot_1to1
num_bytes: 1162058
num_examples: 614
- name: dim_habr_prompts_5k
num_bytes: 40224997
num_examples: 5000
- name: dim_what_where_when_50k
num_bytes: 38385243
num_examples: 50000
- name: dim_competition_math
num_bytes: 5808689
num_examples: 7500
- name: dim_sharegpt_short_en_30k
num_bytes: 86599862
num_examples: 29597
- name: dim_ru_turbo_alpaca_evol_instruct
num_bytes: 105340901
num_examples: 47793
- name: dim_ru_turbo_saiga
num_bytes: 79875722
num_examples: 37699
- name: dim_bugurt_completion_prompts
num_bytes: 5471066
num_examples: 5000
- name: dim_tldr_17_50k
num_bytes: 81185070
num_examples: 50000
- name: dim_grade_school_math_instructions
num_bytes: 4655452
num_examples: 8792
- name: dim_tldr_news
num_bytes: 4014718
num_examples: 7138
- name: dim_grade_school_math_instructions_ru
num_bytes: 6845510
num_examples: 7473
- name: dim_dialogsum
num_bytes: 11176807
num_examples: 12460
- name: dim_HC3_ru
num_bytes: 43395731
num_examples: 24322
- name: dim_horoscopes_ru_10k
num_bytes: 9489348
num_examples: 10000
- name: dim_yandex_q_200k
num_bytes: 292443135
num_examples: 200000
- name: dim_leetcodesolutions_en_2k
num_bytes: 4708692
num_examples: 2048
- name: dim_forum_uristov_rf_prompts
num_bytes: 2757263
num_examples: 1849
- name: dim_dialogsum_ru
num_bytes: 18657989
num_examples: 12460
- name: dim_huggingartists_prompts
num_bytes: 121909835
num_examples: 64006
download_size: 0
dataset_size: 2023767777
---
# Verbalist (буквоед) - a Russian-language assistant.
A project largely inspired by [Saiga](https://huggingface.co/IlyaGusev/saiga2_7b_lora).
I collected the highest-quality datasets from [huggingface.datasets](https://huggingface.co/datasets), plus additional data from sites I considered useful for building a ChatGPT analogue. The datasets come under different licenses: some, such as [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1), were created specifically for training models like this, while others are direct exports of ChatGPT dialogues ([RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K)).
This repository's contribution is the systematization and standardization of the existing datasets, the addition of new ones, and the training of models on these data.
- [Google Sheets table with the datasets and their descriptions](https://docs.google.com/spreadsheets/d/10xcsINF_c_zUZchT8p-8xIuHDgcuwg63jjl2ortBP9I/edit?usp=sharing)
### Datasets
- **[Unified dataset with all data already prepared for training a dialogue model](https://huggingface.co/datasets/dim/verbalist_prompts)**
|name|link|description|original_name|original_source|preparation_script|language|amount_examples|mean_llama_tokens|std|min_llama_tokens|25%|50%|75%|max_llama_tokens|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|dim/oasst_en|https://huggingface.co/datasets/dim/oasst_en|OpenAssistant Conversations Dataset in English, manually filtered by me. About 30% of the dialogues in the original dataset turned out to be flawed: sometimes the user playing the assistant role took a rude tone, sometimes people simply answered "I don't know", and some questions were not substantive enough or too short. The annotation is available here: https://docs.google.com/spreadsheets/d/117t5-Tr-dxdODpyFBkBg5R8GklYBlsvBfeDyjqwz2pA/edit?usp=sharing|2023-04-12_oasst_ready.messages.jsonl.gz|https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/2023-04-12_oasst_ready.messages.jsonl.gz|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oasst|en|2289|468.6788991|295.0864391|17|264|410|618|2332|
|dim/oasst_ru|https://huggingface.co/datasets/dim/oasst_ru|OpenAssistant Conversations Dataset in Russian, manually filtered by me. About 30% of the dialogues in the original dataset turned out to be flawed: sometimes the user playing the assistant role took a rude tone, sometimes people simply answered "I don't know", and some questions were not substantive enough or too short. The annotation is available here: https://docs.google.com/spreadsheets/d/1uiOnqxiytuxrB6u6q2pMSdnMfqjT3arfg8DlT-OWlb0/edit?usp=sharing|2023-04-12_oasst_ready.messages.jsonl.gz|https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/2023-04-12_oasst_ready.messages.jsonl.gz|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oasst|ru|2220|589.6112613|479.835392|7|278|465|763.5|5028|
|dim/lima|https://huggingface.co/datasets/dim/lima|This dataset contains 1000 high-quality English training examples collected from various sources, including Stack Exchange (STEM), Stack Exchange (Other), wikiHow, Pushshift r/WritingPrompts, Natural Instructions, and unique instructions written by the paper's authors. More details can be found in the [paper](https://arxiv.org/pdf/2305.11206.pdf).|GAIR/lima|https://huggingface.co/datasets/GAIR/lima|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/lima|en|1030|712.9456311|671.179319|29|312.75|488.5|825|3920|
|dim/logic_tasks_ru|https://huggingface.co/datasets/dim/logic_tasks_ru|A set of children's logic puzzles taken from https://www.potehechas.ru/zadachi/zadachi.shtml.|Логические задачи - Логика и нестандартное мышление|https://www.potehechas.ru/zadachi/zadachi.shtml|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/logic_tasks_ru|ru|86|193.0697674|76.69048422|58|133.75|185|243.5|432|
|dim/wikihow_en|https://huggingface.co/datasets/dim/wikihow_en|English-language articles extracted from the Wikihow website.|0x22almostEvil/multilingual-wikihow-qa-16k|https://huggingface.co/datasets/0x22almostEvil/multilingual-wikihow-qa-16k|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/wiki_how|en|1995|2037.86416|870.1910713|265|1463|1913|2461.5|8988|
|dim/wikihow_ru|https://huggingface.co/datasets/dim/wikihow_ru|Russian-language articles extracted from the Wikihow website.|0x22almostEvil/multilingual-wikihow-qa-16k|https://huggingface.co/datasets/0x22almostEvil/multilingual-wikihow-qa-16k|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/wiki_how|ru|2058|2498.119534|1587.851549|139|1236.25|2264|3421.75|10217|
|dim/essayforum_writing_prompts_6k|https://huggingface.co/datasets/dim/essayforum_writing_prompts_6k|Requests for help with writing short essays, posted on the EssayForum website. Only answers from the site's chief administrator are included, since they are usually the highest-quality and most thoughtful.|EssayForum|https://essayforum.com/writing/|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/essayforum|en|6361|783.1760729|285.4314176|258|629|742|879|4966|
|dim/sharegpt_short_ru|https://huggingface.co/datasets/dim/sharegpt_short_ru|A cleaned Russian version of ShareGPT. I tried to cut out all prompts where the model apologizes that it cannot do something or says it has no internet access; dialogues conflicting with the model's morals were simply excluded, and mentions of it being an AI model were removed, since role-play traits are handled by other datasets.|RyokoAI/ShareGPT52K|https://huggingface.co/datasets/RyokoAI/ShareGPT52K|https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/sharegpt|ru|253|706.6521739|494.7437584|13|310|628|1078|1861|
|dim/openreview_prompts_65 |https://huggingface.co/datasets/dim/openreview_prompts_65 |Датасет рецензий на реальные научные статьи с сайта openreview. Вышло на самом деле не так много, так как многие статьи не выложенны на arxiv или просто не имеют рецензий. Плюс я собрал только малую часть данного сайта, а не все что там было. |https://openreview.net/ |https://openreview.net/ |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/openreview |en |150 |13531.51333 |6966.623686|4893 |8279 |12648.5|15833.5|41494 |
|dim/roleplay_instruct_v2_final |https://huggingface.co/datasets/dim/roleplay_instruct_v2_final |Датасет ролеплея от GPT-4 на различных персонажей на английском языке. |roleplay-instruct-v2-final |https://github.com/teknium1/GPTeacher |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/gpt_roleplay_realm |en |7188 |155.1413467 |97.71215667|14 |88 |125 |192 |1291 |
|dim/kinomania_scripts |https://huggingface.co/datasets/dim/kinomania_scripts |A small dataset containing full film scripts together with their short summaries. |https://www.kinomania.ru/scripts |https://www.kinomania.ru/scripts |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/kinomania_scripts |ru\en |27 |2603.407407 |510.375447 |1887 |2175 |2370 |3069 |3616 |
|dim/bugurt_thread_prompts |https://huggingface.co/datasets/dim/bugurt_thread_prompts |A small set of bugurts annotated together with a friend of mine, so that the model learns to write a bugurt about a specific situation. Collected from the Telegram channel БУГУРТ ТРЕД (https://t.me/bugurtthread). |https://t.me/bugurtthread |https://t.me/bugurtthread |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/bugurt_thread |ru |223 |334.4529148 |271.2557988 |48 |148.5 |254 |434.5 |1645 |
|dim/russian_lyrics_prompts |https://huggingface.co/datasets/dim/russian_lyrics_prompts |A small dataset of prompts that I collected from various versification textbooks, so that the model learns to write poems on a given topic using a required literary device. |Учебник стихосложения |https://stihi.ru/uchebnik/ |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/russian_lyrics_prompts |ru |43 |106.1395349 |71.00220701 |45 |71 |83 |96.5 |411 |
|dim/ru_instruct_gpt4 |https://huggingface.co/datasets/dim/ru_instruct_gpt4 |A dataset of assorted instructions in Russian generated by GPT-4. |lksy/ru_instruct_gpt4 |https://huggingface.co/datasets/lksy/ru_instruct_gpt4 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_instruct_gpt4 |ru |14222 |259.2173393 |237.9433891 |16 |109 |175 |271 |1374 |
|dim/gpt_roleplay_realm |https://huggingface.co/datasets/dim/gpt_roleplay_realm |Dialogues of fictional characters created with GPT-4; the dialogues themselves were generated with GPT-3.5. Russian and English. |IlyaGusev/gpt_roleplay_realm |https://huggingface.co/datasets/IlyaGusev/gpt_roleplay_realm |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/gpt_roleplay_realm |ru\en |8700 |504.2424138 |117.6228987 |180 |424 |489 |569 |1207 |
|dim/ultrachat_ru |https://huggingface.co/datasets/dim/ultrachat_ru |A random dataset of chatgpt dialogues that I found on huggingface. Boilerplate phrases such as "I cannot do", "as a language model", etc. were cut out of the dialogue texts, because a sensible solution to the task usually followed them. |kaleinaNyan/UltraChat_ru |https://huggingface.co/datasets/kaleinaNyan/UltraChat_ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ultrachat_ru |ru |500 |1781.782 |901.1212735 |267 |1113.25 |1648 |2250.25 |7303 |
|dim/scitldr |https://huggingface.co/datasets/dim/scitldr |Summaries of scientific papers in English, written by experts. |allenai/scitldr |https://huggingface.co/datasets/allenai/scitldr |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/scitldr |en |3229 |258.748529 |71.41209752 |60 |209 |252 |303 |689 |
|dim/linux_man_pages_tldr_summarized |https://huggingface.co/datasets/dim/linux_man_pages_tldr_summarized |Summaries of Linux tool manuals as a convenient list of commands with short descriptions. |tmskss/linux-man-pages-tldr-summarized |https://huggingface.co/datasets/tmskss/linux-man-pages-tldr-summarized |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/linux-man-pages-tldr-summarized |en |481 |1567.727651 |3590.30871 |96 |405 |765 |1386 |49888 |
|dim/dolphin_ru_3k |https://huggingface.co/datasets/dim/dolphin_ru_3k |A subsample of 3000 translated dolphin tasks. The examples in the original dataset are FLANv2 prompts with solutions produced by GPT-4 or GPT-3.5. |d0rj/dolphin-ru |https://huggingface.co/datasets/d0rj/dolphin-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dolphin_ru |ru |3000 |556.1133333 |650.0962612 |19 |207 |369.5 |720.25 |6787 |
|dim/runne_prompts |https://huggingface.co/datasets/dim/runne_prompts |Prompts built from the RuNNE dataset. For training, I constructed each prompt as follows: first the text "Найди все именованные сущности в данном тексте:" ("Find all named entities in this text:"), then the text itself. As output, the model must generate a JSON containing all the named entities found, for example [{"name": "PERSON", "ent": "Ким Чен Нама", "pos": "0 12"}, {"name": "ORGANIZATION", "ent": "Полиция Малайзии", "pos": "56 72"}] |iluvvatar/RuNNE |https://huggingface.co/datasets/iluvvatar/RuNNE |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/RuNNE |ru |537 |1479.750466 |230.0259174 |581 |1337 |1480 |1635 |1988 |
|dim/lurk_prompts |https://huggingface.co/datasets/dim/lurk_prompts |A set of definitions of various terms from the lurk website. The prompts were generated automatically in the following form: write a definition for (TERM) in the lurk style. |averoo/lurk |https://huggingface.co/datasets/averoo/lurk/viewer/default/train?p=2 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/lurk |ru |5671 |3450.34262 |4147.897824 |35 |710.5 |2010 |4593 |55098 |
|dim/panorama_prompts_10k |https://huggingface.co/datasets/dim/panorama_prompts_10k |A set of humorous news headlines and articles from the Panorama website. |its5Q/panorama |https://huggingface.co/datasets/its5Q/panorama |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/panorama |ru |11024 |516.9588171 |191.3774023 |36 |422 |498 |585 |3496 |
|dim/resh_edu_short_prompts |https://huggingface.co/datasets/dim/resh_edu_short_prompts |A set of lessons from the resh.edu.ru website, including the lesson title, topic, grade, and the lesson text with exercises. |its5Q/resh-edu |https://huggingface.co/datasets/its5Q/resh-edu |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/resh_edu |ru |2106 |1431.510921 |435.7847102 |56 |1175.5 |1517 |1777 |2029 |
|dim/databricks_dolly_15k_ru |https://huggingface.co/datasets/dim/databricks_dolly_15k_ru |The dolly dataset translated into Russian. It contains instructions on a wide range of topics. |dwarf2/databricks-dolly-15k-ru |https://huggingface.co/dwarf2/databricks-dolly-15k-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/databricks_dolly_15k_ru |ru |14914 |305.4638595 |405.874049 |8 |87 |182 |370 |9268 |
|dim/databricks_dolly_15k_en |https://huggingface.co/datasets/dim/databricks_dolly_15k_en |databricks-dolly-15k is an open-source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization. |databricks/databricks-dolly-15k |https://huggingface.co/datasets/databricks/databricks-dolly-15k |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/databricks_dolly_15k_en |en |15011 |204.7264006 |302.5539423 |6 |57 |119 |242 |8883 |
|dim/grammarly_coedit |https://huggingface.co/datasets/dim/grammarly_coedit |A set of prompts asking to fix grammatical and stylistic errors in English. |grammarly/coedit |https://huggingface.co/datasets/grammarly/coedit |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grammarly_coedit |en |82466 |53.7128271 |26.73822864 |10 |35 |46 |64 |694 |
|dim/kinopoisk_prompts |https://huggingface.co/datasets/dim/kinopoisk_prompts |Reviews of the top 250 films from Kinopoisk. In the prompts, I ask to write a good, bad, or neutral review of a specific film. |blinoff/kinopoisk |https://huggingface.co/datasets/blinoff/kinopoisk |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/kinopoisk |ru |36591 |875.0955973 |565.3212035 |48 |484 |733 |1117 |8628 |
|dim/medical_qa_ru_prompts |https://huggingface.co/datasets/dim/medical_qa_ru_prompts |Questions and answers from some medical forum. This version of the dataset keeps only the first answer from the original. |blinoff/medical_qa_ru_data |https://huggingface.co/datasets/blinoff/medical_qa_ru_data |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/medical_qa_ru_data |ru |80101 |206.710528 |175.4343973 |12 |106 |161 |247 |5062 |
|dim/joke_explaination_prompts |https://huggingface.co/datasets/dim/joke_explaination_prompts |Explanations of jokes in English. It differs from the original dataset in that I removed the last sentence of each explanation, since it refers to a video on the website. |theblackcat102/joke_explaination |https://huggingface.co/datasets/theblackcat102/joke_explaination |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/joke_explaination |en |364 |143.5741758 |68.90275411 |21 |99 |137.5 |189.25 |334 |
|dim/oa_stackexchange_200k |https://huggingface.co/datasets/dim/oa_stackexchange_200k |Questions and answers from StackExchange. The original dataset was built as follows: only threads with an accepted answer where both the question and the answer are shorter than 1000 characters were selected; other answers, questions without an accepted answer, and long entries were removed. Since the original dataset is too large, I randomly sampled 200k examples. |donfu/oa-stackexchange |https://huggingface.co/datasets/donfu/oa-stackexchange |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/oa_stackexchange |en |200000 |276.29862 |112.5004436 |22 |194 |265 |345 |1226 |
|dim/scale_helpful_no_math |https://huggingface.co/datasets/dim/scale_helpful_no_math |Some set of question-answer dialogues in English; origin unknown. |HuggingFaceH4/scale_helpful_no_math |https://huggingface.co/datasets/HuggingFaceH4/scale_helpful_no_math/viewer/default/train_rm |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/scale_helpful_no_math |en |17095 |1235.302603 |838.1097885 |53 |663 |1063 |1617 |34480 |
|dim/law_stackexchange_prompts |https://huggingface.co/datasets/dim/law_stackexchange_prompts |Questions about law in English from StackExchange. The original dataset was converted to markdown. |ymoslem/Law-StackExchange |https://huggingface.co/datasets/ymoslem/Law-StackExchange |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/law_stackexchange |en |24343 |689.1184324 |565.0316906 |43 |354 |540 |836 |8969 |
|dim/ficbook_prompts_best_10k |https://huggingface.co/datasets/dim/ficbook_prompts_best_10k |The top 10k best fanfics from the ficbook.net website. All prompts look as follows: write a fanfic titled {title} with the following description {description} and tags {tags}, where title is the original title, description is the original description, and tags are the tags of the work. |AlexWortega/FicBook |https://huggingface.co/datasets/AlexWortega/FicBook |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ficbook |ru |10000 |1737.8214 |402.0748161 |166 |1716 |1950 |1950 |1952 |
|dim/azbyka_logic_ru |https://huggingface.co/datasets/dim/azbyka_logic_ru |A small set of children's logic and Orthodox problems taken from https://azbyka.ru/deti/logicheskie-i-zanimatelnye-zadachi . They usually have almost no worked solution, only an answer. I tried to write out solutions for some of the problems, but only managed 35; if anyone takes this up, I would be glad: https://docs.google.com/spreadsheets/d/1JRbtppbZCUbV_Eqd0nKbRDQEuPnJIAgJ70cUILEDUI4/edit?usp=sharing . |Логические и занимательные задачи (300 задач) |https://azbyka.ru/deti/logicheskie-i-zanimatelnye-zadachi |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/azbyka_logic_ru |ru |480 |77.4375 |77.56990416 |14 |31 |50 |91 |652 |
|dim/povarenok |https://huggingface.co/datasets/dim/povarenok |46k of the best recipes from the povarenok.ru website; contains the recipe text, the list of ingredients, and the dish name. |https://www.povarenok.ru/recipes/ |https://www.povarenok.ru/recipes/ |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/povarenok |ru |46500 |488.9118495 |344.8563249 |31 |281 |440 |632 |5542 |
|dim/AO3_fandom_chatbot_1to1 |https://huggingface.co/datasets/dim/AO3_fandom_chatbot_1to1 |Some set of roleplay dialogues with character descriptions and their enactment. Origin unknown. |ebony59/AO3_fandom_chatbot_1to1 |https://huggingface.co/datasets/ebony59/AO3_fandom_chatbot_1to1 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/AO3_fandom_chatbot_1to1 |en |614 |493.7166124 |226.3885365 |129 |328.25 |432.5 |611.75 |1272 |
|dim/habr_prompts_5k |https://huggingface.co/datasets/dim/habr_prompts_5k |Articles from Habr. The dataset was built with the help of chatgpt: chatgpt rewrote the headlines so that they sound like user questions, and the article itself served as the target. |IlyaGusev/habr |https://huggingface.co/datasets/IlyaGusev/habr |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/habr |ru |5000 |1732.892 |454.8418369 |19 |1920.75 |1950 |1951 |1952 |
|dim/what_where_when_50k |https://huggingface.co/datasets/dim/what_where_when_50k |50k questions with solutions from the What? Where? When? website. The question serves as the prompt, and the concatenation of the explanation and the short answer serves as the response. All questions and answers can be found at this link: https://huggingface.co/datasets/dim/what_where_when_ru |https://db.chgk.info |https://db.chgk.info |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/what_where_when |ru |50000 |169.1862 |68.91119898 |18 |122 |158 |202 |1167 |
|dim/competition_math |https://huggingface.co/datasets/dim/competition_math |A dataset of olympiad mathematics in English. The Mathematics Aptitude Test of Heuristics (MATH) dataset. |competition_math |https://huggingface.co/datasets/competition_math |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/competition_math |en |7500 |317.5254667 |267.8583731 |34 |147 |234 |393 |3029 |
|dim/sharegpt_short_en_30k |https://huggingface.co/datasets/dim/sharegpt_short_en_30k |Short English dialogues from sharegpt. |RyokoAI/ShareGPT52K |https://huggingface.co/datasets/RyokoAI/ShareGPT52K |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/sharegpt |en |29597 |749.3149981 |516.3702473 |3 |336 |630 |1095 |2021 |
|dim/ru_turbo_alpaca_evol_instruct |https://huggingface.co/datasets/dim/ru_turbo_alpaca_evol_instruct |A set of instructions on various topics in Russian, generated with chatgpt. |IlyaGusev/ru_turbo_alpaca_evol_instruct |https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca_evol_instruct |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_turbo_alpaca_evol_instruct |ru |47793 |453.0887996 |289.5498356 |17 |221 |430 |623 |4647 |
|dim/ru_turbo_saiga |https://huggingface.co/datasets/dim/ru_turbo_saiga |A set of instructions on various topics in Russian, generated with chatgpt. |IlyaGusev/ru_turbo_saiga |https://huggingface.co/datasets/IlyaGusev/ru_turbo_saiga |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/ru_turbo_saiga |ru |37699 |412.7508687 |113.346917 |87 |339 |398 |466 |1427 |
|dim/bugurt_completion_prompts |https://huggingface.co/datasets/dim/bugurt_completion_prompts |Truncated bugurts, where the prompt is a string of the form: continue the bugurt: <the first line of the bugurt> |https://t.me/bugurtthread |https://t.me/bugurtthread |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/bugurt_thread |ru |5000 |280.2466 |320.4353681 |32 |111 |178 |331 |11333 |
|dim/tldr_17_50k |https://huggingface.co/datasets/dim/tldr_17_50k |Very loose abstractive one-line summarization of Reddit posts. |webis/tldr-17 |https://huggingface.co/datasets/webis/tldr-17 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/tldr_17 |en |50000 |421.12752 |403.346214 |10 |177 |303 |525 |9592 |
|dim/grade_school_math_instructions |https://huggingface.co/datasets/dim/grade_school_math_instructions |OpenAI's grade-school-math dataset converted into prompts. |qwedsacf/grade-school-math-instructions |https://huggingface.co/datasets/qwedsacf/grade-school-math-instructions |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grade-school-math-instructions |en |8792 |171.6310282 |63.09232668 |50 |124 |161 |206 |511 |
|dim/tldr_news |https://huggingface.co/datasets/dim/tldr_news |Headlines and texts of news articles on various topics. |JulesBelveze/tldr_news |https://huggingface.co/datasets/JulesBelveze/tldr_news |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/tldr_news |en |7138 |133.1004483 |46.48736493 |23 |100 |133 |161 |476 |
|dim/grade_school_math_instructions_ru |https://huggingface.co/datasets/dim/grade_school_math_instructions_ru |OpenAI's grade-school-math dataset translated into Russian. |d0rj/gsm8k-ru |https://huggingface.co/datasets/d0rj/gsm8k-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/grade_school_math_instructions_ru |ru |7473 |259.8321959 |100.1229127 |78 |185 |241 |314 |838 |
|dim/dialogsum |https://huggingface.co/datasets/dim/dialogsum |Summarization of dialogues in English; the annotation was done manually. |knkarthick/dialogsum |https://huggingface.co/datasets/knkarthick/dialogsum |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dialogsum |en |12460 |269.6467095 |126.285664 |75 |191 |245 |327 |1725 |
|dim/HC3_ru |https://huggingface.co/datasets/dim/HC3_ru |Questions and answers from Reddit; there are answers generated by chatgpt and real user answers. I used only the real user answers. |d0rj/HC3-ru |https://huggingface.co/datasets/d0rj/HC3-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/HC3_ru |ru |24322 |360.5608503 |330.2285903 |15 |168 |267 |435 |10025 |
|dim/horoscopes_ru_10k |https://huggingface.co/datasets/dim/horoscopes_ru_10k |10k horoscopes, with prompts in which I ask to generate a horoscope for a specific zodiac sign. |dkagramanyan/horoscopes_ru |https://huggingface.co/datasets/dkagramanyan/horoscopes_ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/horoscopes_ru |ru |10000 |183.1443 |31.62023184 |55 |159 |187 |201 |464 |
|dim/yandex_q_200k |https://huggingface.co/datasets/dim/yandex_q_200k |200k randomly selected questions and answers from the Yandex Q website. |its5Q/yandex-q |https://huggingface.co/datasets/its5Q/yandex-q |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/yandex_q |ru |200000 |304.569005 |340.7808288 |18 |127 |202 |353 |19294 |
|dim/leetcodesolutions_en_2k |https://huggingface.co/datasets/dim/leetcodesolutions_en_2k |Solutions to leetcode problems in different languages. |TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k |https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/leetcodesolutions_en_2k |en |2048 |740.7441406 |253.2493282 |297 |565 |685 |857 |1960 |
|dim/forum_uristov_rf_prompts |https://huggingface.co/datasets/dim/forum_uristov_rf_prompts |Questions and answers from a Russian legal forum. |https://xn----dtbrojdkckkfj9k.xn--p1ai/vopros-yuristu?page=560 |https://xn----dtbrojdkckkfj9k.xn--p1ai/vopros-yuristu?page=560 |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/forum_uristov_rf |ru |1849 |321.0540833 |429.58896 |31 |134 |210 |349 |6470 |
|dim/dialogsum_ru |https://huggingface.co/datasets/dim/dialogsum_ru |Summarization of dialogues in Russian; a translation of dialogsum. |d0rj/dialogsum-ru |https://huggingface.co/datasets/d0rj/dialogsum-ru |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/dialogsum-ru |ru |12460 |364.2813804 |178.7117754 |98 |250 |329 |446 |2300 |
|dim/huggingartists_prompts |https://huggingface.co/datasets/dim/huggingartists_prompts |Prompts asking to continue a song in the style of a specific artist. This set contains almost all the artists you can find in this organization: https://huggingface.co/huggingartists |https://huggingface.co/huggingartists |https://huggingface.co/huggingartists |https://github.com/dmitrymailk/verbalist/tree/master/verbalist/datasets/huggingartists |ru |64006 |561.6732025 |586.18458 |28 |297 |453 |720 |32949 |
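The RuNNE row in the table above describes a named-entity prompt format whose target is a JSON list of entities. A minimal sketch of building such a prompt and parsing the expected model output (the helper names are mine, not from the repo):

```python
import json

def build_runne_prompt(text: str) -> str:
    # Instruction prefix described in the table:
    # "Найди все именованные сущности в данном тексте:"
    # ("Find all named entities in this text:")
    return f"Найди все именованные сущности в данном тексте: {text}"

def parse_entities(model_output: str) -> list:
    # The target is a JSON list of entities, e.g.
    # [{"name": "PERSON", "ent": "Ким Чен Нама", "pos": "0 12"}]
    return json.loads(model_output)

prompt = build_runne_prompt("Ким Чен Нама прибыл в аэропорт.")
entities = parse_entities('[{"name": "PERSON", "ent": "Ким Чен Нама", "pos": "0 12"}]')
```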
### Models
At the moment, three models are being trained: llama2_7b, llama2_13b, and llama1_30b.
Their training curves can be followed live at https://api.wandb.ai/links/dimweb/7rh0c7iz
### Training code
- [overall training procedure](https://github.com/dmitrymailk/verbalist/blob/master/verbalist/model/src/train.py)
- [building the training datasets](https://github.com/dmitrymailk/verbalist/blob/master/verbalist/model/src/dataset.py#L176)
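The dataset-building code linked above essentially serializes each multi-turn dialog into a single training string. A minimal sketch of that idea; the role tokens here are illustrative placeholders, not the actual special tokens used in the repo:

```python
def format_dialog(messages, user_token="<user>", bot_token="<bot>", eos="</s>"):
    # Serialize a list of {"role": ..., "content": ...} turns into one string.
    # The token strings are placeholder assumptions, not the repo's real tokens.
    parts = []
    for message in messages:
        token = user_token if message["role"] == "user" else bot_token
        parts.append(f"{token} {message['content']}")
    return "\n".join(parts) + eos

sample = format_dialog([
    {"role": "user", "content": "Привет!"},
    {"role": "bot", "content": "Привет, чем могу помочь?"},
])
```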
### Hardware
All training and inference is performed on an A100 GPU; on other GPUs, a substantial quality degradation was observed during inference, and this aspect requires further investigation.
- NVIDIA A100-SXM4-40GB
- NVIDIA-SMI 535.54.03
- Driver Version: 535.54.03
- CUDA Version: 12.2
- torch==2.0.1+cu118
### Future work
The simplest thing to do is to translate the existing good datasets from English to Russian with the help of GPT-4.
Harder is to collect more diverse data from various domains. I can only suggest ideas for which datasets could still be collected:
- answer keys for literature, Russian, and other school subjects
- tasks from various freelance marketplaces
- [short retellings of literary works, analyses of them, and essays on them](http://www.litra.ru/shortwork/)
- [tutorials from Digital Ocean (more than 7000)](https://www.digitalocean.com/community/tutorials)
- [tutorials from Selectel](https://selectel.ru/blog/tutorials/)
- more forums on various topics
- [free essays from ivypanda essays](https://ivypanda.com/essays/), followed by their translation into Russian
- more poems and songs
- [Russian olympiad problems](https://math.ru/problems/): these are very hard to collect, since most of them exist only as PDF or docx files. There are quite a lot of them, and they differ noticeably from English-language olympiad math, but I have no time to do this myself.
- fanfics in foreign languages
- make the current automatic prompts more diverse, with the help of chatgpt
|
derekiya/sql-create-context-llama2-78k | 2023-10-05T00:03:58.000Z | [
"region:us"
] | derekiya | null | null | null | 0 | 3 | This is dataset contain (78k samples) of the excellent [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context/viewer/default/train) and changed to [derekiya/sql-create-context-llama2-78k](https://huggingface.co/datasets/derekiya/sql-create-context-llama2-78k/viewer/default/train) dataset,
processed to match Llama 2's prompt format as described in this article.
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) |
bjoernp/evol_eval_deu | 2023-10-07T17:37:49.000Z | [
"license:apache-2.0",
"region:us"
] | bjoernp | null | null | null | 0 | 3 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: test
path:
- "difficult_questions.parquet"
- "easy_questions.parquet"
- split: validation
path:
- "difficult_questions_val.parquet"
- "easy_questions_val.parquet"
- config_name: difficult
data_files:
- split: train
path: "difficult_questions.parquet"
- split: test
path: "difficult_questions_val.parquet"
- config_name: easy
data_files:
  - split: train
    path: "easy_questions.parquet"
  - split: test
    path: "easy_questions_val.parquet"
- config_name: deutsche_geschichte
data_files: all_questions_deutsche_geschichte.parquet
- config_name: deutsche_kultur
data_files: all_questions_deutsche_kultur.parquet
- config_name: deutsche_sprache
data_files: all_questions_deutsche_sprache.parquet
- config_name: deutsche_geographie
data_files: all_questions_deutsche_geographie.parquet
- config_name: deutsche_politik
data_files: all_questions_deutsche_politik.parquet
- config_name: deutsche_wirtschaft
data_files: all_questions_deutsche_wirtschaft.parquet
- config_name: deutsche_gesellschaft
data_files: all_questions_deutsche_gesellschaft.parquet
- config_name: deutsche_küche
data_files: all_questions_deutsche_küche.parquet
- config_name: deutschland_und_die_eu
data_files: all_questions_deutschland_und_die_eu.parquet
- config_name: deutschland_im_internationalen_kontext
data_files: all_questions_deutschland_im_internationalen_kontext.parquet
- config_name: deutsche_rechtsordnung
data_files: all_questions_deutsche_rechtsordnung.parquet
- config_name: deutsche_traditionen_und_feiertage
data_files: all_questions_deutsche_traditionen_und_feiertage.parquet
- config_name: deutsche_bildung
data_files: all_questions_deutsche_bildung.parquet
- config_name: deutsche_wissenschaft_und_technologie
data_files: all_questions_deutsche_wissenschaft_und_technologie.parquet
---
|
n0w0f/qm9-csv | 2023-10-04T20:18:35.000Z | [
"license:mit",
"region:us"
] | n0w0f | null | null | null | 0 | 3 | ---
license: mit
---
|
msaligane/tinystories_phonology | 2023-10-05T02:17:01.000Z | [
"license:cdla-sharing-1.0",
"region:us"
] | msaligane | null | null | null | 0 | 3 | ---
license: cdla-sharing-1.0
---
|
hanifabdlh/quac-lamini-instruction-indo-40k-50k | 2023-10-05T06:22:37.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 4142661
num_examples: 10000
download_size: 2383297
dataset_size: 4142661
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-lamini-instruction-indo-40k-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
IBM-AI-SAP-team/llama-2-train-rfp-response-v2 | 2023-10-05T07:22:39.000Z | [
"region:us"
] | IBM-AI-SAP-team | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: messages
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 229378
num_examples: 81
download_size: 118272
dataset_size: 229378
---
# Dataset Card for "llama-2-train-rfp-response-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B | 2023-10-05T07:31:06.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 3 | ---
pretty_name: Evaluation run of beomi/KoAlpaca-KoRWKV-6B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [beomi/KoAlpaca-KoRWKV-6B](https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-10-05T07:29:47.362584](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B/blob/main/results_2023-10-05T07-29-47.362584.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24863456767245115,\n\
\ \"acc_stderr\": 0.03136051041100631,\n \"acc_norm\": 0.2497583979410816,\n\
\ \"acc_norm_stderr\": 0.03137676281359728,\n \"mc1\": 0.22399020807833536,\n\
\ \"mc1_stderr\": 0.014594964329474205,\n \"mc2\": 0.3982818485484858,\n\
\ \"mc2_stderr\": 0.01538198872167019\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.19283276450511946,\n \"acc_stderr\": 0.011529055465663334,\n\
\ \"acc_norm\": 0.23464163822525597,\n \"acc_norm_stderr\": 0.012383873560768675\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.29197371041625175,\n\
\ \"acc_stderr\": 0.004537410615572944,\n \"acc_norm\": 0.3164708225453097,\n\
\ \"acc_norm_stderr\": 0.0046414842733351076\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.32592592592592595,\n\
\ \"acc_stderr\": 0.040491220417025055,\n \"acc_norm\": 0.32592592592592595,\n\
\ \"acc_norm_stderr\": 0.040491220417025055\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.28289473684210525,\n \"acc_stderr\": 0.03665349695640767,\n\
\ \"acc_norm\": 0.28289473684210525,\n \"acc_norm_stderr\": 0.03665349695640767\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.22,\n\
\ \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \
\ \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.21509433962264152,\n \"acc_stderr\": 0.025288394502891366,\n\
\ \"acc_norm\": 0.21509433962264152,\n \"acc_norm_stderr\": 0.025288394502891366\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036845,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.29,\n\
\ \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036623,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036623\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.27167630057803466,\n\
\ \"acc_stderr\": 0.0339175032232166,\n \"acc_norm\": 0.27167630057803466,\n\
\ \"acc_norm_stderr\": 0.0339175032232166\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.041583075330832865,\n\
\ \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.041583075330832865\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.20851063829787234,\n \"acc_stderr\": 0.026556982117838742,\n\
\ \"acc_norm\": 0.20851063829787234,\n \"acc_norm_stderr\": 0.026556982117838742\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.22807017543859648,\n\
\ \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.22807017543859648,\n\
\ \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2206896551724138,\n \"acc_stderr\": 0.03455930201924812,\n\
\ \"acc_norm\": 0.2206896551724138,\n \"acc_norm_stderr\": 0.03455930201924812\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.24074074074074073,\n \"acc_stderr\": 0.022019080012217883,\n \"\
acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.022019080012217883\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.18253968253968253,\n\
\ \"acc_stderr\": 0.03455071019102146,\n \"acc_norm\": 0.18253968253968253,\n\
\ \"acc_norm_stderr\": 0.03455071019102146\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036846,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2645161290322581,\n\
\ \"acc_stderr\": 0.02509189237885928,\n \"acc_norm\": 0.2645161290322581,\n\
\ \"acc_norm_stderr\": 0.02509189237885928\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n\
\ \"acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.2909090909090909,\n \"acc_stderr\": 0.03546563019624337,\n\
\ \"acc_norm\": 0.2909090909090909,\n \"acc_norm_stderr\": 0.03546563019624337\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.2474747474747475,\n \"acc_stderr\": 0.030746300742124488,\n \"\
acc_norm\": 0.2474747474747475,\n \"acc_norm_stderr\": 0.030746300742124488\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.22797927461139897,\n \"acc_stderr\": 0.030276909945178256,\n\
\ \"acc_norm\": 0.22797927461139897,\n \"acc_norm_stderr\": 0.030276909945178256\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.21025641025641026,\n \"acc_stderr\": 0.020660597485026924,\n\
\ \"acc_norm\": 0.21025641025641026,\n \"acc_norm_stderr\": 0.020660597485026924\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.25555555555555554,\n \"acc_stderr\": 0.02659393910184407,\n \
\ \"acc_norm\": 0.25555555555555554,\n \"acc_norm_stderr\": 0.02659393910184407\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.22268907563025211,\n \"acc_stderr\": 0.027025433498882364,\n\
\ \"acc_norm\": 0.22268907563025211,\n \"acc_norm_stderr\": 0.027025433498882364\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.24503311258278146,\n \"acc_stderr\": 0.035118075718047245,\n \"\
acc_norm\": 0.24503311258278146,\n \"acc_norm_stderr\": 0.035118075718047245\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.22385321100917432,\n \"acc_stderr\": 0.01787121776779022,\n \"\
acc_norm\": 0.22385321100917432,\n \"acc_norm_stderr\": 0.01787121776779022\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.1712962962962963,\n \"acc_stderr\": 0.025695341643824674,\n \"\
acc_norm\": 0.1712962962962963,\n \"acc_norm_stderr\": 0.025695341643824674\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.2696078431372549,\n \"acc_stderr\": 0.03114557065948678,\n \"\
acc_norm\": 0.2696078431372549,\n \"acc_norm_stderr\": 0.03114557065948678\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.2742616033755274,\n \"acc_stderr\": 0.029041333510598025,\n \
\ \"acc_norm\": 0.2742616033755274,\n \"acc_norm_stderr\": 0.029041333510598025\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.21524663677130046,\n\
\ \"acc_stderr\": 0.027584066602208263,\n \"acc_norm\": 0.21524663677130046,\n\
\ \"acc_norm_stderr\": 0.027584066602208263\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.25190839694656486,\n \"acc_stderr\": 0.03807387116306086,\n\
\ \"acc_norm\": 0.25190839694656486,\n \"acc_norm_stderr\": 0.03807387116306086\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.33884297520661155,\n \"acc_stderr\": 0.043207678075366705,\n \"\
acc_norm\": 0.33884297520661155,\n \"acc_norm_stderr\": 0.043207678075366705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n\
\ \"acc_stderr\": 0.042365112580946315,\n \"acc_norm\": 0.25925925925925924,\n\
\ \"acc_norm_stderr\": 0.042365112580946315\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.294478527607362,\n \"acc_stderr\": 0.03581165790474082,\n\
\ \"acc_norm\": 0.294478527607362,\n \"acc_norm_stderr\": 0.03581165790474082\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.1875,\n\
\ \"acc_stderr\": 0.0370468111477387,\n \"acc_norm\": 0.1875,\n \
\ \"acc_norm_stderr\": 0.0370468111477387\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.22330097087378642,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.22330097087378642,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.029343114798094462,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.029343114798094462\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26947637292464877,\n\
\ \"acc_stderr\": 0.01586624307321506,\n \"acc_norm\": 0.26947637292464877,\n\
\ \"acc_norm_stderr\": 0.01586624307321506\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.28034682080924855,\n \"acc_stderr\": 0.024182427496577612,\n\
\ \"acc_norm\": 0.28034682080924855,\n \"acc_norm_stderr\": 0.024182427496577612\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n\
\ \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n\
\ \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.26143790849673204,\n \"acc_stderr\": 0.025160998214292456,\n\
\ \"acc_norm\": 0.26143790849673204,\n \"acc_norm_stderr\": 0.025160998214292456\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2990353697749196,\n\
\ \"acc_stderr\": 0.026003301117885135,\n \"acc_norm\": 0.2990353697749196,\n\
\ \"acc_norm_stderr\": 0.026003301117885135\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2932098765432099,\n \"acc_stderr\": 0.02532988817190092,\n\
\ \"acc_norm\": 0.2932098765432099,\n \"acc_norm_stderr\": 0.02532988817190092\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2801418439716312,\n \"acc_stderr\": 0.026789172351140228,\n \
\ \"acc_norm\": 0.2801418439716312,\n \"acc_norm_stderr\": 0.026789172351140228\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.27183833116036504,\n\
\ \"acc_stderr\": 0.011363135278651411,\n \"acc_norm\": 0.27183833116036504,\n\
\ \"acc_norm_stderr\": 0.011363135278651411\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.15808823529411764,\n \"acc_stderr\": 0.022161462608068512,\n\
\ \"acc_norm\": 0.15808823529411764,\n \"acc_norm_stderr\": 0.022161462608068512\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.3006535947712418,\n \"acc_stderr\": 0.018550634502952957,\n \
\ \"acc_norm\": 0.3006535947712418,\n \"acc_norm_stderr\": 0.018550634502952957\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n\
\ \"acc_stderr\": 0.038950910157241364,\n \"acc_norm\": 0.20909090909090908,\n\
\ \"acc_norm_stderr\": 0.038950910157241364\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.23265306122448978,\n \"acc_stderr\": 0.02704925791589618,\n\
\ \"acc_norm\": 0.23265306122448978,\n \"acc_norm_stderr\": 0.02704925791589618\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.21393034825870647,\n\
\ \"acc_stderr\": 0.028996909693328927,\n \"acc_norm\": 0.21393034825870647,\n\
\ \"acc_norm_stderr\": 0.028996909693328927\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2289156626506024,\n\
\ \"acc_stderr\": 0.03270745277352477,\n \"acc_norm\": 0.2289156626506024,\n\
\ \"acc_norm_stderr\": 0.03270745277352477\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.30994152046783624,\n \"acc_stderr\": 0.035469769593931624,\n\
\ \"acc_norm\": 0.30994152046783624,\n \"acc_norm_stderr\": 0.035469769593931624\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22399020807833536,\n\
\ \"mc1_stderr\": 0.014594964329474205,\n \"mc2\": 0.3982818485484858,\n\
\ \"mc2_stderr\": 0.01538198872167019\n }\n}\n```"
repo_url: https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|arc:challenge|25_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hellaswag|10_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-05T07-29-47.362584.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-05T07-29-47.362584.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-05T07-29-47.362584.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-05T07-29-47.362584.parquet'
- config_name: results
data_files:
- split: 2023_10_05T07_29_47.362584
path:
- results_2023-10-05T07-29-47.362584.parquet
- split: latest
path:
- results_2023-10-05T07-29-47.362584.parquet
---
# Dataset Card for Evaluation run of beomi/KoAlpaca-KoRWKV-6B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [beomi/KoAlpaca-KoRWKV-6B](https://huggingface.co/beomi/KoAlpaca-KoRWKV-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B",
"harness_truthfulqa_mc_0",
split="train")
```
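All the task configs listed in the YAML header above follow the same naming pattern, so the config string can be built programmatically before calling `load_dataset`. The sketch below is a hypothetical convenience helper (not part of the `datasets` API); `details_config` and its parameters are illustrative names:

```python
# Hypothetical helper mirroring the config_name pattern in the YAML header
# above, e.g. "harness_hendrycksTest_world_religions_5".
def details_config(task: str, n_shot: int) -> str:
    """Build the config name for a given hendrycksTest task and shot count."""
    return f"harness_hendrycksTest_{task}_{n_shot}"

# Example: config name for the 5-shot world_religions task.
print(details_config("world_religions", 5))
```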
## Latest results
These are the [latest results from run 2023-10-05T07:29:47.362584](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__KoAlpaca-KoRWKV-6B/blob/main/results_2023-10-05T07-29-47.362584.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.24863456767245115,
"acc_stderr": 0.03136051041100631,
"acc_norm": 0.2497583979410816,
"acc_norm_stderr": 0.03137676281359728,
"mc1": 0.22399020807833536,
"mc1_stderr": 0.014594964329474205,
"mc2": 0.3982818485484858,
"mc2_stderr": 0.01538198872167019
},
"harness|arc:challenge|25": {
"acc": 0.19283276450511946,
"acc_stderr": 0.011529055465663334,
"acc_norm": 0.23464163822525597,
"acc_norm_stderr": 0.012383873560768675
},
"harness|hellaswag|10": {
"acc": 0.29197371041625175,
"acc_stderr": 0.004537410615572944,
"acc_norm": 0.3164708225453097,
"acc_norm_stderr": 0.0046414842733351076
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.040491220417025055,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.040491220417025055
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.28289473684210525,
"acc_stderr": 0.03665349695640767,
"acc_norm": 0.28289473684210525,
"acc_norm_stderr": 0.03665349695640767
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21509433962264152,
"acc_stderr": 0.025288394502891366,
"acc_norm": 0.21509433962264152,
"acc_norm_stderr": 0.025288394502891366
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.25,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036623,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036623
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.27167630057803466,
"acc_stderr": 0.0339175032232166,
"acc_norm": 0.27167630057803466,
"acc_norm_stderr": 0.0339175032232166
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.041583075330832865,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.041583075330832865
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.20851063829787234,
"acc_stderr": 0.026556982117838742,
"acc_norm": 0.20851063829787234,
"acc_norm_stderr": 0.026556982117838742
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.22807017543859648,
"acc_stderr": 0.03947152782669415,
"acc_norm": 0.22807017543859648,
"acc_norm_stderr": 0.03947152782669415
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2206896551724138,
"acc_stderr": 0.03455930201924812,
"acc_norm": 0.2206896551724138,
"acc_norm_stderr": 0.03455930201924812
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.022019080012217883,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.022019080012217883
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.18253968253968253,
"acc_stderr": 0.03455071019102146,
"acc_norm": 0.18253968253968253,
"acc_norm_stderr": 0.03455071019102146
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2645161290322581,
"acc_stderr": 0.02509189237885928,
"acc_norm": 0.2645161290322581,
"acc_norm_stderr": 0.02509189237885928
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2955665024630542,
"acc_stderr": 0.032104944337514575,
"acc_norm": 0.2955665024630542,
"acc_norm_stderr": 0.032104944337514575
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.2909090909090909,
"acc_stderr": 0.03546563019624337,
"acc_norm": 0.2909090909090909,
"acc_norm_stderr": 0.03546563019624337
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.2474747474747475,
"acc_stderr": 0.030746300742124488,
"acc_norm": 0.2474747474747475,
"acc_norm_stderr": 0.030746300742124488
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.22797927461139897,
"acc_stderr": 0.030276909945178256,
"acc_norm": 0.22797927461139897,
"acc_norm_stderr": 0.030276909945178256
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.21025641025641026,
"acc_stderr": 0.020660597485026924,
"acc_norm": 0.21025641025641026,
"acc_norm_stderr": 0.020660597485026924
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25555555555555554,
"acc_stderr": 0.02659393910184407,
"acc_norm": 0.25555555555555554,
"acc_norm_stderr": 0.02659393910184407
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.22268907563025211,
"acc_stderr": 0.027025433498882364,
"acc_norm": 0.22268907563025211,
"acc_norm_stderr": 0.027025433498882364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.24503311258278146,
"acc_stderr": 0.035118075718047245,
"acc_norm": 0.24503311258278146,
"acc_norm_stderr": 0.035118075718047245
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.22385321100917432,
"acc_stderr": 0.01787121776779022,
"acc_norm": 0.22385321100917432,
"acc_norm_stderr": 0.01787121776779022
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1712962962962963,
"acc_stderr": 0.025695341643824674,
"acc_norm": 0.1712962962962963,
"acc_norm_stderr": 0.025695341643824674
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2696078431372549,
"acc_stderr": 0.03114557065948678,
"acc_norm": 0.2696078431372549,
"acc_norm_stderr": 0.03114557065948678
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2742616033755274,
"acc_stderr": 0.029041333510598025,
"acc_norm": 0.2742616033755274,
"acc_norm_stderr": 0.029041333510598025
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.21524663677130046,
"acc_stderr": 0.027584066602208263,
"acc_norm": 0.21524663677130046,
"acc_norm_stderr": 0.027584066602208263
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.25190839694656486,
"acc_stderr": 0.03807387116306086,
"acc_norm": 0.25190839694656486,
"acc_norm_stderr": 0.03807387116306086
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.33884297520661155,
"acc_stderr": 0.043207678075366705,
"acc_norm": 0.33884297520661155,
"acc_norm_stderr": 0.043207678075366705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946315,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946315
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.294478527607362,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.294478527607362,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.1875,
"acc_stderr": 0.0370468111477387,
"acc_norm": 0.1875,
"acc_norm_stderr": 0.0370468111477387
},
"harness|hendrycksTest-management|5": {
"acc": 0.22330097087378642,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.22330097087378642,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.029343114798094462,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.029343114798094462
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.26947637292464877,
"acc_stderr": 0.01586624307321506,
"acc_norm": 0.26947637292464877,
"acc_norm_stderr": 0.01586624307321506
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.28034682080924855,
"acc_stderr": 0.024182427496577612,
"acc_norm": 0.28034682080924855,
"acc_norm_stderr": 0.024182427496577612
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.26143790849673204,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.26143790849673204,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2990353697749196,
"acc_stderr": 0.026003301117885135,
"acc_norm": 0.2990353697749196,
"acc_norm_stderr": 0.026003301117885135
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2932098765432099,
"acc_stderr": 0.02532988817190092,
"acc_norm": 0.2932098765432099,
"acc_norm_stderr": 0.02532988817190092
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2801418439716312,
"acc_stderr": 0.026789172351140228,
"acc_norm": 0.2801418439716312,
"acc_norm_stderr": 0.026789172351140228
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.27183833116036504,
"acc_stderr": 0.011363135278651411,
"acc_norm": 0.27183833116036504,
"acc_norm_stderr": 0.011363135278651411
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.15808823529411764,
"acc_stderr": 0.022161462608068512,
"acc_norm": 0.15808823529411764,
"acc_norm_stderr": 0.022161462608068512
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.3006535947712418,
"acc_stderr": 0.018550634502952957,
"acc_norm": 0.3006535947712418,
"acc_norm_stderr": 0.018550634502952957
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.20909090909090908,
"acc_stderr": 0.038950910157241364,
"acc_norm": 0.20909090909090908,
"acc_norm_stderr": 0.038950910157241364
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.23265306122448978,
"acc_stderr": 0.02704925791589618,
"acc_norm": 0.23265306122448978,
"acc_norm_stderr": 0.02704925791589618
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.21393034825870647,
"acc_stderr": 0.028996909693328927,
"acc_norm": 0.21393034825870647,
"acc_norm_stderr": 0.028996909693328927
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2289156626506024,
"acc_stderr": 0.03270745277352477,
"acc_norm": 0.2289156626506024,
"acc_norm_stderr": 0.03270745277352477
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.30994152046783624,
"acc_stderr": 0.035469769593931624,
"acc_norm": 0.30994152046783624,
"acc_norm_stderr": 0.035469769593931624
},
"harness|truthfulqa:mc|0": {
"mc1": 0.22399020807833536,
"mc1_stderr": 0.014594964329474205,
"mc2": 0.3982818485484858,
"mc2_stderr": 0.01538198872167019
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
SSEONG/girls-groups | 2023-10-06T02:31:29.000Z | [
"task_categories:text-to-image",
"region:us"
] | SSEONG | This new dataset is designed to learn how to make custom dataset. | @InProceedings{huggingface:dataset,
title = {K-pop girls groups dataset},
author={smwoo, Inc.
},
year={2023}
} | null | 0 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 8369374
num_examples: 10
- name: validation
num_bytes: 8369374
num_examples: 10
download_size: 8353015
dataset_size: 16738748
task_categories:
- text-to-image
--- |
loubnabnl/test_kaggle | 2023-10-05T12:59:35.000Z | [
"region:us"
] | loubnabnl | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: file_id
dtype: string
- name: content
dtype: string
- name: local_path
dtype: string
- name: kaggle_dataset_name
dtype: string
- name: kaggle_dataset_owner
dtype: string
- name: kversion
dtype: string
- name: kversion_datasetsources
dtype: string
- name: dataset_versions
dtype: string
- name: datasets
dtype: string
- name: users
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 34997756
num_examples: 862
download_size: 14442045
dataset_size: 34997756
---
# Dataset Card for "test_kaggle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingSara/medqa | 2023-10-05T14:12:30.000Z | [
"region:us"
] | HuggingSara | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: meta_info
dtype: string
- name: answer_idx
dtype: string
splits:
- name: train
num_bytes: 9470204
num_examples: 10178
- name: validation
num_bytes: 1184039
num_examples: 1272
- name: test
num_bytes: 1211382
num_examples: 1273
download_size: 6952745
dataset_size: 11865625
---
# Dataset Card for "Med_QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kye/all-lucidrain-code-python-tokenized-65536-1 | 2023-10-05T16:12:19.000Z | [
"license:mit",
"region:us"
] | kye | null | null | null | 1 | 3 | ---
license: mit
---
|
octa-cba/codigo_procesal_laboral | 2023-10-05T19:26:34.000Z | [
"license:unknown",
"region:us"
] | octa-cba | null | null | null | 0 | 3 | ---
license: unknown
---
|
siddanshchawla/answer_exp_data | 2023-10-08T22:15:46.000Z | [
"region:us"
] | siddanshchawla | null | null | null | 0 | 3 | Entry not found |
kewu93/three_styles_10rand | 2023-10-05T23:47:22.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 321384.41333333333
num_examples: 10
- name: val
num_bytes: 2935082.1333333333
num_examples: 100
download_size: 3157886
dataset_size: 3256466.546666667
---
# Dataset Card for "three_styles_10rand"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mharvill23/yugioh-crystal-beast-ready | 2023-10-06T01:09:51.000Z | [
"region:us"
] | mharvill23 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 845968.0
num_examples: 15
download_size: 847374
dataset_size: 845968.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "yugioh-crystal-beast-ready"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
asadkhan0xaf349/dataset | 2023-10-06T07:32:35.000Z | [
"license:mit",
"region:us"
] | asadkhan0xaf349 | null | null | null | 0 | 3 | ---
license: mit
---
|
minh21/COVID-QA-sentence-transformer-biencoder-data-65_25_10 | 2023-10-06T07:47:54.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
- name: document_id
dtype: int64
splits:
- name: train
num_bytes: 4863851
num_examples: 2378
- name: test
num_bytes: 510126
num_examples: 269
download_size: 581674
dataset_size: 5373977
---
# Dataset Card for "COVID-QA-sentence-transformer-biencoder-data-65_25_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SlothBot/common_voice_preprocessed_demo | 2023-10-06T10:26:12.000Z | [
"region:us"
] | SlothBot | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence:
sequence: float32
- name: labels
sequence: int64
- name: input_length
dtype: float64
splits:
- name: train
num_bytes: 4831492728
num_examples: 5030
- name: test
num_bytes: 145039516
num_examples: 151
download_size: 982830599
dataset_size: 4976532244
---
# Dataset Card for "common_voice_preprocessed_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/massive_5_lang_DA2_tokenized | 2023-10-06T10:38:23.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 424287645
num_examples: 552890
download_size: 127805722
dataset_size: 424287645
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_5_lang_DA2_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nuph/LDjnr-merged-formatted | 2023-10-06T16:06:06.000Z | [
"region:us"
] | nuph | null | null | null | 0 | 3 | Entry not found |
ContextualAI/nq_open_source | 2023-10-06T22:39:14.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 3 | Entry not found |
ContextualAI/mmlu | 2023-10-07T00:33:19.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: choices
sequence: string
- name: query
dtype: string
- name: responses
sequence: string
- name: gold_generation
dtype: string
- name: configuration
dtype: string
splits:
- name: train
num_bytes: 9417355319
num_examples: 5690994
- name: dev
num_bytes: 828374
num_examples: 1531
- name: test
num_bytes: 7562338
num_examples: 14042
download_size: 2724102502
dataset_size: 9425746031
---
# Dataset Card for "mmlu"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ContextualAI/trivia_qa | 2023-10-07T00:42:28.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: target
dtype: string
- name: query
dtype: string
- name: gold_generation
sequence: string
splits:
- name: train
num_bytes: 29497317
num_examples: 78785
- name: dev
num_bytes: 3349643
num_examples: 8837
- name: test
num_bytes: 4316214
num_examples: 11313
download_size: 22579595
dataset_size: 37163174
---
# Dataset Card for "trivia_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Fraol/DedupedRefDatasetWMetricF | 2023-10-07T01:04:15.000Z | [
"region:us"
] | Fraol | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: source
dtype: string
- name: path_name
dtype: string
- name: file_name
dtype: string
- name: ref_type
dtype: string
- name: ref_status
dtype: string
- name: hash
dtype: string
- name: class_name
dtype: string
- name: method_name
dtype: string
- name: row_number
dtype: int64
- name: cbo
dtype: float64
- name: wmc
dtype: float64
- name: lcom*
dtype: float64
- name: loc
dtype: float64
splits:
- name: train
num_bytes: 2308835214
num_examples: 385811
download_size: 482442415
dataset_size: 2308835214
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "DedupedRefDatasetWMetricF"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RtwC/people | 2023-10-07T02:58:03.000Z | [
"region:us"
] | RtwC | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PE
'2': I-PE
'3': B-OR
'4': I-OR
'5': B-LO
'6': I-LO
splits:
- name: train
num_bytes: 14972408
num_examples: 20865
- name: validation
num_bytes: 1676725
num_examples: 2319
- name: test
num_bytes: 3346959
num_examples: 4637
download_size: 2731946
dataset_size: 19996092
---
# Dataset Card for "people"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/massive_val_DA3_tokenized | 2023-10-07T06:45:08.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 16518310
num_examples: 24160
download_size: 3772737
dataset_size: 16518310
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_val_DA3_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RikoteMaster/llama2_4_translation | 2023-10-07T08:40:43.000Z | [
"region:us"
] | RikoteMaster | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: Spanish
dtype: string
- name: English
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27623544
num_examples: 118964
download_size: 11129552
dataset_size: 27623544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2_4_translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tychema/autotrain-data-ceconomysumdataset | 2023-10-07T09:18:23.000Z | [
"task_categories:summarization",
"region:us"
] | Tychema | null | null | null | 0 | 3 | ---
task_categories:
- summarization
---
# AutoTrain Dataset for project: ceconomysumdataset
## Dataset Description
This dataset has been automatically processed by AutoTrain for project ceconomysumdataset.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": "\u9ed8\u6c99\u4e1c\u6536\u8d2d\u5148\u7075\u8446\u96c5\u540e\u5c06\u91cd\u7ec4\u4e3a5\u4e2a\u90e8\u95e8",
"text": "\u65b0\u6d6a\u8d22\u7ecf\u8baf \u5317\u4eac\u65f6\u95f4\u5468\u4e00\u665a\u95f4\u6d88\u606f\uff0c\u9ed8\u6c99\u4e1c\u516c\u53f8(MRK)\u603b\u88c1\u517c\u9996\u5e2d\u6267\u884c\u5b98\u7406\u67e5\u5fb7\u00b7\u514b\u62c9\u514b(Richard T. Clark)\u8868\u793a\uff0c\u5728\u5b8c\u6210\u5bf9\u7ade\u4e89\u5bf9\u624b\u5148\u7075\u8446\u96c5\u516c\u53f8(SGP)411\u4ebf\u7f8e\u5143\u7684\u6536\u8d2d\u540e\uff0c\u8be5\u516c\u53f8\u5c06\u91cd\u7ec4\u4e3a5\u4e2a\u90e8\u95e8\u3002\u514b\u62c9\u514b\u5c06\u7ee7\u7eed\u62c5\u4efb\u65b0\u516c\u53f8\u7684CEO\u3002\u6b64\u9879\u4ea4\u6613\u9884\u8ba1\u5c06\u4e8e\u7b2c\u56db\u5b63\u5ea6\u5b8c\u6210\u3002\u65b0\u516c\u53f8\u5c06\u62e5\u67095\u4e2a\u4e3b\u8981\u90e8\u95e8\uff0c\u5305\u62ec\u5168\u7403\u4eba\u7c7b\u5065\u5eb7(Global Human Health)\u3001\u52a8\u7269\u5065\u5eb7(Animal Health)\u3001\u6d88\u8d39\u8005\u5065\u5eb7\u62a4\u7406(Consumer Health Care)\u3001\u9ed8\u6c99\u4e1c\u7814\u7a76\u5b9e\u9a8c\u5ba4(Merck Research Laboratories)\uff0c\u4ee5\u53ca\u9ed8\u6c99\u4e1c\u5236\u9020\u90e8\u95e8(Merck Manufacturing)\u3002\u6b64\u5916\uff0c\u8fd9\u5bb6\u603b\u90e8\u4f4d\u4e8e\u65b0\u6cfd\u897f\u5ddeWhitehouse Station\u7684\u516c\u53f8\u8868\u793a\uff0c\u5148\u7075\u8446\u96c5\u73b0\u4efb\u9886\u5bfc\u5c42\u5927\u7ea640%\u7684\u6210\u5458\u5c06\u6210\u4e3a\u65b0\u516c\u53f8\u7ba1\u7406\u5c42\u7684\u4e00\u90e8\u5206\uff0c\u800c\u8be5\u516c\u53f8\u5458\u5de5\u4e2d\u7684\u7edd\u5927\u90e8\u5206\u4e5f\u5c06\u7559\u5728\u5408\u5e76\u540e\u7684\u516c\u53f8\u3002\u5168\u7403\u4eba\u7c7b\u5065\u5eb7\u90e8\u95e8\u5c06\u7531\u80af\u5c3c\u65af\u00b7\u5f17\u96f7\u6cfd(Kenneth C. Frazier)\u9886\u5bfc\uff0c\u540e\u8005\u73b0\u4efb\u9ed8\u6c99\u4e1c\u6267\u884c\u526f\u603b\u88c1\u517c\u5168\u7403\u4eba\u7c7b\u5065\u5eb7\u90e8\u95e8\u603b\u88c1\u3002\u5148\u7075\u8446\u96c5\u73b0\u4efb\u9ad8\u7ea7\u526f\u603b\u88c1\u517cIntervet Schering-Plough Animal Health\u90e8\u95e8\u603b\u88c1\u52b3\u5c14\u00b7\u53ef\u6c57(Raul E. 
Kohan)\u5c06\u9886\u5bfc\u65b0\u7684\u9ed8\u6c99\u4e1c\u52a8\u7269\u5065\u5eb7\u90e8\u95e8\u3002\u6d88\u8d39\u8005\u4fdd\u5065\u90e8\u95e8\u5c06\u6682\u65f6\u7531\u65af\u5766\u5229\u00b7\u5df4\u8c22(Stanley F. Barshay)\u9886\u5bfc\uff0c\u540e\u8005\u73b0\u4efb\u5148\u7075\u8446\u96c5\u6d88\u8d39\u8005\u5065\u5eb7\u90e8\u95e8\u8463\u4e8b\u957f\u3002\u5408\u5e76\u540e\u7684\u516c\u53f8\u5c06\u4e3a\u8be5\u90e8\u95e8\u5bfb\u627e\u4e00\u4f4d\u6b63\u5f0f\u9886\u5bfc\u4eba\u3002\u9ed8\u6c99\u4e1c\u7814\u7a76\u5b9e\u9a8c\u5ba4\u90e8\u95e8\u4ecd\u5c06\u7531\u73b0\u4efb\u603b\u88c1\u5f7c\u5f97\u00b7\u91d1(Peter S. Kim)\u9886\u5bfc\u3002\u9ed8\u6c99\u4e1c\u751f\u4ea7\u90e8\u95e8\u5c06\u7531\u5a01\u5229\u00b7\u8fea\u65af(Willie A. Deese)\u9886\u5bfc\uff0c\u540e\u8005\u73b0\u4efb\u9ed8\u6c99\u4e1c\u751f\u4ea7\u4e1a\u52a1\u603b\u88c1\u3002"
},
{
"target": "\u5927\u76d8\u4e94\u8fde\u9633\u5251\u63072900 \u4e0b\u5468\u8fd0\u884c\u8def\u7ebf\u56fe\u5206\u6790",
"text": "== \u4eca\u65e5\u76d8\u9762\uff1a\u5927\u76d8\u559c\u89c1\u4e94\u8fde\u9633 \u6caa\u6307\u5251\u63072900\u70b9 ==\u5468\u4e94A\u80a1\u7ee7\u7eed\u9707\u8361\u4e0a\u884c\uff0c\u6caa\u6307\u54112900\u70b9\u8fdb\u519b\u3002\u53d7\u9996\u53eaIPO\u843d\u5730\u3001\u56fd\u9645\u6cb9\u4ef7\u7ee7\u7eed\u4e0a\u626c\u3001\u7f8e\u80a1\u9053\u6307\u5fae\u5e45\u6536\u9ad8\u7b49\u56e0\u7d20\u5f71\u54cd\uff0c\u5927\u76d8\u518d\u63a5\u518d\u5389\u53c8\u521b\u53cd\u5f39\u65b0\u9ad8\u3002\u4f46\u80a1\u6307\u4e0a\u884c\u52bf\u5934\u540c\u6bd4\u6628\u65e5\u6709\u6240\u6536\u655b\uff0c\u76d8\u4e2d\u6ce2\u52a8\u52a0\u5267\uff0c\u4e2a\u80a1\u4f9d\u65e7\u662f\u4e24\u6781\u5206\u5316\uff0c\u91d1\u878d\u548c\u751f\u7269\u5236\u836f\u677f\u5757\u7ee7\u7eed\u5145\u5f53\u5e02\u573a\u7684\u9886\u5934\u7f8a\u3002\u800c\u8d44\u6e90\u677f\u5757\u6210\u4e3a\u505a\u7a7a\u7684\u4e3b\u8981\u529b\u91cf\u3002\u622a\u81f3\u6536\u76d8\uff0c\u4e0a\u8bc1\u7efc\u6307\u62a52880.49\u70b9\uff0c\u4e0a\u6da80.93%\uff0c\u76d8\u4e2d\u521b\u51fa2886.50\u70b9\u65b0\u9ad8\uff0c\u6210\u4ea41535\u4ebf\uff1b\u6df1\u8bc1\u6210\u6307\u6536\u5e02\u62a511242.3\u70b9\uff0c\u4e0a\u6da80.81%\uff0c\u6210\u4ea4792.8\u4ebf\u3002\u4e24\u5e02\u5171\u6210\u4ea42327.8\u4ebf\u3002\u540c\u6bd4\u653e\u5927\u7ee7\u7eed\u653e\u5927\u3002== \u76d8\u9762\u5206\u6790\uff1a\u5e02\u573a\u70ed\u70b9\u7ee7\u7eed\u6d3b\u8dc3 \u8d44\u6e90\u7c7b\u677f\u5757\u518d\u6b21\u5012\u6208 
==\u6743\u91cd\u80a1\u4f9d\u7136\u8f83\u4e3a\u5f3a\u52bf\u76d8\u53e3\u663e\u793a\uff0c\u4e2a\u80a1\u5206\u5316\u7684\u8d8b\u52bf\u6ca1\u6709\u6539\u53d8\uff0c\u6743\u91cd\u80a1\u4f9d\u7136\u8f83\u4e3a\u5f3a\u52bf\u3002\u4e07\u79d1\u5927\u6da83.94%\uff0c\u4e2d\u4fe1\u8bc1\u5238\u3001\u6d66\u53d1\u94f6\u884c\u3001\u4e2d\u56fd\u5e73\u5b89\u3001\u4ea4\u901a\u94f6\u884c\u6da8\u5e45\u57281%\u4ee5\u4e0a\uff0c\u77f3\u5316\u53cc\u96c4\u5206\u9053\u626c\u9573\uff0c\u4e2d\u56fd\u77f3\u6cb9\u5fae\u5e45\u6536\u6da80.57%\uff0c\u800c\u4e2d\u56fd\u77f3\u5316\u5219\u4e0b\u8dcc0.48%\u3002\u4e2d\u56fd\u5357\u8f66\u5de8\u91cf\u5c01\u6b7b\u6da8\u505c \u521b\u51fa\u65b0\u9ad8\u4e2d\u56fd\u5357\u8f66\u51fa\u73b0\u660e\u663e\u5f02\u52a8\uff0c\u65e9\u76d8\u8be5\u80a1\u5de8\u91cf\u5c01\u6b7b\u6da8\u505c\uff0c\u521b\u51fa\u4e0a\u5e02\u4ee5\u6765\u7684\u65b0\u9ad8\uff1b\u6d77\u738b\u751f\u7269\u4e34\u8fd1\u6536\u76d8\u65f6\u75af\u72c2\u6253\u5f00\u6da8\u505c\uff0c\u6210\u4ea4\u91cf\u660e\u663e\u653e\u5927\u3002\u5728\u6362\u624b\u7387\u65b9\u9762\uff0c\u7d2b\u946b\u836f\u4e1a\u3001\u6069\u534e\u836f\u4e1a\u3001\u666e\u6d1b\u80a1\u4efd\u7b4990\u591a\u53ea\u4e2a\u80a1\u6362\u624b\u7387\u8d85\u8fc710%\uff0c\u540c\u6bd4\u6709\u6240\u589e\u52a0\u3002\u6574\u4f53\u6765\u770b\uff0c\u4e24\u5e02\u8fd1\u516d\u6210\u4e2a\u80a1\u4e0a\u6da8\uff0c\u6da8\u505c\u7684\u975eST\u4e2a\u80a119\u53ea\uff0c\u6da8\u5e45\u8d85\u8fc75%\u7684\u4e2a\u80a1\u8fd160\u53ea\uff0c\u8dcc\u5e45\u8d85\u8fc75%\u7684\u4e2a\u80a19\u53ea\uff0c\u672a\u89c1\u8dcc\u505c\u7684\u975eST\u4e2a\u80a1\u3002\u91d1\u878d\u548c\u516c\u8def\u6865\u6881\u7ee7\u7eed\u6da8\u5e45\u5c45\u524d\u4eca\u65e5\u5e02\u573a\u70ed\u70b9\u4ecd\u65e7\u8f83\u4e3a\u6d3b\u8dc3\uff0c\u91d1\u878d\u548c\u516c\u8def\u6865\u6881\u677f\u5757\u7ee7\u7eed\u4f4d\u5217\u6da8\u5e45\u699c\u524d\u5217\u3002\u91d1\u878d\u677f\u5757\u53d7\u5238\u5546\u548c\u4fdd\u9669\u80a1\u7684\u5e26\u52a8\u6da8\u52bf\u559c\u4eba\uff0c\u56fd\u91d1\u8bc1\u5238\u6da8\u505c\uff0c\u4e1c\u5317\u8bc1
\u5238\u3001\u897f\u5357\u8bc1\u5238\u3001\u5b8f\u6e90\u8bc1\u5238\u6da8\u5e45\u8d85\u8fc73%\uff0c\u4e2d\u56fd\u4eba\u5bff\u3001\u4e2d\u56fd\u5e73\u5b89\u3001\u4e2d\u56fd\u592a\u4fdd\u5927\u6da8\u8d85\u8fc71%\uff0c\u4e2d\u56fd\u94f6\u884c\u4e5f\u5927\u6da84.38%\uff0c\u5176\u4f59\u94f6\u884c\u80a1\u6da8\u5e45\u8d8b\u7f13\u3002\u516c\u8def\u6865\u6881\u677f\u5757\u5348\u540e\u5d1b\u8d77\uff0c\u5c71\u4e1c\u9ad8\u901f\u51b2\u51fb\u6da8\u505c\uff0c\u6df1\u9ad8\u901f\u3001\u5b81\u6caa\u9ad8\u901f\u3001\u8d63\u7ca4\u9ad8\u901f\u6da8\u5e45\u57283%\u4ee5\u4e0a\uff1bIPO\u9996\u5355\u82b1\u843d\u533b\u836f\u677f\u5757\uff0c\u751f\u7269\u533b\u836f\u677f\u5757\u53d7\u6b64\u63d0\u632f\u7ee7\u7eed\u6d3b\u8dc3\uff0c\u4f46\u4e34\u8fd1\u6536\u76d8\u6da8\u5e45\u6709\u6240\u6536\u7a84\uff0c\u7f8e\u7f57\u836f\u4e1a\u3001\u5929\u76ee\u836f\u4e1a\u3001\u5929\u575b\u751f\u7269\u7b4910\u4f59\u53ea\u4e2a\u80a1\u5c01\u6b7b\u6da8\u505c\u3002\u6b64\u5916\uff0c\u5730\u4ea7\u3001\u519c\u6797\u3001\u4ea4\u901a\u8fd0\u8f93\u7b49\u677f\u5757\u8868\u73b0\u4e5f\u76f8\u5bf9\u6d3b\u8dc3\u3002\u8d44\u6e90\u7c7b\u677f\u5757\u518d\u6b21\u5012\u6208\u8d44\u6e90\u7c7b\u677f\u5757\u518d\u6b21\u6389\u5934\u4e0b\u632b\u3002\u5e02\u573a\u4f20\u8a00\u7ba1\u7406\u5c42\u5c06\u4ecb\u5165\u7164\u70ad\u8c08\u5224\uff0c\u7535\u7164\u4ef7\u683c\u4ec5\u4ec5\u4e0a\u6da84%\uff0c\u6b64\u6d88\u606f\u538b\u5236\u7164\u70ad\u677f\u5757\u8d70\u4f4e\uff0c\u5e73\u5e84\u80fd\u6e90\u5927\u8dcc\u8d85\u8fc77%\uff0c\u9f99\u5934\u897f\u5c71\u7164\u7535\u3001\u4e2d\u7164\u80fd\u6e90\u4e5f\u9006\u52bf\u6536\u8dcc\uff0c\u4e2d\u56fd\u795e\u534e\u5c0f\u5e45\u6536\u7ea2\uff1b\u77f3\u6cb9\u677f\u5757\u8d70\u52bf\u4e5f\u5782\u5934\u4e27\u6c14\uff0c\u6dee\u6cb9\u80a1\u4efd\u3001\u9c81\u6da6\u80a1\u4efd\u3001\u8302\u534e\u5b9e\u4e1a\u8dcc\u5e45\u5c45\u524d\uff0c\u677f\u5757\u5185\u53ea\u6709\u4e2d\u56fd\u77f3\u6cb9\u5c0f\u5e45\u6536\u7ea2\uff1b\u6709\u8272\u677f\u5757\u5185\u4e2a\u80a1\u4e5f\u6089\u6570\u4e0b\u8dcc\uff0c\u6d77\u4eae\u80a1\u4efd
\u3001\u4e2d\u91d1\u5cad\u5357\u3001\u897f\u90e8\u8d44\u6e90\u8dcc\u5e45\u5c45\u524d\uff0c\u4e2d\u91d1\u9ec4\u91d1\u3001\u5c71\u4e1c\u9ec4\u91d1\u7b49\u8dcc\u5e45\u8d85\u8fc71%\u3002(\u4e2d\u8bc1\u6295\u8d44 \u5f20\u7d22\u6e05)== \u540e\u5e02\u5206\u6790\uff1a\u4e0b\u5468\u80fd\u5426\u653b\u51fb3000\u70b9 \u76d8\u5f80\u9ad8\u5904\u8d70\u94b1\u5411\u4f4e\u5904\u6d41 ==\u4e0b\u5468\u5927\u76d8\u8fd0\u884c\u8def\u7ebf\u56feIPO\u7b2c\u4e00\u5355\u6b63\u5f0f\u53d1\u5e03\uff0c\u5927\u76d8\u4eca\u5929\u7684\u5f00\u76d8\u4e5f\u5f02\u5e38\u5e73\u9759\u3002\u4eca\u5929\u5e02\u573a\u7684\u6838\u5fc3\u52a8\u529b\u4f9d\u7136\u662f\u84dd\u7b79\uff0c\u91d1\u878d\u677f\u5757\u4e0a\u5348\u8d70\u52bf\u5f3a\u52b2\uff0c\u5982\u4e2d\u884c\u3001\u5efa\u884c\u7b49\u51e0\u5927\u94f6\u884c\uff0c\u4ee5\u53ca\u5238\u5546\u80a1\uff0c\u5e26\u9886\u5927\u76d8\u4e0a\u5348\u9707\u8361\u4e0a\u884c\uff1b\u4e0b\u5348\u4e2d\u77f3\u6cb9\u3001\u4e09\u5927\u94f6\u884c\u5219\u7ee7\u7eed\u53d1\u529b\uff0c\u5927\u76d8\u518d\u6b21\u521b\u51fa\u65b0\u9ad8\u3002\u84dd\u7b79\u677f\u5757\u7684\u5f3a\u52bf\u4e5f\u4f7f\u5f975\u6708\u4efd\u4e4b\u524d\u66fe\u5927\u5e45\u7092\u4f5c\u7684\u9898\u6750\u677f\u5757\u518d\u6b21\u9006\u52bf\u4e0b\u8dcc\uff0c\u5176\u4e2d\u56de\u843d\u6700\u660e\u663e\u7684\u5c31\u662f\u65b0\u80fd\u6e90\u6982\u5ff5\u3002\u8b6c\u5982\u91d1\u98ce\u79d1\u6280\u3001\u98ce\u5e06\u80a1\u4efd\u7b49\u524d\u671f\u7684\u65b0\u80fd\u6e90\u9f99\u5934\u4e2a\u80a1\u4eca\u5929\u9006\u52bf\u4e0b\u8dcc\uff0c\u5176\u5b83\u8fd8\u6709\u7279\u53d8\u7535\u5de5\uff0c\u4e5f\u540c\u6837\u8d70\u51fa\u7834\u4f4d\u4e0b\u8dcc\u7684\u8d70\u52bf\u3002\u6628\u5929\u6709\u4fe1\u606f\u663e\u793a\uff0c\u7ba1\u7406\u5c42\u53ef\u80fd\u5c06\u65b0\u80fd\u6e90\u7684\u632f\u5174\u89c4\u5212\u89c4\u6a21\u6269\u5927\u4e00\u500d\uff0c\u6628\u65e5\u65b0\u80fd\u6e90\u677f\u5757\u4e00\u5ea6\u5f3a\u52bf\u53cd\u5f39\u3002\u4e0d\u8fc7\u6211\u5efa\u8bae\uff0c\u7ecf\u6d4e\u590d\u82cf\u3001\u84dd\u7b79\u4e3a\u738b\u7684\u80cc\u666f\u4e0b\uff0c
\u65b0\u80fd\u6e90\u9898\u6750\u7684\u7092\u4f5c\u5f88\u96be\u6301\u7eed\uff0c\u5e94\u5f53\u501f\u52a9\u5229\u597d\u7684\u53d1\u5e03\u53cd\u5f39\u51cf\u8f7b\u4ed3\u4f4d\u3002\u540e\u5e02\u8fd9\u4e00\u7b56\u7565\u4f9d\u7136\u4e0d\u53d8\uff0c\u9ad8\u629b\u9898\u6750(\u524d\u51e0\u4e2a\u6708\u5927\u5e45\u7092\u4f5c\u7684)\uff0c\u4f4e\u4e70\u84dd\u7b79\u3002\u6628\u65e5\u5927\u76d8\u521b\u65b0\u9ad8\u800c\u51fa\u73b0\u8865\u6da8\u7684\u7164\u70ad\u3001\u6709\u8272\u91d1\u5c5e\u677f\u5757\u4eca\u5929\u6574\u4f53\u5c0f\u5e45\u56de\u843d\uff0c\u4e3b\u8981\u539f\u56e0\u5728\u4e8e\u5168\u7403\u539f\u6cb9\u3001\u6709\u8272\u91d1\u5c5e\u671f\u8d27\u4ef7\u683c\u8d70\u52bf\u5e73\u6de1\uff0c\u8fd9\u4e24\u4e2a\u84dd\u7b79\u677f\u5757\u4f9d\u7136\u53ef\u4ee5\u7ee7\u7eed\u6301\u6709\uff0c\u5176\u8d70\u52bf\u5c06\u5728\u5f88\u5927\u7a0b\u5ea6\u4e0a\u53d7\u5230\u5546\u54c1\u671f\u8d27\u4ef7\u683c\u7684\u5f71\u54cd\u3002\u5bf9\u4e8e\u7f8e\u56fd\u7684\u7ecf\u6d4e\u6570\u636e\uff0c\u6211\u8ba4\u4e3a\u867d\u7136\u90e8\u5206\u7ecf\u6d4e\u6570\u636e\u4e0d\u4e50\u89c2\uff0c\u4f46\u662f\u81ea4\u6708\u4efd\u4ee5\u6765\uff0c\u6574\u4f53\u7684\u7ecf\u6d4e\u6570\u636e\u503e\u5411\u4e8e\u8fdb\u4e00\u6b65\u597d\u8f6c\u3002\u56e0\u6b64\uff0c\u7f8e\u56fd\u80a1\u5e02\u7684\u9707\u8361\u4e0a\u884c\u7684\u901a\u9053\u4f9d\u7136\u4f1a\u7ef4\u6301\u4e0b\u53bb\u3002\u867d\u7136\u7f8e\u56fd\u80a1\u5e02\u7684\u8d70\u52bf\u5bf9\u4e8eA\u80a1\u6ca1\u6709\u51b3\u5b9a\u6027\u5f71\u54cd\uff0c\u4f46\u662f\u7f8e\u80a1\u8d70\u52bf\u4f1a\u5f71\u54cd\u5168\u7403\uff0c\u8fdb\u800c\u5728\u4e00\u5b9a\u9636\u6bb5\u5185\u5bf9A\u80a1\u5927\u76d8\u4ea7\u751f\u4f5c\u7528\u3002\u56e0\u6b64\u6211\u4eec\u5728\u5206\u6790\u65f6\uff0c\u5e94\u8be5\u5c06\u7f8e\u56fd\u7ecf\u6d4e\u7684\u8d70\u52bf\u548c\u80a1\u5e02\u8d70\u52bf\u4f5c\u4e3a\u4e00\u4e2a\u53c2\u8003\u4f9d\u636e\u3002\u9700\u8981\u8865\u5145\u7684\u4e00\u70b9\u662f\uff0c\u76ee\u524d\u4e2d\u56fd\u7ecf\u6d4e\u4e3b\u8981\u4f9d\u9760\u5927\u529b\u6269\u5927\u6295\u8d44(\u56fa\u
5b9a\u8d44\u4ea7\u6295\u8d44)\u548c\u7a33\u5065\u7684\u6d88\u8d39\u523a\u6fc0\uff0c\u800c\u6295\u8d44\u7684\u589e\u957f\u662f\u6709\u9650\u5ea6\u7684\uff0c\u8fc7\u5206\u7684\u589e\u957f\u5c06\u5e26\u6765\u7ecf\u6d4e\u8fc7\u70ed\u901a\u80c0\u7b49\u95ee\u9898\uff0c\u672a\u6765\u8981\u8fdb\u4e00\u6b65\u589e\u957f\uff0c\u8fd8\u9700\u8981\u501f\u52a9\u51fa\u53e3\u3002\u800c\u51fa\u53e3\u7684\u589e\u957f\u5728\u672c\u8d28\u4e0a\u53d6\u51b3\u4e8e\u7f8e\u56fd\u7ecf\u6d4e\u7684\u590d\u82cf\u8fdb\u7a0b\u3002\u4e0b\u5468\u9884\u8ba1\u5927\u76d8\u57282800-2900\u533a\u95f4\u9ad8\u4f4d\u5f3a\u52bf\u9707\u8361\uff0c\u4e50\u89c2\u7684\u8bdd\u57282800-2950\u4e00\u5e26\u9707\u8361\u3002\u4f55\u65f6\u653b\u78342900-3000\u533a\u95f4\u7684\u8d85\u5f3a\u963b\u529b\uff0c\u6211\u4e2a\u4eba\u8ba4\u4e3a\u9700\u8981\u4f9d\u8d56\u4e8e7\u6708\u4e0a\u4e2d\u65ec\u7684\u7ecf\u6d4e\u6570\u636e\u3002 (\u4f55\u7fa4\u8363)\u4e0b\u5468\u76d8\u5f80\u9ad8\u5904\u8d70 \u94b1\u5411\u4f4e\u5904\u6d41\u5927\u76d8\u4e0a\u6da8\u4e00\u5468\u4e4b\u540e\uff0c\u6295\u8d44\u8005\u5173\u5fc3\u7684\u662f\uff0c\u4e0b\u5468\u884c\u60c5\u53c8\u4f1a\u5982\u4f55\uff1f\u6211\u7684\u89c2\u70b9\u662f\uff1a\u5927\u76d8\u5f80\u9ad8\u5904\u8d70\uff0c\u8d44\u91d1\u5411\u4f4e\u5904\u6d41\u3002\u8fd1\u671f\u5efa\u8bbe\u94f6\u884c\uff0c\u5de5\u5546\u94f6\u884c\uff0c\u4e2d\u56fd\u94f6\u884c\u4e3a\u4ee3\u8868\u7684\u4f4e\u4ef7\u56fd\u6709\u94f6\u884c\u51fa\u73b0\u5168\u9762\u8865\u6da8\uff0c\u539f\u56e0\u662f\u65b0\u589e\u8d44\u91d1\u5927\u4e3e\u6d41\u5165\uff0c\u6210\u4e3a\u63a8\u52a8\u5927\u76d8\u5411\u4e0a\u7684\u4e3b\u8981\u529b\u91cf\u3002\u8fd9\u4e9b\u94f6\u884c\u80a1\u7968\uff0c\u4e5f\u662f\u4f4e\u4ef7\u80a1\uff0c\u4e5f\u662f\u4f4e\u5e02\u76c8\u7387\u80a1\uff0c\u6d41\u5165\u8fd9\u4e9b\u80a1\u7968\u4e2d\u7684\u8d44\u91d1\uff0c\u5c31\u5176\u9009\u80a1\u98ce\u683c\u4e0e\u8d44\u91d1\u5b9e\u529b\u6765\u770b\uff0c\u5e94\u662f\u4ee5\u673a\u6784\u8d44\u91d1\u4e3a\u4e3b\uff0c\u5f15\u53d1\u6e38\u8d44\u8ddf\u98ce\u6548\u5e94\u3002\u4e2d\
u56fd\u5357\u8f66(601766)\uff0c\u4eca\u5929\u7a81\u7136\u5de8\u91cf\u8d44\u91d1\u4ecb\u5165\u5c01\u6b7b\u6da8\u505c\u3002\u56db\u5143\u591a\u7684\u4e2d\u56fd\u5357\u8f66\uff0c\u4e00\u76f4\u5f88\u5c11\u8fdb\u5165\u6295\u8d44\u8005\u7684\u89c6\u91ce\uff0c\u73b0\u5728\u8865\u6da8\uff0c\u5f3a\u529b\u8d44\u91d1\u4ecb\u5165\uff0c\u5e26\u52a8\u94c1\u8def\uff0c\u9ad8\u901f\u516c\u8def\uff0c\u673a\u573a\u7b49\u76f8\u5173\u80a1\u7968\u8054\u52a8\u4e0a\u6da8\uff0c\u8054\u52a8\u8865\u6da8\u3002\u8fd9\u4e9b\u80a1\u7968\uff0c\u5927\u591a\u662f\u6b64\u524d\u6da8\u5e45\u6ede\u540e\u7684\u4e8c\u7ebf\u4f4e\u4ef7\u84dd\u7b79\u54c1\u79cd\uff0c\u8fd9\u4e9b\u80a1\u7968\u8865\u6da8\uff0c\u4e0e\u94f6\u884c\u677f\u5757\u7684\u8865\u6da8\u4e00\u6837\uff0c\u663e\u793a\u65b0\u589e\u201c\u8d44\u91d1\u5411\u4f4e\u5904\u6d41\u201d\u7684\u8d8b\u5411\u4ecd\u5728\u7ee7\u7eed\uff0c\u5728\u6269\u6563\u3002\u518d\u770b\u770b\u76d8\u9762\uff1a\u8fde\u7eed\u6da8\u505c\u7684\u80a1\u7968\uff0c\u4e3b\u8981\u662f\u4f4e\u4ef7\u80a1\uff1b\u91cd\u5927\u8d44\u4ea7\u91cd\u7ec4\u7684\u80a1\u7968\uff0c\u5927\u591a\u662f\u4f4e\u4ef7\u80a1\uff1b\u6bcf\u5929\u76d8\u9762\u6da8\u5e45\u9760\u524d\u7684\u80a1\u7968\uff0c\u5927\u591a\u662f\u4f4e\u4ef7\u80a1\uff1b\u6bcf\u5929\u6210\u4ea4\u91cf\u6392\u540d\u524d\u51e0\u4f4d\uff0c\u6e05\u4e00\u8272\u7684\u4f4e\u4ef7\u80a1\u3002\u4f4e\u4ef7\u8865\u6da8\uff0c\u6210\u4e3a\u8fd1\u671f\u8d44\u91d1\u7684\u4e3b\u8981\u6d41\u5411\u3002\u6211\u8ba4\u4e3a\uff1a\u4e0b\u5468\u5927\u76d8\uff0c\u4ecd\u7136\u4f1a\u4ee5\u5411\u4e0a\u9707\u8361\u4e3a\u4e3b\u3002\u9009\u80a1\u65b9\u9762\uff0c\u4e0d\u59a8\u91cd\u70b9\u8003\u8651\u4f4e\u4ef7\uff0c\u8865\u6da8\u3002\u4e00\u76f4\u6bd4\u8f83\u770b\u597d\u6caa\u6df1\u4e24\u5e02\u672c\u5730\u4f4e\u4ef7\u80a1\uff0c\u56e0\u4e3a\u5b83\u4eec\u80a1\u6027\u6d3b\uff0c\u9898\u6750\u4e30\u5bcc\uff0c\u5efa\u8bae\u5728\u8be6\u7ec6\u7814\u7a76\u516c\u53f8\u76f8\u5173\u8d44\u6599\u7684\u57fa\u7840\u4e0a\uff0c\u9002\u5f53\u8ddf\u8e2a\u3001\u5173\u6ce8\u3002(\u53f6
\u5f18)"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 151575 |
| valid | 37894 |
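A hedged sketch of how the two fields above might be paired for summarization fine-tuning. Only the `text`/`target` field names come from this card; the prompt template itself is an assumption.

```python
# Turn a (text, target) record into a prompt/completion pair for
# summarization fine-tuning. Only the field names come from the card;
# the template wording is illustrative.
def to_prompt_pair(example):
    prompt = f"Summarize the following article:\n{example['text']}\nSummary:"
    return {"prompt": prompt, "completion": " " + example["target"]}

sample = {"text": "An example news article body.", "target": "Example headline."}
pair = to_prompt_pair(sample)
print(pair["prompt"].splitlines()[0])  # → Summarize the following article:
```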
|
vsarathy/nl-robotics-semantic-parsing-info_structure-2k-context-TEST | 2023-10-07T12:32:38.000Z | [
"region:us"
] | vsarathy | null | null | null | 0 | 3 | Entry not found |
Falah/cyberpunk_photo_prompts2 | 2023-10-07T14:32:19.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 213569
num_examples: 1000
download_size: 24078
dataset_size: 213569
---
# Dataset Card for "cyberpunk_photo_prompts2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/massive_5_lang_DA4_tokenized | 2023-10-07T16:07:09.000Z | [
"region:us"
] | carnival13 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 519317955
num_examples: 705250
download_size: 162988938
dataset_size: 519317955
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_5_lang_DA4_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
towhid/aesir-test69 | 2023-10-07T18:20:02.000Z | [
"region:us"
] | towhid | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 22114
num_examples: 10
download_size: 28277
dataset_size: 22114
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "aesir-test69"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jung1230/patient_info_and_summary | 2023-10-07T19:34:21.000Z | [
"region:us"
] | jung1230 | null | null | null | 0 | 3 | Entry not found |
PocketDoc/Choose-Your-Story-Long-Text-Adventures | 2023-10-07T23:31:56.000Z | [
"task_categories:conversational",
"language:en",
"not-for-all-audiences",
"region:us"
] | PocketDoc | null | null | null | 1 | 3 | ---
tags:
- not-for-all-audiences
task_categories:
- conversational
language:
- en
pretty_name: Choose Your Story Novel Format Text Adventures
---
This is the 'CYS' (Choose Your Story) text adventure dataset converted to a chat format with system messages. The system messages were randomly constructed from a table of phrases and templates. The original data can be found in the .7z archive.
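A minimal sketch of how such system messages could be assembled from a phrase table and templates. The templates and phrases below are invented stand-ins, not the ones actually used to build the dataset.

```python
import random

# Build a system message by filling a random template with random phrases,
# mimicking the construction described above. All strings here are
# illustrative stand-ins.
TEMPLATES = [
    "You are the narrator of a {genre} interactive story. {style}",
    "Continue this {genre} text adventure for the player. {style}",
]
PHRASES = {
    "genre": ["fantasy", "mystery", "sci-fi"],
    "style": ["Write in second person.", "Keep a novel-like prose style."],
}

def make_system_message(rng):
    template = rng.choice(TEMPLATES)
    return template.format(genre=rng.choice(PHRASES["genre"]),
                           style=rng.choice(PHRASES["style"]))

print(make_system_message(random.Random(0)))
```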
**Credits:**
Thank you to VE Forbryderne from KoboldAI for scraping the dataset. |
H4438/thang-edu-date | 2023-10-08T18:13:07.000Z | [
"region:us"
] | H4438 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: title
dtype: string
- name: url
dtype: string
- name: dates
sequence: string
- name: body
dtype: string
- name: est_date
dtype: string
- name: ext_dates
sequence: string
- name: flt_dates
sequence: string
splits:
- name: train
num_bytes: 551928907
num_examples: 126409
download_size: 190841081
dataset_size: 551928907
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "thang-edu-date"
Rows remaining: 47461 (≈ 38 % of the original 126409)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SuodhanJ6/elliptic_txs_edgelist | 2023-10-08T06:18:13.000Z | [
"region:us"
] | SuodhanJ6 | null | null | null | 0 | 3 | |
flytech/llama-python-codes-30k | 2023-10-08T17:34:44.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:llama2",
"code",
"python",
"instruct",
"llama",
"flytech",
"region:us"
] | flytech | null | null | null | 1 | 3 | ---
author: FlyTech
license: llama2
task_categories:
- question-answering
- text-generation
- text2text-generation
language:
- en
tags:
- code
- python
- instruct
- llama
- flytech
pretty_name: Llama1/2 Python Codes 30k Tokenized
size_categories:
- 10M<n<100M
---
# Llama1/2 Python Codes 30k Tokenized Dataset



## Author
**FlyTech**
## Overview
This dataset serves as a rich resource for various Natural Language Processing tasks such as:
- Question Answering
- Text Generation
- Text-to-Text Generation
It primarily focuses on instructional tasks in Python, tokenized specifically for the Llama architecture.
The dataset is a blend of GPT-4-generated content, custom code, and tasks extending beyond Python.
## Dataset Metrics
**Token Count (via LlamaTokenizer)**
- **Maximum**: 508
- **Average**: 158.06
- **Total**: 13,993,984
**Word Count**: 1,890,810
**Number of Examples**: 27,331
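The metrics above can be reproduced with a small helper like the sketch below. A whitespace split stands in for the tokenizer so the example is self-contained; the card's actual figures come from `LlamaTokenizer`.

```python
# Compute max / average / total token counts over a corpus, given any
# tokenize callable. str.split is only a stand-in here; the card's
# figures were produced with LlamaTokenizer.
def token_stats(texts, tokenize):
    counts = [len(tokenize(t)) for t in texts]
    return {"max": max(counts),
            "avg": sum(counts) / len(counts),
            "total": sum(counts)}

texts = ["def add(a, b): return a + b", "print('hello world')"]
print(token_stats(texts, str.split)["total"])  # → 9
```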
## License
This dataset is under the `llama2` license.
## Tags
- `code`
---
For more details, issues, or contributions, please refer to the [contribution guidelines](CONTRIBUTING.md). |
nguyenthanhdo/patent-vi | 2023-10-08T17:41:46.000Z | [
"region:us"
] | nguyenthanhdo | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: source
dtype: string
- name: output_len
dtype: int64
splits:
- name: train
num_bytes: 209264829
num_examples: 75000
download_size: 100050152
dataset_size: 209264829
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "patent-vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
unaidedelf87777/super-instruct | 2023-10-10T19:15:35.000Z | [
"region:us"
] | unaidedelf87777 | null | null | null | 0 | 3 | Entry not found |
lofcz/cs_autotherapy_chat_ml | 2023-10-09T02:49:58.000Z | [
"license:mit",
"region:us"
] | lofcz | null | null | null | 0 | 3 | ---
license: mit
---
|
minh21/COVID-QA-Chunk-64-question-answering-biencoder-data-65_25_10-v2 | 2023-10-09T03:48:25.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 50185273
num_examples: 1176
- name: validation
num_bytes: 4744842
num_examples: 134
download_size: 13948442
dataset_size: 54930115
---
# Dataset Card for "COVID-QA-Chunk-64-question-answering-biencoder-data-65_25_10-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minyiche/llm4mol | 2023-10-09T18:01:54.000Z | [
"arxiv:2307.07443",
"region:us"
] | minyiche | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: question
dtype: string
- name: index
dtype: string
- name: answer
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 2584423
num_examples: 2015
download_size: 750078
dataset_size: 2584423
---
# Dataset Card for Dataset Name
## Dataset Description
- **Paper:** [Can Large Language Models Empower Molecular Property Prediction?](https://arxiv.org/abs/2307.07443)
### Dataset Summary
Topic annotation in LLM4Mol is an in-context molecular classification task in which text explanations serve as molecular representations.
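One plausible way to assemble such an in-context prompt from the `question`/`answer` fields is sketched below. The paper's exact format may differ, and the demonstration records are invented.

```python
# Build a few-shot prompt from demonstration records plus a new query.
# Field names follow the card's schema; the Q/A layout is an assumption.
def build_prompt(demos, query):
    shots = "\n\n".join(f"Q: {d['question']}\nA: {d['answer']}" for d in demos)
    return f"{shots}\n\nQ: {query}\nA:"

demos = [{"question": "Is molecule X toxic?", "answer": "Yes"}]
print(build_prompt(demos, "Is molecule Y toxic?").endswith("A:"))  # → True
```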
### Data Fields
|
Rahi11Anurag/d | 2023-10-09T05:22:00.000Z | [
"region:us"
] | Rahi11Anurag | null | null | null | 0 | 3 | Entry not found |
kelzla/ds_test2 | 2023-10-09T07:14:01.000Z | [
"region:us"
] | kelzla | null | null | null | 0 | 3 | Entry not found |
truebrown22x/try | 2023-10-09T09:33:50.000Z | [
"region:us"
] | truebrown22x | null | null | null | 0 | 3 | Entry not found |
nandyc/ASL_Isolated_Swin_dataset | 2023-10-09T10:30:57.000Z | [
"region:us"
] | nandyc | null | null | null | 1 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
'4': E
'5': F
'6': G
'7': H
'8': I
'9': J
'10': K
'11': L
'12': M
'13': N
'14': O
'15': P
'16': Q
'17': R
'18': S
'19': T
'20': U
'21': V
'22': W
'23': X
'24': Y
'25': Z
splits:
- name: train
num_bytes: 19265862.93533333
num_examples: 1468
- name: test
num_bytes: 3392183.4166666665
num_examples: 260
download_size: 22665194
dataset_size: 22658046.351999998
---
# Dataset Card for "ASL_Isolated_Swin_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CWKSC/common_voice_13_0-ja-whisper-base | 2023-10-09T10:44:20.000Z | [
"region:us"
] | CWKSC | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 11557295928
num_examples: 12032
- name: test
num_bytes: 4765120552
num_examples: 4961
download_size: 2827086166
dataset_size: 16322416480
---
# Dataset Card for "common_voice_13_0-ja-whisper-base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Manglik-R/PDF-ChatBot-BCS | 2023-10-09T11:03:52.000Z | [
"license:mit",
"region:us"
] | Manglik-R | null | null | null | 0 | 3 | ---
license: mit
---
|
boundless-asura/summary | 2023-10-09T12:05:18.000Z | [
"region:us"
] | boundless-asura | null | null | null | 0 | 3 | Entry not found |
dmrau/cqadupstack-webmasters-qrels | 2023-10-09T12:41:04.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 35771
num_examples: 1395
download_size: 0
dataset_size: 35771
---
# Dataset Card for "cqadupstack-webmasters-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmrau/cqadupstack-unix-qrels | 2023-10-09T12:42:01.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 44636
num_examples: 1693
download_size: 23577
dataset_size: 44636
---
# Dataset Card for "cqadupstack-unix-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dmrau/cqadupstack-wordpress-qrels | 2023-10-09T12:42:11.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 3 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 19885
num_examples: 744
download_size: 11490
dataset_size: 19885
---
# Dataset Card for "cqadupstack-wordpress-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mychen76/openwebtext-100k | 2023-10-09T13:37:50.000Z | [
"region:us"
] | mychen76 | null | null | null | 0 | 3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 497257202
num_examples: 100000
download_size: 302557845
dataset_size: 497257202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "openwebtext-100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |