ibranze/araproje_hellaswag_tr_conf_halfscore
--- dataset_info: features: - name: ind dtype: int32 - name: activity_label dtype: string - name: ctx_a dtype: string - name: ctx_b dtype: string - name: ctx dtype: string - name: endings sequence: string - name: source_id dtype: string - name: split dtype: string - name: split_type dtype: string - name: label dtype: string splits: - name: validation num_bytes: 162703.0 num_examples: 250 download_size: 87138 dataset_size: 162703.0 configs: - config_name: default data_files: - split: validation path: data/validation-* --- # Dataset Card for "araproje_hellaswag_tr_conf_halfscore" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pathikg/DogLLAMA-small
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 367171 num_examples: 2864 - name: test num_bytes: 41198 num_examples: 319 download_size: 135884 dataset_size: 408369 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* task_categories: - question-answering size_categories: - 1K<n<10K --- # DogLLaMA-small The dataset was created using the GPT-3.5 Turbo model by OpenAI. The process involved providing a system prompt instructing the model to respond as if it were a dog, delivering its responses in a playful and enthusiastic manner. The dataset includes a variety of prompts covering different topics to ensure diversity in the training data. The generation code is available here: https://github.com/pathikg/DogLLaMA
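The generation setup described above can be sketched as follows. This is an illustrative reconstruction only: the exact system prompt and the helper name are assumptions, not taken from the linked repository.

```python
# Hypothetical sketch of the chat-message layout used to generate the data.
# The system prompt wording here is an assumption, not the one actually used.
def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-completion message list with a dog-persona system prompt."""
    system_prompt = (
        "You are a friendly dog. Answer every question playfully and "
        "enthusiastically, as a dog would."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("What is your favourite toy?")
```

Each such message list would then be sent to the chat-completion endpoint, and the reply stored as one `text` example.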
BrunoHays/UBS
--- language: - fr pretty_name: Accueil UBS size_categories: - n<1K license: cc-by-sa-4.0 --- # Introduction This dataset gathers 339 telephone-conversation excerpts extracted from the [Accueil_UBS](https://www.ortolang.fr/market/corpora/sldr000890/v1) dataset. Its goal is to ease the evaluation of automatic speech recognition systems in real-world conditions, specifically in call centres and in French. # Accueil UBS The Accueil_UBS corpus is a pilot corpus of task-oriented human-human spoken dialogue, corresponding to a telephone-reception task handled by a university switchboard. It was recorded in real conditions at the Université de Bretagne Sud and gathers a set of dialogues between callers and the switchboard reception staff. The distributed corpus includes the recorded audio files as well as an orthographic transcription of the collected dialogues. All dialogues are in French. It is distributed under the CC BY-SA licence. # Modifications #### 1. Filtering Samples matching the following criteria were removed: - overlapping speech - fewer than 3 words - containing a spelled-out word (mainly "UBS") - anonymised samples (first and last names replaced by "Nom" and "Prénom") #### 2. Text standardisation The raw text remains available under the "raw_sentence" key. The following transformations were applied to produce the "sentence" key: - removal of characters that do not correspond to spoken text ("e", "#", "[]", "()") - numbers are written as digits (dix-sept → 17) using the [Text2Num](https://github.com/allo-media/text2num) package # Citation Jean-Yves Antoine (2016). Accueil_UBS [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/sldr000890/v1.
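The text standardisation step can be sketched roughly as below. The exact patterns live in the dataset's own pipeline and are not published here, so the regexes are assumptions; number conversion (dix-sept → 17) is done in the real pipeline with the text2num package and is not reproduced.

```python
import re

# Illustrative sketch only: the exact cleaning rules are assumptions based on
# the list above, and number-to-digit conversion (handled by text2num in the
# real pipeline) is deliberately omitted.
def standardize(raw_sentence: str) -> str:
    # drop non-spoken markers such as "#", "[]" and "()"
    cleaned = re.sub(r"#|\[\]|\(\)", "", raw_sentence)
    # collapse the whitespace left behind
    return re.sub(r"\s+", " ", cleaned).strip()

print(standardize("hello # there []"))  # -> "hello there"
```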
allganize/allganize-instruct-en-generated
--- dataset_info: - config_name: filtered features: - name: conversation_id dtype: string - name: order dtype: int64 - name: role dtype: string - name: content dtype: string - name: __index_level_0__ dtype: int64 - name: generated dtype: string splits: - name: train num_bytes: 1252122404 num_examples: 1116976 download_size: 594882845 dataset_size: 1252122404 - config_name: filtered_v2 features: - name: conversation_id dtype: string - name: order dtype: int64 - name: role dtype: string - name: content dtype: string - name: generated dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 1158908474 num_examples: 1078141 download_size: 557172760 dataset_size: 1158908474 - config_name: filtered_v2_dedup features: - name: conversation_id dtype: string - name: order dtype: int64 - name: role dtype: string - name: content dtype: string - name: generated dtype: string - name: mask dtype: bool - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 964925054 num_examples: 861699 download_size: 480480687 dataset_size: 964925054 - config_name: filtered_v3_dedup features: - name: conversation_id dtype: string - name: order dtype: int64 - name: role dtype: string - name: content dtype: string - name: generated dtype: string - name: mask dtype: bool - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 912284828 num_examples: 805714 download_size: 455946705 dataset_size: 912284828 - config_name: filtered_v4_dedup features: - name: conversation_id dtype: string - name: order dtype: int64 - name: role dtype: string - name: content dtype: string - name: generated dtype: string - name: mask dtype: bool - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 902576686 num_examples: 795303 download_size: 450894433 dataset_size: 902576686 - config_name: filtered_v5_dedup features: - name: conversation_id dtype: string - name: order dtype: int64 - name: role dtype: string - name: content dtype: string - 
name: generated dtype: string - name: mask dtype: bool - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 901861640 num_examples: 795073 download_size: 450409427 dataset_size: 901861640 configs: - config_name: filtered data_files: - split: train path: filtered/train-* - config_name: filtered_v2 data_files: - split: train path: filtered_v2/train-* - config_name: filtered_v2_dedup data_files: - split: train path: filtered_v2_dedup/train-* - config_name: filtered_v3_dedup data_files: - split: train path: filtered_v3_dedup/train-* - config_name: filtered_v4_dedup data_files: - split: train path: filtered_v4_dedup/train-* - config_name: filtered_v5_dedup data_files: - split: train path: filtered_v5_dedup/train-* ---
Axel578/mydt
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - cc-by-nc-nd-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: [] paperswithcode_id: samsum-corpus pretty_name: SAMSum Corpus tags: - conversations-summarization dataset_info: features: - name: id dtype: string - name: dialogue dtype: string - name: summary dtype: string config_name: samsum splits: - name: train num_bytes: 9479141 num_examples: 14732 - name: test num_bytes: 534492 num_examples: 819 download_size: 2944100 dataset_size: 10530064 train-eval-index: - config: samsum task: summarization task_id: summarization splits: eval_split: test col_mapping: dialogue: text summary: target --- # Dataset Card for SAMSum Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://arxiv.org/abs/1911.12237v2 - **Repository:** [Needs More Information] - **Paper:** 
https://arxiv.org/abs/1911.12237v2 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, and they may contain slang words, emoticons and typos. The conversations were then annotated with summaries, under the assumption that a summary should be a concise brief, in the third person, of what people talked about in the conversation. The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0). ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances The dataset is made up of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people. The first instance in the training set: {'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"} ### Data Fields - dialogue: text of the dialogue. - summary: human-written summary of the dialogue. - id: unique id of an example.
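A dialogue string like the training instance above can be split into (speaker, utterance) pairs on the `\r\n` separators. This is a minimal parsing sketch; the helper name is ours, not part of the dataset.

```python
# Minimal sketch: split a SAMSum-style dialogue string into (speaker, text)
# pairs. Utterances are separated by "\r\n" and speakers by ": ".
def split_utterances(dialogue: str) -> list[tuple[str, str]]:
    pairs = []
    for line in dialogue.split("\r\n"):
        speaker, _, text = line.partition(": ")
        pairs.append((speaker, text))
    return pairs

example = "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!"
print(split_utterances(example))
# -> [('Amanda', 'I baked cookies. Do you want some?'), ('Jerry', 'Sure!')]
```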
### Data Splits - train: 14732 - val: 818 - test: 819 ## Dataset Creation ### Curation Rationale In paper: > In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol. As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app. ### Source Data #### Initial Data Collection and Normalization In paper: > We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora. #### Who are the source language producers? linguists ### Annotations #### Annotation process In paper: > Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary. #### Who are the annotators?
language experts ### Personal and Sensitive Information None, see above: Initial Data Collection and Normalization ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information non-commercial licence: CC BY-NC-ND 4.0 ### Citation Information ``` @inproceedings{gliwa-etal-2019-samsum, title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization", author = "Gliwa, Bogdan and Mochol, Iwona and Biesek, Maciej and Wawer, Aleksander", booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-5409", doi = "10.18653/v1/D19-5409", pages = "70--79" } ``` ### Contributions Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset.
JinglesDados/AlejandraOrozco
--- license: openrail ---
liuyanchen1015/MULTI_VALUE_stsb_corr_conjunction_doubling
--- dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: score dtype: float64 - name: idx dtype: int64 - name: value_score dtype: int64 splits: - name: dev num_bytes: 12473 num_examples: 54 - name: test num_bytes: 9822 num_examples: 49 - name: train num_bytes: 20577 num_examples: 77 download_size: 39306 dataset_size: 42872 --- # Dataset Card for "MULTI_VALUE_stsb_corr_conjunction_doubling" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
davidfant/natural-questions-chunk-30
--- dataset_info: features: - name: id dtype: string - name: document struct: - name: html dtype: string - name: title dtype: string - name: tokens sequence: - name: end_byte dtype: int64 - name: is_html dtype: bool - name: start_byte dtype: int64 - name: token dtype: string - name: url dtype: string - name: question struct: - name: text dtype: string - name: tokens sequence: string - name: long_answer_candidates sequence: - name: end_byte dtype: int64 - name: end_token dtype: int64 - name: start_byte dtype: int64 - name: start_token dtype: int64 - name: top_level dtype: bool - name: annotations sequence: - name: id dtype: string - name: long_answer struct: - name: candidate_index dtype: int64 - name: end_byte dtype: int64 - name: end_token dtype: int64 - name: start_byte dtype: int64 - name: start_token dtype: int64 - name: short_answers sequence: - name: end_byte dtype: int64 - name: end_token dtype: int64 - name: start_byte dtype: int64 - name: start_token dtype: int64 - name: text dtype: string - name: yes_no_answer dtype: class_label: names: '0': 'NO' '1': 'YES' splits: - name: train num_bytes: 3388222961 num_examples: 7373 download_size: 1317590917 dataset_size: 3388222961 --- # Dataset Card for "natural-questions-chunk-30" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
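The `start_byte`/`end_byte` fields in the schema above index into the UTF-8 encoding of the document HTML, so recovering an answer span means slicing bytes, not characters. A hedged sketch, with a helper name of our own choosing:

```python
# Illustrative helper: Natural Questions answer spans are byte offsets into
# the UTF-8 encoded document HTML, so slice bytes before decoding.
def span_text(document_html: str, start_byte: int, end_byte: int) -> str:
    return document_html.encode("utf-8")[start_byte:end_byte].decode("utf-8")

html = "<p>Paris is the capital of France.</p>"
print(span_text(html, 3, 8))  # -> "Paris"
```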
CyberHarem/nemo_anemo_mahoushoujoniakogarete
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of Nemo Anemo/姉母ネモ (Mahou Shoujo ni Akogarete) This is the dataset of Nemo Anemo/姉母ネモ (Mahou Shoujo ni Akogarete), containing 185 images and their tags. The core tags of this character are `green_hair, short_hair, hair_over_one_eye, purple_eyes, ahoge`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 185 | 86.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nemo_anemo_mahoushoujoniakogarete/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 1200 | 185 | 86.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nemo_anemo_mahoushoujoniakogarete/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 344 | 152.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nemo_anemo_mahoushoujoniakogarete/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. 
If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/nemo_anemo_mahoushoujoniakogarete', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bare_shoulders, closed_mouth, hood_up, solo, looking_at_viewer, upper_body, detached_sleeves, star_(symbol), sleeveless | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bare_shoulders, elbow_gloves, hood_up, sharp_teeth, sideless_outfit, solo, detached_sleeves, sideboob, aqua_hair, black_background, grin, multicolored_hair, small_breasts, clenched_teeth, pink_hair, simple_background, 
star_(symbol), upper_body | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, bare_shoulders, elbow_gloves, hood_up, open_mouth, solo, sharp_teeth, upper_body, upper_teeth_only, aqua_gloves, sleeveless, aqua_hair, blurry_background, multicolored_hair | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, hood_up, sharp_teeth, solo, bare_shoulders, clenched_teeth, looking_to_the_side, star_(symbol), black_background, pink_eyes, sweat | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | hood_up, standing, elbow_gloves, solo, thighhighs, 1girl, aqua_hair, bare_shoulders, male_focus, sideless_outfit, thigh_boots, 1boy, green_gloves, arms_behind_back, looking_at_viewer, multicolored_hair, tabard | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, elbow_gloves, solo, thighhighs, dress, green_footwear, thigh_boots, aqua_hair, hood_up, full_body, smile, standing | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, solo, arms_behind_head, closed_mouth, long_sleeves, upper_body, arms_up, track_jacket | | 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, open_clothes, sharp_teeth, solo, white_shirt, collared_shirt, grey_skirt, long_sleeves, looking_at_viewer, on_back, arm_up, 
bags_under_eyes, blue_cardigan, blue_jacket, clenched_teeth, hands_in_pockets, headphones_around_neck | | 8 | 12 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, long_sleeves, solo, upper_body, collared_shirt, looking_at_viewer, blue_shirt, blush, white_shirt, cardigan, closed_mouth, cup, drinking_straw, kindergarten_uniform | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 2girls, blush, multicolored_hair, open_mouth, sharp_teeth, yuri, aqua_hair, collarbone, long_hair, looking_at_another, on_back, upper_teeth_only | | 10 | 8 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | open_clothes, white_shirt, green_jacket, 2girls, brown_skirt, green_shirt, holding_handheld_game_console, nintendo_switch, 1girl, blush, closed_mouth, open_mouth, sitting, smile, brown_shorts, clenched_hand, playing_games, sharp_teeth | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | closed_mouth | hood_up | solo | looking_at_viewer | upper_body | detached_sleeves | star_(symbol) | sleeveless | elbow_gloves | sharp_teeth | sideless_outfit | sideboob | aqua_hair | black_background | grin | multicolored_hair | small_breasts | clenched_teeth | pink_hair | simple_background | open_mouth | upper_teeth_only | aqua_gloves | blurry_background | looking_to_the_side | pink_eyes | sweat | standing | thighhighs | male_focus | thigh_boots | 1boy | green_gloves | arms_behind_back | tabard | dress | green_footwear | full_body | smile | arms_behind_head | long_sleeves | arms_up | track_jacket | open_clothes | white_shirt | collared_shirt | grey_skirt | on_back | arm_up | 
bags_under_eyes | blue_cardigan | blue_jacket | hands_in_pockets | headphones_around_neck | blue_shirt | blush | cardigan | cup | drinking_straw | kindergarten_uniform | 2girls | yuri | collarbone | long_hair | looking_at_another | green_jacket | brown_skirt | green_shirt | holding_handheld_game_console | nintendo_switch | sitting | brown_shorts | clenched_hand | playing_games | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------|:---------------|:----------|:-------|:--------------------|:-------------|:-------------------|:----------------|:-------------|:---------------|:--------------|:------------------|:-----------|:------------|:-------------------|:-------|:--------------------|:----------------|:-----------------|:------------|:--------------------|:-------------|:-------------------|:--------------|:--------------------|:----------------------|:------------|:--------|:-----------|:-------------|:-------------|:--------------|:-------|:---------------|:-------------------|:---------|:--------|:-----------------|:------------|:--------|:-------------------|:---------------|:----------|:---------------|:---------------|:--------------|:-----------------|:-------------|:----------|:---------|:------------------|:----------------|:--------------|:-------------------|:-------------------------|:-------------|:--------|:-----------|:------|:-----------------|:-----------------------|:---------|:-------|:-------------|:------------|:---------------------|:---------------|:--------------|:--------------|:--------------------------------|:------------------|:----------|:---------------|:----------------|:----------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | 
![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | X | X | | X | | | X | X | X | | | X | | | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | X | X | | | | X | | | X | | | | X | | | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | X | X | X | | | | | X | | X | | X | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | X | X | | | | | | X | | | | X | | | | | | | | | | | | | | | X | X | | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 
| | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | | X | X | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 8 | 12 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | | | | | | | | | | | | X | | | X | | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | X | X | X | X | X | | | | | | | | | | | 10 | 8 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | | X | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | | | | | X | X | | | | | | | | | | | X | | | | | X | | | | | X | X | X | X | X | X | X | X | X |
autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167153
--- type: predictions tags: - autotrain - evaluation datasets: - futin/guess eval_info: task: text_zero_shot_classification model: bigscience/bloom-1b1 metrics: [] dataset_name: futin/guess dataset_config: vi dataset_split: test col_mapping: text: text classes: classes target: target --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: bigscience/bloom-1b1 * Dataset: futin/guess * Config: vi * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@futin](https://huggingface.co/futin) for evaluating this model.
finepro111/PRI_FAQ1
--- license: mit ---
jonathan-roberts1/SIRI-WHU
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': agriculture '1': commercial '2': harbor '3': idle_land '4': industrial '5': meadow '6': overpass '7': park '8': pond '9': residential '10': river '11': water splits: - name: train num_bytes: 158215614.4 num_examples: 2400 download_size: 147702566 dataset_size: 158215614.4 license: other task_categories: - image-classification - zero-shot-image-classification --- # Dataset Card for "SIRI-WHU" ## Dataset Description - **Paper** [Dirichlet-derived multiple topic scene classification model for high spatial resolution remote sensing imagery](https://ieeexplore.ieee.org/iel7/36/4358825/07329997.pdf) - **Paper** [The Fisher kernel coding framework for high spatial resolution scene classification](https://www.mdpi.com/2072-4292/8/2/157/pdf) - **Paper** [Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery](https://ieeexplore.ieee.org/iel7/8859/7473942/07466064.pdf) ### Licensing Information CC BY-NC-ND ## Citation Information [Dirichlet-derived multiple topic scene classification model for high spatial resolution remote sensing imagery](https://ieeexplore.ieee.org/iel7/36/4358825/07329997.pdf) [The Fisher kernel coding framework for high spatial resolution scene classification](https://www.mdpi.com/2072-4292/8/2/157/pdf) [Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery](https://ieeexplore.ieee.org/iel7/8859/7473942/07466064.pdf) ``` @article{zhao2015dirichlet, title={Dirichlet-derived multiple topic scene classification model for high spatial resolution remote sensing imagery}, author={Zhao, Bei and Zhong, Yanfei and Xia, Gui-Song and Zhang, Liangpei}, journal={IEEE Transactions on Geoscience and Remote Sensing}, volume={54}, number={4}, pages={2108--2123}, year={2015}, publisher={IEEE} } @article{zhao2016fisher, title={The Fisher kernel 
coding framework for high spatial resolution scene classification}, author={Zhao, Bei and Zhong, Yanfei and Zhang, Liangpei and Huang, Bo}, journal={Remote Sensing}, volume={8}, number={2}, pages={157}, year={2016}, publisher={MDPI} } @article{zhu2016bag, title={Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery}, author={Zhu, Qiqi and Zhong, Yanfei and Zhao, Bei and Xia, Gui-Song and Zhang, Liangpei}, journal={IEEE Geoscience and Remote Sensing Letters}, volume={13}, number={6}, pages={747--751}, year={2016}, publisher={IEEE} } ```
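The 12 class names from the feature schema above, in label order, can be kept as a simple lookup list; the constant name here is ours, not part of the dataset.

```python
# Class-index -> name lookup, taken from the label schema in the card above.
LABELS = [
    "agriculture", "commercial", "harbor", "idle_land", "industrial",
    "meadow", "overpass", "park", "pond", "residential", "river", "water",
]

print(LABELS[9])  # -> "residential"
```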
artemsnegirev/dialogs_from_jokes
--- language: - ru multilinguality: - monolingual pretty_name: Dialogs from Jokes size_categories: - 100K<n<1M task_categories: - conversational task_ids: - dialogue-generation license: cc0-1.0 --- A JSON-converted version of the dataset from [Koziev/NLP_Datasets](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data/extract_dialogues_from_anekdots.tar.xz)
CyberHarem/saga_arknights
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of saga/サガ/嵯峨 (Arknights) This is the dataset of saga/サガ/嵯峨 (Arknights), containing 500 images and their tags. The core tags of this character are `black_hair, dog_ears, animal_ears, long_hair, facial_mark, parted_bangs, very_long_hair, brown_eyes, dog_girl`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 909.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saga_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 1200 | 500 | 738.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saga_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1251 | 1.38 GiB | [Download](https://huggingface.co/datasets/CyberHarem/saga_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. 
If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/saga_arknights', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, fingerless_gloves, forehead_mark, holding_polearm, knee_pads, looking_at_viewer, naginata, purple_pants, solo, white_socks, black_kimono, geta, simple_background, tabi, white_background, full_body, purple_gloves, wide_sleeves, closed_mouth, long_sleeves, beads, hip_vent, infection_monitor_(arknights) | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, fingerless_gloves, knee_pads, maple_leaf, naginata, purple_gloves, purple_pants, solo, tabi, autumn_leaves, full_body, geta, holding_polearm, looking_at_viewer, black_kimono, forehead_mark, hip_vent, white_socks, closed_mouth, long_sleeves, on_one_knee, smile | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_kimono, fingerless_gloves, forehead_mark, looking_at_viewer, simple_background, solo, white_background, purple_pants, open_mouth, prayer_beads, purple_gloves, holding_polearm, long_sleeves, :d, naginata | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_kimono, fingerless_gloves, forehead_mark, maple_leaf, prayer_beads, purple_gloves, solo, autumn_leaves, knee_pads, looking_at_viewer, purple_pants, closed_mouth, holding_polearm, smile | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, autumn_leaves, black_kimono, looking_at_viewer, maple_leaf, purple_gloves, purple_pants, solo, fingerless_gloves, forehead_mark, knee_pads, naginata, :o, hip_vent, holding_polearm, open_mouth, outdoors, tree, yellow_eyes | | 5 | 9 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, forehead_mark, simple_background, solo, upper_body, white_background, blush, cropped_torso, looking_at_viewer, black_kimono, open_mouth, :d, closed_mouth | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | 
![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, black_kimono, fingerless_gloves, looking_at_viewer, simple_background, solo, upper_body, forehead_mark, hand_up, purple_gloves, smile, closed_mouth, long_sleeves, prayer_beads, white_background, wide_sleeves | | 7 | 33 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, forehead_mark, solo, black_kimono, holding_bowl, simple_background, upper_body, white_background, fingerless_gloves, open_mouth, looking_at_viewer, long_sleeves, holding_chopsticks, blush, purple_gloves, :d, rice_bowl, sparkle, wide_sleeves, food_on_face, teeth | | 8 | 12 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, forehead_mark, looking_at_viewer, solo, white_bikini, yellow_jacket, alternate_costume, eyepatch_bikini, off_shoulder, bare_shoulders, eyewear_on_head, medium_breasts, open_jacket, sunglasses, collarbone, long_sleeves, open_mouth, swim_ring, innertube, outdoors, strapless_bikini, :d, blue_sky, braid, day, navel, thigh_strap, holding | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1boy, 1girl, blush, collarbone, forehead_mark, hetero, solo_focus, clenched_teeth, closed_eyes, holding_hands, interlocked_fingers, medium_breasts, mosaic_censoring, navel, nipples, on_back, penis, pov, sex, sweat, vaginal, bed_sheet, braid, completely_nude, large_breasts, looking_at_viewer, missionary, open_mouth, pussy, simple_background, spread_legs, stomach, upper_body | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | fingerless_gloves | forehead_mark | holding_polearm | knee_pads | looking_at_viewer | naginata | 
purple_pants | solo | white_socks | black_kimono | geta | simple_background | tabi | white_background | full_body | purple_gloves | wide_sleeves | closed_mouth | long_sleeves | beads | hip_vent | infection_monitor_(arknights) | maple_leaf | autumn_leaves | on_one_knee | smile | open_mouth | prayer_beads | :d | :o | outdoors | tree | yellow_eyes | upper_body | blush | cropped_torso | hand_up | holding_bowl | holding_chopsticks | rice_bowl | sparkle | food_on_face | teeth | white_bikini | yellow_jacket | alternate_costume | eyepatch_bikini | off_shoulder | bare_shoulders | eyewear_on_head | medium_breasts | open_jacket | sunglasses | collarbone | swim_ring | innertube | strapless_bikini | blue_sky | braid | day | navel | thigh_strap | holding | 1boy | hetero | solo_focus | clenched_teeth | closed_eyes | holding_hands | interlocked_fingers | mosaic_censoring | nipples | on_back | penis | pov | sex | sweat | vaginal | bed_sheet | completely_nude | large_breasts | missionary | pussy | spread_legs | stomach | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:----------------|:------------------|:------------|:--------------------|:-----------|:---------------|:-------|:--------------|:---------------|:-------|:--------------------|:-------|:-------------------|:------------|:----------------|:---------------|:---------------|:---------------|:--------|:-----------|:--------------------------------|:-------------|:----------------|:--------------|:--------|:-------------|:---------------|:-----|:-----|:-----------|:-------|:--------------|:-------------|:--------|:----------------|:----------|:---------------|:---------------------|:------------|:----------|:---------------|:--------|:---------------|:----------------|:--------------------|:------------------|:---------------|:-----------------|:------------------|:-----------------|:--------------|:-------------|:-------------|:------------|:------------|:-------------------|:-----------|:--------|:------|:--------|:--------------|:----------|:-------|:---------|:-------------|:-----------------|:--------------|:----------------|:----------------------|:-------------------|:----------|:----------|:--------|:------|:------|:--------|:----------|:------------|:------------------|:----------------|:-------------|:--------|:--------------|:----------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | X | | X | X | | X | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | | X | X | X | X | | X | | X | | X | | X | | | X | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | X | X | | X | X | | X | | | | | | X | | X | | | | | X | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | X | X | X | X | X | | X | | | | | | X | | | | | X | | X | X | | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 9 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | | | X | | | X | | X | | X | | X | | | | X | | | | | | | | | X | | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X 
| X | X | | | X | | | X | | X | | X | | X | | X | X | X | X | | | | | | | X | | X | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 33 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | X | | | X | | | X | | X | | X | | X | | X | X | | X | | | | | | | | X | | X | | | | | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 12 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | X | | | X | | | X | | | | | | | | | | | X | | | | | | | | X | | X | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | X | | | X | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | X | | | X | | | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
Celiroad/celiroad
--- license: afl-3.0 task_categories: - text-classification language: - en tags: - diego lincoln - celi road - celi size_categories: - n<1K ---
ibranze/araproje_hellaswag_tr_w5
--- dataset_info: features: - name: ind dtype: int32 - name: activity_label dtype: string - name: ctx_a dtype: string - name: ctx_b dtype: string - name: ctx dtype: string - name: endings sequence: string - name: source_id dtype: string - name: split dtype: string - name: split_type dtype: string - name: label dtype: string splits: - name: validation num_bytes: 162830.76923076922 num_examples: 250 download_size: 88723 dataset_size: 162830.76923076922 configs: - config_name: default data_files: - split: validation path: data/validation-* --- # Dataset Card for "araproje_hellaswag_tr_w5" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
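For context, a multiple-choice record in this schema is typically scored by pairing `ctx` with each of the four `endings` and asking a model which continuation is most likely. A minimal sketch, using a hypothetical Turkish record rather than an actual row from the dataset:

```python
# Hypothetical record mirroring the schema above; real rows come from
# the dataset's `validation` split.
example = {
    "ctx": "Adam mutfağa girdi ve",
    "endings": [
        "pencereyi kapattı.",
        "arabayı yıkadı.",
        "kahve makinesini çalıştırdı.",
        "denize atladı.",
    ],
    "label": "2",  # note: `label` is stored as a string index into `endings`
}

# Each candidate is the context followed by one ending; a model scores
# all four candidates and picks the most likely continuation.
candidates = [f'{example["ctx"]} {ending}' for ending in example["endings"]]
gold = candidates[int(example["label"])]
print(gold)
```

Since `label` has dtype `string`, remember to cast it to `int` before indexing, as shown above.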
RUCAIBox/gaokao-bench
--- license: apache-2.0 configs: - config_name: 2010-2013_English_MCQs data_files: - split: test path: "test/2010-2013_English_MCQs.jsonl" - config_name: 2010-2022_Biology_MCQs data_files: - split: test path: "test/2010-2022_Biology_MCQs.jsonl" - config_name: 2010-2022_Chemistry_MCQs data_files: - split: test path: "test/2010-2022_Chemistry_MCQs.jsonl" - config_name: 2010-2022_Chinese_Lang_and_Usage_MCQs data_files: - split: test path: "test/2010-2022_Chinese_Lang_and_Usage_MCQs.jsonl" - config_name: 2010-2022_Chinese_Modern_Lit data_files: - split: test path: "test/2010-2022_Chinese_Modern_Lit.jsonl" - config_name: 2010-2022_English_Fill_in_Blanks data_files: - split: test path: "test/2010-2022_English_Fill_in_Blanks.jsonl" - config_name: 2010-2022_English_Reading_Comp data_files: - split: test path: "test/2010-2022_English_Reading_Comp.jsonl" - config_name: 2010-2022_Geography_MCQs data_files: - split: test path: "test/2010-2022_Geography_MCQs.jsonl" - config_name: 2010-2022_History_MCQs data_files: - split: test path: "test/2010-2022_History_MCQs.jsonl" - config_name: 2010-2022_Math_II_MCQs data_files: - split: test path: "test/2010-2022_Math_II_MCQs.jsonl" - config_name: 2010-2022_Math_I_MCQs data_files: - split: test path: "test/2010-2022_Math_I_MCQs.jsonl" - config_name: 2010-2022_Physics_MCQs data_files: - split: test path: "test/2010-2022_Physics_MCQs.jsonl" - config_name: 2010-2022_Political_Science_MCQs data_files: - split: test path: "test/2010-2022_Political_Science_MCQs.jsonl" - config_name: 2012-2022_English_Cloze_Test data_files: - split: test path: "test/2012-2022_English_Cloze_Test.jsonl" ---
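Each configuration above maps to a single JSONL file under `test/`. A minimal stdlib sketch of reading such a file line by line; the sample records here are hypothetical placeholders, since the real field names are defined by the files themselves:

```python
import io
import json

def read_jsonl(fp):
    """Yield one parsed record per non-empty line of a JSONL stream."""
    for line in fp:
        line = line.strip()
        if line:
            yield json.loads(line)

# Hypothetical two-record stream standing in for a file such as
# test/2010-2013_English_MCQs.jsonl (real files have their own fields).
sample = io.StringIO('{"id": 1}\n{"id": 2}\n')
records = list(read_jsonl(sample))
print(len(records))
```

In practice you would pass an open file handle for the config you need instead of the in-memory stream used here.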
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_6
--- dataset_info: features: - name: logits sequence: float32 - name: mfcc sequence: sequence: float64 splits: - name: train num_bytes: 1134813304.0 num_examples: 222862 download_size: 1153363244 dataset_size: 1134813304.0 --- # Dataset Card for "chunk_6" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
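The repository name and schema suggest these are teacher `logits` paired with `mfcc` features for knowledge distillation. As a minimal, library-free sketch (the record below is hypothetical, not a real row), stored logits are typically softened into target distributions like this:

```python
import math

def softmax(logits, temperature=2.0):
    """Turn raw teacher logits into a soft target distribution, as is
    commonly done when training a student model by distillation."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical record mirroring the schema above (`logits` is a flat
# sequence of float32 values per example).
record = {"logits": [2.0, 0.5, -1.0]}
soft_targets = softmax(record["logits"])
print(soft_targets)
```

The temperature value is an illustrative assumption; the actual distillation setup for this corpus is not documented in the card.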
linhd-postdata/pulpo
--- language: - es - en - fr - it - cs - pt - ar - zh - fi - de - hu - ru tags: - poetry pretty_name: Prolific Unannotated Literary Poetry Corpus size_categories: - 10M<n<100M --- ## PULPO PULPO, the Prolific Unannotated Literary Poetry Corpus, is a set of multilingual corpora of verses and stanzas with over 95M words. See https://arxiv.org/abs/2307.01387. The following corpora have been downloaded using the [Averell](https://github.com/linhd-postdata/averell/) tool, developed by the [POSTDATA](https://postdata.linhd.uned.es/) team: ### Spanish - [Disco v3](https://github.com/pruizf/disco) - [Corpus of Spanish Golden-Age Sonnets](https://github.com/bncolorado/CorpusSonetosSigloDeOro) - [Corpus general de poesía lírica castellana del Siglo de Oro](https://github.com/bncolorado/CorpusGeneralPoesiaLiricaCastellanaDelSigloDeOro) - [Gongocorpus](https://github.com/linhd-postdata/gongocorpus) - [source](http://obvil.sorbonne-universite.site/corpus/gongora/gongora_obra-poetica) ### English - [Eighteenth-Century Poetry Archive (ECPA)](https://github.com/alhuber1502/ECPA) - [For better for verse](https://github.com/waynegraham/for_better_for_verse) ### French - [Métrique en Ligne](https://crisco2.unicaen.fr/verlaine/index.php?navigation=accueil) - [source](https://github.com/linhd-postdata/metrique-en-ligne) ### Italian - [Biblioteca italiana](https://github.com/linhd-postdata/biblioteca_italiana) - [source](http://www.bibliotecaitaliana.it/) ### Czech - [Corpus of Czech Verse](https://github.com/versotym/corpusCzechVerse) ### Portuguese - [Stichotheque](https://gitlab.com/stichotheque/stichotheque-pt) Also, we obtained the following corpora from these sources: ### Spanish - [Poesi.as](https://github.com/linhd-postdata/poesi.as) - [source](http://www.poesi.as/) ### English - [A Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) ### Arabic - [Arabic Poetry dataset](https://www.kaggle.com/ahmedabelal/arabic-poetry) ### Chinese - [THU Chinese
Classical Poetry Corpus](https://github.com/THUNLP-AIPoet/Datasets/tree/master/CCPC) ### Finnish - [SKVR](https://github.com/sks190/SKVR) ### German - [TextGrid Poetry Corpus](https://github.com/linhd-postdata/textgrid-poetry) - [source](https://textgrid.de/en/digitale-bibliothek) - [German Rhyme Corpus](https://github.com/tnhaider/german-rhyme-corpus) ### Hungarian - [verskorpusz](https://github.com/ELTE-DH/verskorpusz) ### Portuguese - [Poems in Portuguese](https://www.kaggle.com/oliveirasp6/poems-in-portuguese) ### Russian - [19 000 Russian poems](https://www.kaggle.com/grafstor/19-000-russian-poems)
cl-nagoya/nu-mnli-rev
--- dataset_info: features: - name: premise_ja dtype: string - name: hypothesis_ja dtype: string - name: label dtype: class_label: names: '0': entailment '1': neutral '2': contradiction - name: premise_en dtype: string - name: hypothesis_en dtype: string - name: genre dtype: string - name: premise_en_rev dtype: string - name: hypothesis_en_rev dtype: string - name: label_rev dtype: class_label: names: '0': entailment '1': neutral '2': contradiction splits: - name: train num_bytes: 242885163 num_examples: 392702 download_size: 159087118 dataset_size: 242885163 configs: - config_name: default data_files: - split: train path: data/train-* ---
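Both `label` and `label_rev` are stored as class indices; per the `ClassLabel` definition in the `dataset_info` block above, they decode as follows (a minimal stand-alone sketch, no extra libraries needed):

```python
# Index-to-name mapping taken from the ClassLabel names listed in the
# dataset_info block; the same mapping applies to `label` and `label_rev`.
NLI_LABELS = ["entailment", "neutral", "contradiction"]

def decode_label(idx: int) -> str:
    """Map a stored class index back to its NLI label name."""
    return NLI_LABELS[idx]

print(decode_label(0), decode_label(2))
```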
phi0108/noun-phrase-de
--- language: - de size_categories: - 100K<n<1M ---
najju/sign-psl
--- license: mit dataset_info: features: - name: Text dtype: string - name: Gloss dtype: string splits: - name: train num_bytes: 98710 num_examples: 1000 download_size: 52082 dataset_size: 98710 configs: - config_name: default data_files: - split: train path: data/train-* ---
aaparajit02/punjabi-asr
--- dataset_info: features: - name: audio dtype: audio - name: transcript dtype: string - name: english dtype: string splits: - name: train num_bytes: 10917088956.322 num_examples: 39238 download_size: 10866820110 dataset_size: 10917088956.322 task_categories: - automatic-speech-recognition language: - pa tags: - punjabi - asr - transcription - translation pretty_name: Punjabi ASR size_categories: - 10K<n<100K --- # Dataset for Punjabi ASR Shrutilipi is a labelled ASR corpus obtained by mining parallel audio and text pairs at the document scale from All India Radio news bulletins for 12 Indian languages: Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Sanskrit, Tamil, Telugu, Urdu. The corpus has over 6400 hours of data across all languages. ``` @misc{https://doi.org/10.48550/arxiv.2208.12666, doi = {10.48550/ARXIV.2208.12666}, url = {https://arxiv.org/abs/2208.12666}, author = {Bhogale, Kaushal Santosh and Raman, Abhigyan and Javed, Tahir and Doddapaneni, Sumanth and Kunchukuttan, Anoop and Kumar, Pratyush and Khapra, Mitesh M.}, title = {Effectiveness of Mining Audio and Text Pairs from Public Data for Improving ASR Systems for Low-Resource Languages}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
shidowake/philschmid_guanaco-sharegpt-style_split_1
--- dataset_info: features: - name: conversations list: - name: from dtype: string - name: value dtype: string splits: - name: train num_bytes: 3494574.0896712057 num_examples: 2258 download_size: 2023545 dataset_size: 3494574.0896712057 configs: - config_name: default data_files: - split: train path: data/train-* ---
ssbuild/alpaca_rlhf
--- license: apache-2.0 ---
antolin/python-150_interduplication
--- dataset_info: features: - name: id_within_dataset dtype: int64 - name: snippet dtype: string - name: tokens sequence: string - name: nl dtype: string - name: split_within_dataset dtype: string - name: is_duplicated dtype: bool splits: - name: train num_bytes: 41621821.20269596 num_examples: 40842 - name: test num_bytes: 13915723.238891663 num_examples: 13655 - name: valid num_bytes: 13864768.55841238 num_examples: 13605 download_size: 30588162 dataset_size: 69402313.0 --- # Dataset Card for "python-150_interduplication" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
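The `is_duplicated` flag presumably marks snippets that also occur elsewhere across the splits; a common precaution is to filter them out before evaluating. A minimal sketch over hypothetical records matching the schema above:

```python
# Hypothetical records mirroring the dataset's schema; real rows also
# carry `snippet`, `tokens`, `nl`, and `split_within_dataset` fields.
records = [
    {"id_within_dataset": 0, "is_duplicated": False},
    {"id_within_dataset": 1, "is_duplicated": True},
    {"id_within_dataset": 2, "is_duplicated": False},
]

# Keep only snippets not flagged as duplicates.
deduplicated = [r for r in records if not r["is_duplicated"]]
print(len(deduplicated))
```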
johannes-garstenauer/embeddings_from_distilbert_class_heaps_and_eval_part1
--- dataset_info: features: - name: struct dtype: string - name: label dtype: int64 - name: pred dtype: int64 - name: cls_layer_6 sequence: float32 - name: cls_layer_5 sequence: float32 - name: cls_layer_4 sequence: float32 splits: - name: train num_bytes: 1281395185 num_examples: 134495 download_size: 1491856532 dataset_size: 1281395185 --- # Dataset Card for "embeddings_from_distilbert_class_heaps_and_eval_part1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
distilled-from-one-sec-cv12/chunk_21
--- dataset_info: features: - name: logits sequence: float32 - name: mfcc sequence: sequence: float64 splits: - name: train num_bytes: 1344481360 num_examples: 261980 download_size: 1374039050 dataset_size: 1344481360 --- # Dataset Card for "chunk_21" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_facebook__galactica-30b
--- pretty_name: Evaluation run of None dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [None](https://huggingface.co/None) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 119 configuration, each one coresponding to one of\ \ the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can\ \ be found as a specific split in each configuration, the split being named using\ \ the timestamp of the run.The \"train\" split is always pointing to the latest\ \ results.\n\nAn additional configuration \"results\" store all the aggregated results\ \ of the run (and is used to compute and display the agregated metrics on the [Open\ \ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_facebook__galactica-30b\"\ ,\n\t\"original_mmlu_world_religions_5\",\n\tsplit=\"train\")\n```\n\n## Latest\ \ results\n\nThese are the [latest results from run 2023-08-28T21:10:05.029353](https://huggingface.co/datasets/open-llm-leaderboard/details_facebook__galactica-30b/blob/main/results_2023-08-28T21%3A10%3A05.029353.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4666487872974609,\n\ \ \"acc_stderr\": 0.036447127375734134\n },\n \"original|mmlu:abstract_algebra|5\"\ : {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446\n },\n\ \ \"original|mmlu:anatomy|5\": {\n \"acc\": 0.5407407407407407,\n \ \ \"acc_stderr\": 0.04304979692464242\n },\n \"original|mmlu:astronomy|5\"\ : {\n \"acc\": 0.506578947368421,\n \"acc_stderr\": 0.040685900502249704\n\ \ },\n \"original|mmlu:business_ethics|5\": {\n \"acc\": 0.46,\n \ \ \"acc_stderr\": 0.05009082659620332\n },\n \"original|mmlu:clinical_knowledge|5\"\ : {\n \"acc\": 0.5471698113207547,\n \"acc_stderr\": 0.030635627957961823\n\ \ },\n \"original|mmlu:college_biology|5\": {\n \"acc\": 0.5694444444444444,\n\ \ \"acc_stderr\": 0.04140685639111502\n },\n \"original|mmlu:college_chemistry|5\"\ : {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504\n },\n\ \ \"original|mmlu:college_computer_science|5\": {\n \"acc\": 0.41,\n \ \ \"acc_stderr\": 0.049431107042371025\n },\n \"original|mmlu:college_mathematics|5\"\ : {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235\n },\n\ \ \"original|mmlu:college_medicine|5\": {\n \"acc\": 0.5028901734104047,\n\ \ \"acc_stderr\": 0.03812400565974834\n },\n \"original|mmlu:college_physics|5\"\ : {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201943\n\ \ },\n \"original|mmlu:computer_security|5\": {\n \"acc\": 0.65,\n\ \ \"acc_stderr\": 0.0479372485441102\n },\n \"original|mmlu:conceptual_physics|5\"\ : {\n \"acc\": 0.4765957446808511,\n \"acc_stderr\": 0.03265019475033581\n\ \ },\n \"original|mmlu:econometrics|5\": {\n \"acc\": 0.3684210526315789,\n\ \ \"acc_stderr\": 0.04537815354939391\n },\n \"original|mmlu:electrical_engineering|5\"\ : {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232\n\ \ },\n \"original|mmlu:elementary_mathematics|5\": {\n \"acc\": 0.31216931216931215,\n\ \ 
\"acc_stderr\": 0.023865206836972585\n },\n \"original|mmlu:formal_logic|5\"\ : {\n \"acc\": 0.2698412698412698,\n \"acc_stderr\": 0.03970158273235172\n\ \ },\n \"original|mmlu:global_facts|5\": {\n \"acc\": 0.31,\n \ \ \"acc_stderr\": 0.04648231987117316\n },\n \"original|mmlu:high_school_biology|5\"\ : {\n \"acc\": 0.5548387096774193,\n \"acc_stderr\": 0.028272410186214906\n\ \ },\n \"original|mmlu:high_school_chemistry|5\": {\n \"acc\": 0.39408866995073893,\n\ \ \"acc_stderr\": 0.034381579670365446\n },\n \"original|mmlu:high_school_computer_science|5\"\ : {\n \"acc\": 0.48,\n \"acc_stderr\": 0.05021167315686781\n },\n\ \ \"original|mmlu:high_school_european_history|5\": {\n \"acc\": 0.5818181818181818,\n\ \ \"acc_stderr\": 0.03851716319398393\n },\n \"original|mmlu:high_school_geography|5\"\ : {\n \"acc\": 0.5353535353535354,\n \"acc_stderr\": 0.03553436368828063\n\ \ },\n \"original|mmlu:high_school_government_and_politics|5\": {\n \ \ \"acc\": 0.5595854922279793,\n \"acc_stderr\": 0.03582724530036093\n \ \ },\n \"original|mmlu:high_school_macroeconomics|5\": {\n \"acc\":\ \ 0.4230769230769231,\n \"acc_stderr\": 0.025049197876042338\n },\n \ \ \"original|mmlu:high_school_mathematics|5\": {\n \"acc\": 0.2518518518518518,\n\ \ \"acc_stderr\": 0.026466117538959905\n },\n \"original|mmlu:high_school_microeconomics|5\"\ : {\n \"acc\": 0.4579831932773109,\n \"acc_stderr\": 0.03236361111951941\n\ \ },\n \"original|mmlu:high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n\ \ \"acc_stderr\": 0.03913453431177258\n },\n \"original|mmlu:high_school_psychology|5\"\ : {\n \"acc\": 0.6293577981651376,\n \"acc_stderr\": 0.02070745816435298\n\ \ },\n \"original|mmlu:high_school_statistics|5\": {\n \"acc\": 0.33796296296296297,\n\ \ \"acc_stderr\": 0.03225941352631295\n },\n \"original|mmlu:high_school_us_history|5\"\ : {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.03484941514429231\n\ \ },\n \"original|mmlu:high_school_world_history|5\": {\n \"acc\":\ \ 
0.6033755274261603,\n \"acc_stderr\": 0.03184399873811225\n },\n \ \ \"original|mmlu:human_aging|5\": {\n \"acc\": 0.5515695067264574,\n \ \ \"acc_stderr\": 0.033378837362550984\n },\n \"original|mmlu:human_sexuality|5\"\ : {\n \"acc\": 0.5801526717557252,\n \"acc_stderr\": 0.043285772152629715\n\ \ },\n \"original|mmlu:international_law|5\": {\n \"acc\": 0.6528925619834711,\n\ \ \"acc_stderr\": 0.04345724570292534\n },\n \"original|mmlu:jurisprudence|5\"\ : {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.04830366024635331\n\ \ },\n \"original|mmlu:logical_fallacies|5\": {\n \"acc\": 0.50920245398773,\n\ \ \"acc_stderr\": 0.03927705600787443\n },\n \"original|mmlu:machine_learning|5\"\ : {\n \"acc\": 0.33035714285714285,\n \"acc_stderr\": 0.04464285714285712\n\ \ },\n \"original|mmlu:management|5\": {\n \"acc\": 0.6019417475728155,\n\ \ \"acc_stderr\": 0.048467482539772386\n },\n \"original|mmlu:marketing|5\"\ : {\n \"acc\": 0.6965811965811965,\n \"acc_stderr\": 0.03011821010694266\n\ \ },\n \"original|mmlu:medical_genetics|5\": {\n \"acc\": 0.58,\n \ \ \"acc_stderr\": 0.049604496374885836\n },\n \"original|mmlu:miscellaneous|5\"\ : {\n \"acc\": 0.4942528735632184,\n \"acc_stderr\": 0.01787878232612923\n\ \ },\n \"original|mmlu:moral_disputes|5\": {\n \"acc\": 0.4479768786127168,\n\ \ \"acc_stderr\": 0.026772990653361826\n },\n \"original|mmlu:moral_scenarios|5\"\ : {\n \"acc\": 0.2435754189944134,\n \"acc_stderr\": 0.014355911964767864\n\ \ },\n \"original|mmlu:nutrition|5\": {\n \"acc\": 0.5196078431372549,\n\ \ \"acc_stderr\": 0.028607893699576073\n },\n \"original|mmlu:philosophy|5\"\ : {\n \"acc\": 0.48231511254019294,\n \"acc_stderr\": 0.02838032284907713\n\ \ },\n \"original|mmlu:prehistory|5\": {\n \"acc\": 0.5401234567901234,\n\ \ \"acc_stderr\": 0.027731022753539277\n },\n \"original|mmlu:professional_accounting|5\"\ : {\n \"acc\": 0.35815602836879434,\n \"acc_stderr\": 0.028602085862759422\n\ \ },\n \"original|mmlu:professional_law|5\": {\n \"acc\": 
0.34028683181225555,\n\ \ \"acc_stderr\": 0.012101217610223794\n },\n \"original|mmlu:professional_medicine|5\"\ : {\n \"acc\": 0.44485294117647056,\n \"acc_stderr\": 0.03018753206032939\n\ \ },\n \"original|mmlu:professional_psychology|5\": {\n \"acc\": 0.5130718954248366,\n\ \ \"acc_stderr\": 0.020220920829626916\n },\n \"original|mmlu:public_relations|5\"\ : {\n \"acc\": 0.5272727272727272,\n \"acc_stderr\": 0.04782001791380061\n\ \ },\n \"original|mmlu:security_studies|5\": {\n \"acc\": 0.4,\n \ \ \"acc_stderr\": 0.03136250240935893\n },\n \"original|mmlu:sociology|5\"\ : {\n \"acc\": 0.5422885572139303,\n \"acc_stderr\": 0.035228658640995975\n\ \ },\n \"original|mmlu:us_foreign_policy|5\": {\n \"acc\": 0.49,\n\ \ \"acc_stderr\": 0.05024183937956912\n },\n \"original|mmlu:virology|5\"\ : {\n \"acc\": 0.4397590361445783,\n \"acc_stderr\": 0.03864139923699121\n\ \ },\n \"original|mmlu:world_religions|5\": {\n \"acc\": 0.4327485380116959,\n\ \ \"acc_stderr\": 0.03799978644370607\n }\n}\n```" repo_url: https://huggingface.co/None leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|arc:challenge|25_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hellaswag|10_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T22:22:24.729383.parquet' - 
'**/details_harness|hendrycksTest-astronomy|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T22:22:24.729383.parquet' - 
'**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-management|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T22:22:24.729383.parquet' - 
'**/details_harness|hendrycksTest-nutrition|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T22:22:24.729383.parquet' - 
'**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T22:22:24.729383.parquet' - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-management|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T22:22:24.729383.parquet' - 
'**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-07-19T22:22:24.729383.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - 
'**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - 
'**/details_harness|hendrycksTest-college_physics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T22:22:24.729383.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - 
'**/details_harness|hendrycksTest-international_law|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-management|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-marketing|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 
2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - 
'**/details_harness|hendrycksTest-sociology|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-virology|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T22:22:24.729383.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_07_19T22_22_24.729383 path: - '**/details_harness|truthfulqa:mc|0_2023-07-19T22:22:24.729383.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-07-19T22:22:24.729383.parquet' - config_name: original_mmlu_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:abstract_algebra|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:anatomy|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:astronomy|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:business_ethics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_biology|5_2023-08-28T21:10:05.029353.parquet' - 
'**/details_original|mmlu:college_chemistry|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_computer_science|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_mathematics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_medicine|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_physics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:computer_security|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:conceptual_physics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:econometrics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:electrical_engineering|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:formal_logic|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:global_facts|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_biology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_european_history|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_geography|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_physics|5_2023-08-28T21:10:05.029353.parquet' - 
'**/details_original|mmlu:high_school_psychology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_statistics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_us_history|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_world_history|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:human_aging|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:human_sexuality|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:international_law|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:jurisprudence|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:logical_fallacies|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:machine_learning|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:management|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:marketing|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:medical_genetics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:miscellaneous|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:moral_disputes|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:moral_scenarios|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:nutrition|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:philosophy|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:prehistory|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:professional_accounting|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:professional_law|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:professional_medicine|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:professional_psychology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:public_relations|5_2023-08-28T21:10:05.029353.parquet' - 
'**/details_original|mmlu:security_studies|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:sociology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:virology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:world_religions|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:abstract_algebra|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:anatomy|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:astronomy|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:business_ethics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_biology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_chemistry|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_computer_science|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_mathematics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_medicine|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:college_physics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:computer_security|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:conceptual_physics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:econometrics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:electrical_engineering|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:formal_logic|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:global_facts|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_biology|5_2023-08-28T21:10:05.029353.parquet' - 
'**/details_original|mmlu:high_school_chemistry|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_european_history|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_geography|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_physics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_psychology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_statistics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_us_history|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:high_school_world_history|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:human_aging|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:human_sexuality|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:international_law|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:jurisprudence|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:logical_fallacies|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:machine_learning|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:management|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:marketing|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:medical_genetics|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:miscellaneous|5_2023-08-28T21:10:05.029353.parquet' - 
'**/details_original|mmlu:moral_disputes|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:moral_scenarios|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:nutrition|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:philosophy|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:prehistory|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:professional_accounting|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:professional_law|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:professional_medicine|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:professional_psychology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:public_relations|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:security_studies|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:sociology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:virology|5_2023-08-28T21:10:05.029353.parquet' - '**/details_original|mmlu:world_religions|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_abstract_algebra_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:abstract_algebra|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:abstract_algebra|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_anatomy_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:anatomy|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:anatomy|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_astronomy_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:astronomy|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - 
'**/details_original|mmlu:astronomy|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_business_ethics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:business_ethics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:business_ethics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_clinical_knowledge_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:clinical_knowledge|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_college_biology_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:college_biology|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:college_biology|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_college_chemistry_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:college_chemistry|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:college_chemistry|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_college_computer_science_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:college_computer_science|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:college_computer_science|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_college_mathematics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:college_mathematics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:college_mathematics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_college_medicine_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - 
'**/details_original|mmlu:college_medicine|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:college_medicine|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_college_physics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:college_physics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:college_physics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_computer_security_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:computer_security|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:computer_security|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_conceptual_physics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:conceptual_physics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:conceptual_physics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_econometrics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:econometrics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:econometrics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_electrical_engineering_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:electrical_engineering|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:electrical_engineering|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_elementary_mathematics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:elementary_mathematics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_formal_logic_5 
data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:formal_logic|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:formal_logic|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_global_facts_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:global_facts|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:global_facts|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_biology_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_biology|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_biology|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_chemistry_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_chemistry|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_computer_science_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_computer_science|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_european_history_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_european_history|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_european_history|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_geography_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_geography|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - 
'**/details_original|mmlu:high_school_geography|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_government_and_politics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_government_and_politics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_macroeconomics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_macroeconomics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_mathematics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_mathematics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_microeconomics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_microeconomics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_physics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_physics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_physics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_psychology_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_psychology|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_psychology|5_2023-08-28T21:10:05.029353.parquet' - 
config_name: original_mmlu_high_school_statistics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_statistics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_statistics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_us_history_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_us_history|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_us_history|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_high_school_world_history_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:high_school_world_history|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:high_school_world_history|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_human_aging_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:human_aging|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:human_aging|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_human_sexuality_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:human_sexuality|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:human_sexuality|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_international_law_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:international_law|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:international_law|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_jurisprudence_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:jurisprudence|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - 
'**/details_original|mmlu:jurisprudence|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_logical_fallacies_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:logical_fallacies|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:logical_fallacies|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_machine_learning_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:machine_learning|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:machine_learning|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_management_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:management|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:management|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_marketing_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:marketing|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:marketing|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_medical_genetics_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:medical_genetics|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:medical_genetics|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_miscellaneous_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:miscellaneous|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:miscellaneous|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_moral_disputes_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:moral_disputes|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - 
'**/details_original|mmlu:moral_disputes|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_moral_scenarios_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:moral_scenarios|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:moral_scenarios|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_nutrition_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:nutrition|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:nutrition|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_philosophy_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:philosophy|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:philosophy|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_prehistory_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:prehistory|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:prehistory|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_professional_accounting_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:professional_accounting|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:professional_accounting|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_professional_law_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:professional_law|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:professional_law|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_professional_medicine_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:professional_medicine|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - 
'**/details_original|mmlu:professional_medicine|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_professional_psychology_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:professional_psychology|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:professional_psychology|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_public_relations_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:public_relations|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:public_relations|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_security_studies_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:security_studies|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:security_studies|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_sociology_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:sociology|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:sociology|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_us_foreign_policy_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:us_foreign_policy|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_virology_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:virology|5_2023-08-28T21:10:05.029353.parquet' - split: latest path: - '**/details_original|mmlu:virology|5_2023-08-28T21:10:05.029353.parquet' - config_name: original_mmlu_world_religions_5 data_files: - split: 2023_08_28T21_10_05.029353 path: - '**/details_original|mmlu:world_religions|5_2023-08-28T21:10:05.029353.parquet' - split: 
latest path: - '**/details_original|mmlu:world_religions|5_2023-08-28T21:10:05.029353.parquet' - config_name: results data_files: - split: 2023_07_19T22_22_24.729383 path: - results_2023-07-19T22:22:24.729383.parquet - split: 2023_08_28T21_10_05.029353 path: - results_2023-08-28T21:10:05.029353.parquet - split: latest path: - results_2023-08-28T21:10:05.029353.parquet ---

# Dataset Card for Evaluation run of facebook/galactica-30b

## Dataset Description

- **Homepage:**
- **Repository:** https://huggingface.co/facebook/galactica-30b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co

### Dataset Summary

Dataset automatically created during the evaluation run of model [facebook/galactica-30b](https://huggingface.co/facebook/galactica-30b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 119 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_facebook__galactica-30b",
                    "original_mmlu_world_religions_5",
                    split="train")
```

## Latest results

These are the [latest results from run 2023-08-28T21:10:05.029353](https://huggingface.co/datasets/open-llm-leaderboard/details_facebook__galactica-30b/blob/main/results_2023-08-28T21%3A10%3A05.029353.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You can find each in the results and the "latest" split for each eval):

```python
{
    "all": {
        "acc": 0.4666487872974609,
        "acc_stderr": 0.036447127375734134
    },
    "original|mmlu:abstract_algebra|5": {
        "acc": 0.25,
        "acc_stderr": 0.04351941398892446
    },
    "original|mmlu:anatomy|5": {
        "acc": 0.5407407407407407,
        "acc_stderr": 0.04304979692464242
    },
    "original|mmlu:astronomy|5": {
        "acc": 0.506578947368421,
        "acc_stderr": 0.040685900502249704
    },
    "original|mmlu:business_ethics|5": {
        "acc": 0.46,
        "acc_stderr": 0.05009082659620332
    },
    "original|mmlu:clinical_knowledge|5": {
        "acc": 0.5471698113207547,
        "acc_stderr": 0.030635627957961823
    },
    "original|mmlu:college_biology|5": {
        "acc": 0.5694444444444444,
        "acc_stderr": 0.04140685639111502
    },
    "original|mmlu:college_chemistry|5": {
        "acc": 0.32,
        "acc_stderr": 0.04688261722621504
    },
    "original|mmlu:college_computer_science|5": {
        "acc": 0.41,
        "acc_stderr": 0.049431107042371025
    },
    "original|mmlu:college_mathematics|5": {
        "acc": 0.34,
        "acc_stderr": 0.04760952285695235
    },
    "original|mmlu:college_medicine|5": {
        "acc": 0.5028901734104047,
        "acc_stderr": 0.03812400565974834
    },
    "original|mmlu:college_physics|5": {
        "acc": 0.3333333333333333,
        "acc_stderr": 0.04690650298201943
    },
    "original|mmlu:computer_security|5": {
        "acc": 0.65,
        "acc_stderr": 0.0479372485441102
    },
    "original|mmlu:conceptual_physics|5": {
        "acc": 0.4765957446808511,
        "acc_stderr": 0.03265019475033581
    },
    "original|mmlu:econometrics|5": {
        "acc": 0.3684210526315789,
        "acc_stderr": 0.04537815354939391
    },
    "original|mmlu:electrical_engineering|5": {
        "acc": 0.5862068965517241,
        "acc_stderr": 0.04104269211806232
    },
    "original|mmlu:elementary_mathematics|5": {
        "acc": 0.31216931216931215,
        "acc_stderr": 0.023865206836972585
    },
    "original|mmlu:formal_logic|5": {
        "acc": 0.2698412698412698,
        "acc_stderr": 0.03970158273235172
    },
    "original|mmlu:global_facts|5": {
        "acc": 0.31,
        "acc_stderr": 0.04648231987117316
    },
    "original|mmlu:high_school_biology|5": {
        "acc": 0.5548387096774193,
        "acc_stderr": 0.028272410186214906
    },
    "original|mmlu:high_school_chemistry|5": {
        "acc": 0.39408866995073893,
        "acc_stderr": 0.034381579670365446
    },
    "original|mmlu:high_school_computer_science|5": {
        "acc": 0.48,
        "acc_stderr": 0.05021167315686781
    },
    "original|mmlu:high_school_european_history|5": {
        "acc": 0.5818181818181818,
        "acc_stderr": 0.03851716319398393
    },
    "original|mmlu:high_school_geography|5": {
        "acc": 0.5353535353535354,
        "acc_stderr": 0.03553436368828063
    },
    "original|mmlu:high_school_government_and_politics|5": {
        "acc": 0.5595854922279793,
        "acc_stderr": 0.03582724530036093
    },
    "original|mmlu:high_school_macroeconomics|5": {
        "acc": 0.4230769230769231,
        "acc_stderr": 0.025049197876042338
    },
    "original|mmlu:high_school_mathematics|5": {
        "acc": 0.2518518518518518,
        "acc_stderr": 0.026466117538959905
    },
    "original|mmlu:high_school_microeconomics|5": {
        "acc": 0.4579831932773109,
        "acc_stderr": 0.03236361111951941
    },
    "original|mmlu:high_school_physics|5": {
        "acc": 0.3576158940397351,
        "acc_stderr": 0.03913453431177258
    },
    "original|mmlu:high_school_psychology|5": {
        "acc": 0.6293577981651376,
        "acc_stderr": 0.02070745816435298
    },
    "original|mmlu:high_school_statistics|5": {
        "acc": 0.33796296296296297,
        "acc_stderr": 0.03225941352631295
    },
    "original|mmlu:high_school_us_history|5": {
        "acc": 0.4411764705882353,
        "acc_stderr": 0.03484941514429231
    },
    "original|mmlu:high_school_world_history|5": {
        "acc": 0.6033755274261603,
        "acc_stderr": 0.03184399873811225
    },
    "original|mmlu:human_aging|5": {
        "acc": 0.5515695067264574,
        "acc_stderr": 0.033378837362550984
    },
    "original|mmlu:human_sexuality|5": {
        "acc": 0.5801526717557252,
        "acc_stderr": 0.043285772152629715
    },
    "original|mmlu:international_law|5": {
        "acc": 0.6528925619834711,
        "acc_stderr": 0.04345724570292534
    },
    "original|mmlu:jurisprudence|5": {
        "acc": 0.5185185185185185,
        "acc_stderr": 0.04830366024635331
    },
    "original|mmlu:logical_fallacies|5": {
        "acc": 0.50920245398773,
        "acc_stderr": 0.03927705600787443
    },
    "original|mmlu:machine_learning|5": {
        "acc": 0.33035714285714285,
        "acc_stderr": 0.04464285714285712
    },
    "original|mmlu:management|5": {
        "acc": 0.6019417475728155,
        "acc_stderr": 0.048467482539772386
    },
    "original|mmlu:marketing|5": {
        "acc": 0.6965811965811965,
        "acc_stderr": 0.03011821010694266
    },
    "original|mmlu:medical_genetics|5": {
        "acc": 0.58,
        "acc_stderr": 0.049604496374885836
    },
    "original|mmlu:miscellaneous|5": {
        "acc": 0.4942528735632184,
        "acc_stderr": 0.01787878232612923
    },
    "original|mmlu:moral_disputes|5": {
        "acc": 0.4479768786127168,
        "acc_stderr": 0.026772990653361826
    },
    "original|mmlu:moral_scenarios|5": {
        "acc": 0.2435754189944134,
        "acc_stderr": 0.014355911964767864
    },
    "original|mmlu:nutrition|5": {
        "acc": 0.5196078431372549,
        "acc_stderr": 0.028607893699576073
    },
    "original|mmlu:philosophy|5": {
        "acc": 0.48231511254019294,
        "acc_stderr": 0.02838032284907713
    },
    "original|mmlu:prehistory|5": {
        "acc": 0.5401234567901234,
        "acc_stderr": 0.027731022753539277
    },
    "original|mmlu:professional_accounting|5": {
        "acc": 0.35815602836879434,
        "acc_stderr": 0.028602085862759422
    },
    "original|mmlu:professional_law|5": {
        "acc": 0.34028683181225555,
        "acc_stderr": 0.012101217610223794
    },
    "original|mmlu:professional_medicine|5": {
        "acc": 0.44485294117647056,
        "acc_stderr": 0.03018753206032939
    },
    "original|mmlu:professional_psychology|5": {
        "acc": 0.5130718954248366,
        "acc_stderr": 0.020220920829626916
    },
    "original|mmlu:public_relations|5": {
        "acc": 0.5272727272727272,
        "acc_stderr": 0.04782001791380061
    },
    "original|mmlu:security_studies|5": {
        "acc": 0.4,
        "acc_stderr": 0.03136250240935893
    },
    "original|mmlu:sociology|5": {
        "acc": 0.5422885572139303,
        "acc_stderr": 0.035228658640995975
    },
    "original|mmlu:us_foreign_policy|5": {
        "acc": 0.49,
        "acc_stderr": 0.05024183937956912
    },
    "original|mmlu:virology|5": {
        "acc": 0.4397590361445783,
        "acc_stderr": 0.03864139923699121
    },
    "original|mmlu:world_religions|5": {
        "acc": 0.4327485380116959,
        "acc_stderr": 0.03799978644370607
    }
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More
Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
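As a quick sanity check on the numbers above, the per-task accuracies in the results JSON can be macro-averaged with a few lines of plain Python. This is only a sketch over a truncated subset of the tasks; whether the leaderboard's "all" score uses exactly this aggregation is not stated here.

```python
# Macro-average a few of the per-task MMLU accuracies from the results
# JSON above. Only three tasks are included for brevity; the real
# "results" configuration carries all 57 tasks plus the "all" aggregate.
results = {
    "original|mmlu:abstract_algebra|5": {"acc": 0.25},
    "original|mmlu:anatomy|5": {"acc": 0.5407407407407407},
    "original|mmlu:astronomy|5": {"acc": 0.506578947368421},
}

# Average accuracy over the selected tasks (skipping any "all" entry).
task_accs = [v["acc"] for k, v in results.items() if k != "all"]
macro_acc = sum(task_accs) / len(task_accs)
```

With the full 57-task dictionary, the same two lines reproduce an overall score comparable to the "all" entry.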
roa7n/patched_1000_test_p_40_m2_predictions
--- dataset_info: features: - name: id dtype: string - name: sequence_str dtype: string - name: label dtype: int64 - name: features sequence: float64 - name: m2_preds dtype: float32 splits: - name: train num_bytes: 8380474294 num_examples: 942535 download_size: 7949577002 dataset_size: 8380474294 --- # Dataset Card for "patched_1000_test_p_40_m2_predictions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ibivibiv/alpaca_tiny6
--- dataset_info: features: - name: output dtype: string - name: instruction dtype: string - name: input dtype: string splits: - name: train num_bytes: 460477309 num_examples: 290901 download_size: 266456741 dataset_size: 460477309 configs: - config_name: default data_files: - split: train path: data/train-* ---
HydraLM/GPT4-LLM-Cleaned_standardized
--- dataset_info: features: - name: message dtype: string - name: message_type dtype: string - name: message_id dtype: int64 - name: conversation_id dtype: int64 splits: - name: train num_bytes: 41862648 num_examples: 163704 download_size: 0 dataset_size: 41862648 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "GPT4-LLM-Cleaned_standardized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
realnetworks-kontxt/arctic-hs
---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- en
tags:
- speech
- speech-classification
- text-to-speech
- spoofing
- accents
pretty_name: ARCTIC-HS
size_categories:
- 10K<n<100K
---

# ARCTIC-HS

An extension of the [CMU_ARCTIC](http://festvox.org/cmu_arctic/) and [L2-ARCTIC](https://psi.engr.tamu.edu/l2-arctic-corpus/) datasets for synthetic speech detection using text-to-speech, featured in the paper **Synthetic speech detection with Wav2Vec 2.0 in various language settings**. Specifically, the `symmetric` variants were used.

This dataset is 1 of 3 used in the paper, the others being:
- [FLEURS-HS](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs) - the default train, dev and test sets
- [FLEURS-HS VITS](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs-vits) - test set containing (generally) more difficult synthetic samples - separated due to different licensing

## Dataset Details

### Dataset Description

The dataset features 3 parts obtained from the 2 original datasets:
- CMU (native) non-US English speakers
- CMU (native) US English speakers
- L2 (non-native) English speakers

The original ARCTIC samples are used as `human` samples, while `synthetic` samples are generated using [Google Cloud Text-To-Speech](https://cloud.google.com/text-to-speech).
The resulting `symmetric` datasets feature exactly twice the samples of the original ones, but we also provide:
- human samples that couldn't be paired
  - 4 speakers in their entirety that we couldn't pair with a TTS voice
  - a small number of utterances unrelated to the A and B ARCTIC samples
- synthetic samples that couldn't be paired
  - mostly when a human speaker didn't read the B ARCTIC samples

- **Curated by:** [KONTXT by RealNetworks](https://realnetworks.com/kontxt)
- **Funded by:** [RealNetworks](https://realnetworks.com/)
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) for the code, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) for the dataset, however:
  - the human part of the dataset is under a **custom CMU license**
    - it should be compatible with **CC BY 4.0**
  - the human part of the L2 dataset is under **CC BY-NC 4.0**

### Dataset Sources

The original ARCTIC sets were downloaded from their original sources.

- **CMU_ARCTIC Repository:** [festvox.org](http://festvox.org/cmu_arctic/)
- **L2-ARCTIC Repository:** [tamu.edu](https://psi.engr.tamu.edu/l2-arctic-corpus/)
- **CMU_ARCTIC Paper:** [cmu.edu](https://www.cs.cmu.edu/~awb/papers/ssw5/arctic.pdf)
- **L2-ARCTIC Paper:** [tamu.edu](https://psi.engr.tamu.edu/wp-content/uploads/2018/08/zhao2018interspeech.pdf)
- **Paper:** Synthetic speech detection with Wav2Vec 2.0 in various language settings

## Uses

This dataset is best used as a test set for accents. Each sample contains an `Audio` feature, and a label: `human` or `synthetic`.
### Direct Use

The following snippet of code demonstrates loading the CMU non-US English speaker part of the dataset:

```python
from datasets import load_dataset

arctic_hs = load_dataset(
    "realnetworks-kontxt/arctic-hs",
    "cmu_non-us",
    split="test",
    trust_remote_code=True,
)
```

To load a different part, change `cmu_non-us` into one of the following:
- `cmu_us` for CMU (native) US English speakers
- `l2` for L2 (non-native) English speakers

This dataset only has a `test` split.

To load only the paired samples, append `_symmetric` to the name. For example, `cmu_non-us` will load the test set also containing human and synthetic samples without their counterpart, while `cmu_non-us_symmetric` will only load samples where there is both a human and a synthetic variant. This is useful if you want to have perfectly balanced labels within speakers, and if you wish to exclude speakers for which there are no TTS counterparts at all. This is also the family of datasets used in the paper.

The `trust_remote_code=True` parameter is necessary because this dataset uses a custom loader. To see which code is being run, check out the [loading script](./arctic-hs.py).

## Dataset Structure

The dataset files are contained in the [data directory](https://huggingface.co/datasets/realnetworks-kontxt/arctic-hs/tree/main/data). There is 1 directory per part. Within those directories, there are 2 further directories:
- `splits`
- `pairs`

Within the `splits` folder, there is 1 file per split:
- `test.tar.gz`

Those `.tar.gz` files contain 2 directories:
- `human`
- `synthetic`

Each of these directories contains `.wav` files. Keep in mind that these directories can't be merged, as they share most of their file names. An identical file name implies a speaker-voice pair, e.g. `human/arctic_a0001.wav` and `synthetic/arctic_a0001.wav`.

The `pairs` folder contains a list of file names within each speaker, and whether or not there is a human-synthetic pair.
Based on that metadata we determine which samples appear in `symmetric` datasets.

Back to the part directories, each contains 2 metadata files, which are not used in the loaded dataset, but might be useful to researchers:
- `speaker-metadata.csv` - contains the speaker IDs paired with their speech properties
- `voice-metadata.csv` - contains speaker-TTS name pairs

Finally, the `data` root contains a single metadata file, `prompts.csv`, which, as the name suggests, contains the prompt transcripts. The only samples for which there are no transcripts are the ARCTIC-C ones, for which we couldn't find a source on the internet.

### Sample

A sample contains an `Audio` feature `audio` and a string `label`.

```
{
  'audio': {
    'path': 'ahw/human/arctic_a0001.wav',
    'array': array([0., 0., 0., ..., 0., 0., 0.]),
    'sampling_rate': 16000
  },
  'label': 'human'
}
```

## Citation

The dataset is featured alongside our paper, **Synthetic speech detection with Wav2Vec 2.0 in various language settings**, which will be published at the IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW). We'll provide links once it's available online.

**BibTeX:**

Note, the following BibTeX is incomplete - we'll update it once the actual one is known.

```
@inproceedings{dropuljic-ssdww2v2ivls,
  author={Dropuljić, Branimir and Šuflaj, Miljenko and Jertec, Andrej and Obadić, Leo},
  booktitle={2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  title={Synthetic speech detection with Wav2Vec 2.0 in various language settings},
  year={2024},
  volume={},
  number={},
  pages={1-5},
  keywords={Synthetic speech detection;text-to-speech;wav2vec 2.0;spoofing attack;multilingualism},
  doi={}
}
```

## Dataset Card Authors

- [Miljenko Šuflaj](https://huggingface.co/suflaj)

## Dataset Card Contact

- [Miljenko Šuflaj](mailto:msuflaj@realnetworks.com)
huggingartists/rocket
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/rocket" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.424035 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: 
url(&#39;https://images.genius.com/0fb709925134799103886db5e722ef73.1000x1000x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/rocket"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">ROCKET</div> <a href="https://genius.com/artists/rocket"> <div style="text-align: center; font-size: 14px;">@rocket</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/rocket). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/rocket") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |134| -| -| 'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/rocket") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk} year=2021 } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
RIW/small-coco-wm_1_4
--- dataset_info: features: - name: image dtype: image - name: caption dtype: string - name: url dtype: string - name: key dtype: string - name: status dtype: string - name: error_message dtype: 'null' - name: width dtype: int64 - name: height dtype: int64 - name: original_width dtype: int64 - name: original_height dtype: int64 - name: exif dtype: string - name: sha256 dtype: string splits: - name: train num_bytes: 1889862708.952 num_examples: 9984 - name: validation num_bytes: 1877853501.979 num_examples: 9977 download_size: 883834147 dataset_size: 3767716210.9309998 --- # Dataset Card for "small-coco-wm_1_4" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
reciprocate/vicuna_benchmark_pairwise
--- dataset_info: features: - name: prompt dtype: string - name: selected dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 168709 num_examples: 80 download_size: 100327 dataset_size: 168709 --- # Dataset Card for "vicuna_benchmark_pairwise" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gsstein/25-percent-human-dataset-llama-og
--- dataset_info: features: - name: id dtype: string - name: url dtype: string - name: title dtype: string - name: summary dtype: string - name: text dtype: string - name: generated dtype: bool - name: prompt dtype: string splits: - name: train num_bytes: 85994822 num_examples: 15326 - name: test num_bytes: 3054139 num_examples: 576 - name: validation num_bytes: 3251995 num_examples: 576 download_size: 57087958 dataset_size: 92300956 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* ---
iarbel/cimphony_legal_ds_v2
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: source dtype: string - name: messages list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 23122251 num_examples: 7935 - name: test num_bytes: 2557775 num_examples: 882 download_size: 10302225 dataset_size: 25680026 --- # Dataset Card for "cimphony_legal_ds_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
k2141255/RealMedQA
---
license: cc-by-nc-4.0
task_categories:
- question-answering
language:
- en
tags:
- medical
pretty_name: d
size_categories:
- 1K<n<10K
---

# RealMedQA

RealMedQA is a biomedical question answering dataset consisting of realistic question and answer pairs. The questions were created by medical students and a large language model (LLM), while the answers are guideline recommendations provided by the UK's National Institute for Health and Care Excellence (NICE).

## Dataset creation

### Data collection

Initially, 12,543 guidelines were retrieved using the [NICE syndication API](https://www.nice.org.uk/about/what-we-do/nice-syndication-api). As we were interested only in the guidelines that pertain to clinical practice, we used only the guidelines that came under 'Conditions and diseases', which reduced the number to 7,385.

### Question generation

We created an instruction sheet with examples, which we provided to both the humans (medical students) and the LLM to generate several questions for each guideline recommendation. The instruction sheet was fed as a prompt along with each recommendation to the LLM, while the humans created the questions using Google Forms.

### Question verification

Both the QA pairs generated by the LLM and those generated by human annotators were verified by humans for quality. The verifiers were asked whether each question:
* Could plausibly be asked by a clinician in practice;
* Was adequately answered by the accompanying recommendation.

A total of 800 human QA pairs and 400 LLM QA pairs were verified.
## Dataset structure

The dataset is structured according to the following columns:
* **Generator:** whether the question was generated by a 'Human' or 'LLM';
* **Question:** the question created by either the human or the LLM;
* **Recommendation:** the recommendation that is supposed to address the question;
* **Plausible:** whether the question could plausibly be asked by a clinician in practice (as assessed by the verifier);
* **Answered:** whether the question is adequately answered by the recommendation (as assessed by the verifier).

## Contact

* Gregory Kell: gregory.kell@kcl.ac.uk
* Iain Marshall: iain.marshall@kcl.ac.uk
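## Usage sketch

Once loaded, the fully verified subset can be selected by keeping only rows where both `Plausible` and `Answered` are true. A minimal sketch over hypothetical in-memory rows (the values below are illustrative, not taken from the dataset, and the loading step itself is omitted):

```python
# Toy rows mirroring the RealMedQA columns described above;
# the contents are made up for illustration only.
rows = [
    {"Generator": "Human", "Question": "Which patients should be offered a statin?",
     "Recommendation": "Offer atorvastatin for primary prevention ...",
     "Plausible": True, "Answered": True},
    {"Generator": "LLM", "Question": "What font does the guideline use?",
     "Recommendation": "Offer atorvastatin for primary prevention ...",
     "Plausible": False, "Answered": False},
]

# Keep only QA pairs that passed both verification checks.
verified = [r for r in rows if r["Plausible"] and r["Answered"]]
print(len(verified))  # → 1
```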
sagakarl/ms151
--- license: mit ---
kerinin/hackernews
--- license: apache-2.0 ---
cxdsadXD/baldi
--- language: - en size_categories: - 1M<n<10M ---
huginfdel/pizzasierrade
--- license: cc ---
heegyu/open-korean-instructions
---
license: mit
---

This is a merge of 4 Korean chatbot training datasets. Of these, the ShareGPT data is multi-turn.

For the code used to generate and merge the data, see https://github.com/HeegyuKim/open-korean-instructions

| Name | # | Type |
|---|---|---|
| [KoAlpaca v1.0](https://huggingface.co/datasets/Bingsu/ko_alpaca_data) | 52K | single-turn |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 21K | single-turn |
| [ShareGPT DeepL translation](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) | 620K (single-turn), 84K (multi-turn) | multi-turn, single-turn |
| [OIG-small-chip2-ko](https://huggingface.co/datasets/heegyu/OIG-small-chip2-ko) | 210K | single-turn |
| [Korquad-Chat](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |

All of the data share a unified format. Speakers are distinguished by line breaks and the three tokens `<sys>`, `<usr>`, and `<bot>`. In the korquad-chat data, when the user and bot address each other, `<|bot|>` and `<|user|>` are used.

```
{"source": "koalpaca-v1.0", "text": "<usr> 홀수 중 하나를 밝히세요.\n<sys> 트위터, 인스타그램, 텔레그램\n<bot> 텔레그램입니다."}
{"source": "korquad-chat", "text": " <sys>1984년 9월부터 덴노지 역 ~ 히네노 역간에 운전을 개시한 '홈 라이너 이즈미'가 전신이다. 특급 구로시오의 381계 전동차의 히네노 전차구 입선 회송 열차를 좌석 정리권을 필요로 하는 정원제 열차로서 운행해온 것으로 간사이 지방 최초의 홈 라이너였다. 1986년 11월 개정에 따라 운전 구간이 와카야마 역까지 연장되어 한와 라이너로 개칭되어 현재에 이르게 되었다. 이전에는 평일에 상행 3편, 하행 6편이 운전되었지만 일부가 신오사카 역이나 교토 역 방면 발착 특급 구로시오로 변경되어 2002년 3월 이후에는 상행 2편, 하행 4편으로 축소되었다. 2009년 3월의 시간표 개정부터는 휴일 시간대의 운전이 중단되었으며 같은 해 6월 1일부터는 전면 금연화되었다.\n <usr> 안녕하세요, <|bot|>. \"한와_선\"에 대해 알고 계시나요?\n <bot> 네, 전체적으로 알고 있습니다. 1984년에 운전을 개시한 \"홈 라이너 이즈미\"가 전신이며, 간사이 지방 최초의 홈 라이너였습니다. 1986년에는 와카야마 역까지 연장된 후 \"한와 라이너\"로 개칭되었습니다.\n <usr> 그렇군요. 이전에는 평일에 몇 편이 운행되었나요?\n <bot> 평일에 상행 3편, 하행 6편이 운전되었습니다. 하지만 2002년 이후로는 상행 2편, 하행 4편으로 축소되었습니다.\n <usr> 그렇군요. 혹시 특급 구로시오와 연관이 있을까요?\n <bot> 네, \"한와 라이너\"는 특급 구로시오의 381계 전동차의 히네노 전차구 입선 회송 열차를 좌석 정리권을 필요로 하는 정원제 열차로서 운행되었습니다." }
```
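The unified speaker format can be split back into (speaker, utterance) turns with a short helper. This is an illustrative sketch, not part of the original release:

```python
import re

# Matches one speaker token and everything up to the next token (or the end).
# Note that <|bot|>/<|user|> mentions inside korquad-chat utterances are not
# speaker tokens, and this pattern correctly leaves them inside the utterance.
_TURN = re.compile(r"<(sys|usr|bot)>\s*(.*?)\s*(?=<(?:sys|usr|bot)>|\Z)", re.DOTALL)

def parse_turns(text):
    """Split a sample's text into (speaker, utterance) pairs."""
    return [(m.group(1), m.group(2)) for m in _TURN.finditer(text)]

sample = "<usr> 홀수 중 하나를 밝히세요.\n<sys> 트위터, 인스타그램, 텔레그램\n<bot> 텔레그램입니다."
for speaker, utterance in parse_turns(sample):
    print(speaker, utterance)
```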
antoniopagnotts/block-world-problem-v1
--- license: mit ---
alfredplpl/genai-terminology-en-ja
---
license: apache-2.0
language:
- en
- ja
size_categories:
- n<1K
---

This is a Japanese-English glossary of generative-AI terminology. Accuracy is not guaranteed, but keeping it in the context of a model such as GPT-4 should help it translate these terms cleanly.
csebuetnlp/xnli_bn
--- annotations_creators: - machine-generated language_creators: - found multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended task_categories: - text-classification task_ids: - natural-language-inference language: - bn license: - cc-by-nc-sa-4.0 --- # Dataset Card for `xnli_bn` ## Table of Contents - [Dataset Card for `xnli_bn`](#dataset-card-for-xnli_bn) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Usage](#usage) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert) - **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language 
Understanding"**](https://arxiv.org/abs/2101.00204)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)

### Dataset Summary

This is a Natural Language Inference (NLI) dataset for Bengali, curated using the subset of MNLI data used in XNLI and a state-of-the-art English to Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/).**

### Supported Tasks and Leaderboards

[More information needed](https://github.com/csebuetnlp/banglabert)

### Languages

* `Bengali`

### Usage

```python
from datasets import load_dataset

dataset = load_dataset("csebuetnlp/xnli_bn")
```

## Dataset Structure

### Data Instances

One example from the dataset is given below in JSON format.

```
{
  "sentence1": "আসলে, আমি এমনকি এই বিষয়ে চিন্তাও করিনি, কিন্তু আমি এত হতাশ হয়ে পড়েছিলাম যে, শেষ পর্যন্ত আমি আবার তার সঙ্গে কথা বলতে শুরু করেছিলাম",
  "sentence2": "আমি তার সাথে আবার কথা বলিনি।",
  "label": "contradiction"
}
```

### Data Fields

The data fields are as follows:

- `sentence1`: a `string` feature indicating the premise.
- `sentence2`: a `string` feature indicating the hypothesis.
- `label`: a classification label, where possible values are `contradiction` (0), `entailment` (1), `neutral` (2).

### Data Splits

| split |count |
|----------|--------|
|`train`| 381449 |
|`validation`| 2419 |
|`test`| 4895 |

## Dataset Creation

The dataset curation procedure was the same as for the [XNLI](https://aclanthology.org/D18-1269/) dataset: we translated the [MultiNLI](https://aclanthology.org/N18-1101/) training data using the English to Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Since errors may be introduced during automatic translation, we used the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and original sentences to compute their similarity. All sentences below a similarity threshold of 0.70 were discarded.
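The filtering step above amounts to thresholding a similarity score between the embedding of each original sentence and that of its translation. A minimal sketch with toy vectors, assuming cosine similarity as the similarity measure (real LaBSE embeddings are 768-dimensional):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy (original, translation) embedding pairs standing in for LaBSE vectors.
pairs = [
    ([1.0, 0.0, 0.2], [0.9, 0.1, 0.3]),  # faithful translation -> high similarity
    ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]),  # unrelated pair -> low similarity
]

THRESHOLD = 0.70  # the cutoff used for this dataset
kept = [p for p in pairs if cosine(*p) >= THRESHOLD]
print(len(kept))  # → 1
```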
### Curation Rationale [More information needed](https://github.com/csebuetnlp/banglabert) ### Source Data [XNLI](https://aclanthology.org/D18-1269/) #### Initial Data Collection and Normalization [More information needed](https://github.com/csebuetnlp/banglabert) #### Who are the source language producers? [More information needed](https://github.com/csebuetnlp/banglabert) ### Annotations [More information needed](https://github.com/csebuetnlp/banglabert) #### Annotation process [More information needed](https://github.com/csebuetnlp/banglabert) #### Who are the annotators? [More information needed](https://github.com/csebuetnlp/banglabert) ### Personal and Sensitive Information [More information needed](https://github.com/csebuetnlp/banglabert) ## Considerations for Using the Data ### Social Impact of Dataset [More information needed](https://github.com/csebuetnlp/banglabert) ### Discussion of Biases [More information needed](https://github.com/csebuetnlp/banglabert) ### Other Known Limitations [More information needed](https://github.com/csebuetnlp/banglabert) ## Additional Information ### Dataset Curators [More information needed](https://github.com/csebuetnlp/banglabert) ### Licensing Information Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders. ### Citation Information If you use the dataset, please cite the following paper: ``` @misc{bhattacharjee2021banglabert, title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding}, author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. 
Sohel Rahman and Anindya Iqbal and Rifat Shahriyar}, year={2021}, eprint={2101.00204}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
Aryan2406/english-marathi
--- task_categories: - translation language: - en - mr size_categories: - 10K<n<100K ---
Kaue123456/VegetaAlfredoRollo
--- license: openrail ---
kunishou/HelpSteer-35k-ja
---
license: cc-by-4.0
language:
- ja
---

This dataset is an automatic Japanese translation of [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer), the trial dataset for SteerLM released by NVIDIA. Use it when you want to try out alignment with SteerLM.

For how to train an LLM with SteerLM, see the following URLs:

- Announcing NVIDIA SteerLM : https://developer.nvidia.com/blog/announcing-steerlm-a-simple-and-practical-technique-to-customize-llms-during-inference
- NeMo Aligner : https://github.com/NVIDIA/NeMo-Aligner
- SteerLM training user guide : https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/steerlm.html

[Reference] [SteerLM: A look into a new alignment technique for freely steering LLMs](https://note.com/aisatoshi/n/n18c018506da2)
AdapterOcean/med_alpaca_standardized_cluster_61
--- dataset_info: features: - name: text dtype: string - name: conversation_id dtype: int64 - name: embedding sequence: float64 - name: cluster dtype: int64 splits: - name: train num_bytes: 127598566 num_examples: 13439 download_size: 36504837 dataset_size: 127598566 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "med_alpaca_standardized_cluster_61" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kavinprasanth/Sample_dataset
--- dataset_info: features: - name: 'Unnamed: 0' dtype: string splits: - name: train num_bytes: 11050 num_examples: 50 download_size: 6456 dataset_size: 11050 configs: - config_name: default data_files: - split: train path: data/train-* ---
tr416/test_dataset_20231007_171958
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: input_ids sequence: int32 - name: attention_mask sequence: int8 splits: - name: train num_bytes: 762696.0 num_examples: 297 - name: test num_bytes: 7704.0 num_examples: 3 download_size: 73618 dataset_size: 770400.0 --- # Dataset Card for "test_dataset_20231007_171958" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cj-mills/coco-bounding-box-toy-dataset
--- license: mit ---
valerieyuan/bimcv_covid19_all_cxr
--- dataset_info: features: - name: zip_name dtype: string - name: file_path dtype: string - name: image_name dtype: string - name: date dtype: string - name: subjectId dtype: string - name: sessionId dtype: string - name: acq_num dtype: string - name: run_num dtype: string - name: loc1 dtype: string - name: loc2 dtype: string - name: labels dtype: string - name: age dtype: int64 - name: gender dtype: string - name: label dtype: string splits: - name: positive num_bytes: 25884321 num_examples: 80890 - name: negative num_bytes: 1774039 num_examples: 5594 download_size: 5160419 dataset_size: 27658360 configs: - config_name: default data_files: - split: positive path: data/positive-* - split: negative path: data/negative-* ---
sayan1101/final_summarize
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: prompt dtype: string splits: - name: train num_bytes: 869663385 num_examples: 200000 download_size: 254181054 dataset_size: 869663385 --- # Dataset Card for "final_summarize" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/SST2_train
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': negative '1': positive - name: id dtype: int64 splits: - name: train num_bytes: 117277546.0 num_examples: 6920 download_size: 114148970 dataset_size: 117277546.0 --- # Dataset Card for "SST2_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AnonymousSubmissionOnly/Chaizi
--- license: mit ---
qwedsacf/competition_math
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - mit multilinguality: - monolingual pretty_name: Mathematics Aptitude Test of Heuristics (MATH) size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation task_ids: [] tags: - explanation-generation --- # Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/hendrycks/math - **Repository:** https://github.com/hendrycks/math - **Paper:** https://arxiv.org/pdf/2103.03874.pdf - **Leaderboard:** N/A - **Point of Contact:** Dan Hendrycks ### Dataset Summary The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems from mathematics competitions, including the AMC 10, AMC 12, AIME, and more. 
Each problem in MATH has a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances A data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's `\boxed` tag. An example from the dataset is: ``` {'problem': 'A board game spinner is divided into three parts labeled $A$, $B$ and $C$. The probability of the spinner landing on $A$ is $\\frac{1}{3}$ and the probability of the spinner landing on $B$ is $\\frac{5}{12}$. What is the probability of the spinner landing on $C$? Express your answer as a common fraction.', 'level': 'Level 1', 'type': 'Counting & Probability', 'solution': 'The spinner is guaranteed to land on exactly one of the three regions, so we know that the sum of the probabilities of it landing in each region will be 1. If we let the probability of it landing in region $C$ be $x$, we then have the equation $1 = \\frac{5}{12}+\\frac{1}{3}+x$, from which we have $x=\\boxed{\\frac{1}{4}}$.'} ``` ### Data Fields * `problem`: The competition math problem. * `solution`: The step-by-step solution. * `level`: The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'. * `type`: The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information https://github.com/hendrycks/math/blob/main/LICENSE ### Citation Information ```bibtex @article{hendrycksmath2021, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, journal={arXiv preprint arXiv:2103.03874}, year={2021} } ```
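Since every solution encodes its final answer inside a `\boxed{...}` tag, a common preprocessing step is pulling that answer out. Below is a minimal sketch (the `extract_boxed` helper is illustrative, not part of the dataset) that balances nested braces instead of relying on a flat regex, since answers like `\frac{1}{4}` contain braces of their own:

```python
def extract_boxed(solution: str) -> str:
    """Return the contents of the last \\boxed{...} tag, balancing nested braces."""
    marker = r"\boxed{"
    start = solution.rfind(marker)
    if start == -1:
        return ""
    depth = 1
    out = []
    for ch in solution[start + len(marker):]:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                break
        out.append(ch)
    return "".join(out)

# The example solution from the data instance above ends with:
solution = r"... from which we have $x=\boxed{\frac{1}{4}}$."
print(extract_boxed(solution))  # -> \frac{1}{4}
```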
DFKI-SLT/sciarg
--- annotations_creators: - expert-generated language: - en language_creators: - expert-generated license: [] multilinguality: - monolingual pretty_name: SciArg size_categories: - 1K<n<10K source_datasets: - dr inventor corpus tags: - argument mining - scientific text - relation extraction - argumentative discourse unit recognition task_categories: - token-classification task_ids: [] --- # Dataset Card for "sciarg" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci) - **Repository:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci) - **Paper:** [An argument-annotated corpus of scientific publications](https://aclanthology.org/W18-5206.pdf) - **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More 
Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing fine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific publications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of scientific writing. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The language in the dataset is English. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `document_id`: the base file name, e.g. "A28" - `text`: the parsed text of the scientific publication in the XML format - `text_bound_annotations`: span annotations that mark argumentative discourse units (ADUs). Each entry has the following fields: `offsets`, `text`, `type`, and `id`. - `relations`: binary relation annotations that mark the argumentative relations that hold between a head and a tail ADU. Each entry has the following fields: `id`, `head`, `tail`, and `type` where `head` and `tail` each have the fields: `ref_id` and `role`. ### Data Splits The dataset consists of a single `train` split that has 40 documents. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{lauscher2018b, title = {An argument-annotated corpus of scientific publications}, booktitle = {Proceedings of the 5th Workshop on Argument Mining}, publisher = {Association for Computational Linguistics}, author = {Lauscher, Anne and Glava\v{s}, Goran and Ponzetto, Simone Paolo}, address = {Brussels, Belgium}, year = {2018}, pages = {40–46} } ``` ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
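To make the span/relation layout described in the Data Fields section concrete, here is a minimal sketch over a toy document dict (the concrete label strings are illustrative examples, not the full SciArg tag set, and `resolve_relations` is a hypothetical helper, not a dataset API):

```python
# Toy document following the field layout above; label strings are illustrative.
doc = {
    "text_bound_annotations": [
        {"id": "T1", "type": "background_claim", "offsets": [[0, 21]],
         "text": "prior work is limited"},
        {"id": "T2", "type": "own_claim", "offsets": [[23, 50]],
         "text": "our approach addresses this"},
    ],
    "relations": [
        {"id": "R1", "type": "supports",
         "head": {"ref_id": "T2", "role": "Arg1"},
         "tail": {"ref_id": "T1", "role": "Arg2"}},
    ],
}

# Index ADUs by id so relations can be resolved back to text.
adus = {a["id"]: a["text"] for a in doc["text_bound_annotations"]}

def resolve_relations(relations):
    """Map each relation's head/tail ref_id back to the ADU text."""
    return [(adus[r["head"]["ref_id"]], r["type"], adus[r["tail"]["ref_id"]])
            for r in relations]

print(resolve_relations(doc["relations"]))
# -> [('our approach addresses this', 'supports', 'prior work is limited')]
```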
liuyanchen1015/MULTI_VALUE_mrpc_null_genitive
--- dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: label dtype: int64 - name: idx dtype: int64 - name: value_score dtype: int64 splits: - name: test num_bytes: 119706 num_examples: 436 - name: train num_bytes: 260557 num_examples: 933 - name: validation num_bytes: 34480 num_examples: 122 download_size: 276164 dataset_size: 414743 --- # Dataset Card for "MULTI_VALUE_mrpc_null_genitive" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tsuinzues/doutorabrinquedos
--- license: openrail ---
minh21/COVID-QA-sentence-transformer-biencoder-data-75_25
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: question dtype: string - name: positive dtype: string - name: negative dtype: string - name: document_id dtype: int64 splits: - name: train num_bytes: 25188652 num_examples: 12274 - name: test num_bytes: 2473938 num_examples: 1360 download_size: 1946559 dataset_size: 27662590 --- # Dataset Card for "COVID-QA-sentence-transformer-biencoder-data-75_25" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DialogueCharacter/chinese_general_instruction_with_reward_score
--- dataset_info: features: - name: input dtype: string - name: output dtype: string - name: reward_score dtype: float64 splits: - name: train num_bytes: 1634095908 num_examples: 1169201 download_size: 998968518 dataset_size: 1634095908 --- # Dataset Card for "chinese_general_instruction_with_reward_score" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ConseggioLigure/lijnews-instruct-lij-ita
--- license: cc-by-4.0 task_categories: - conversational - translation pretty_name: LigurianNews lij-ita translation dataset (instruction-style) dataset_info: features: - name: inputs dtype: string - name: targets dtype: string - name: template_id dtype: int64 - name: template_lang sequence: string splits: - name: train num_bytes: 288462 num_examples: 153 - name: dev num_bytes: 47500 num_examples: 27 - name: test num_bytes: 67307 num_examples: 36 download_size: 292727 dataset_size: 403269 configs: - config_name: default data_files: - split: train path: data/train-* - split: dev path: data/dev-* - split: test path: data/test-* --- This is a Ligurian→Italian document-level translation dataset. The original data comes from the [LigurianNews corpus](https://github.com/ConseggioLigure/data/), and it has been converted to the instruction format. The prompts, written in Ligurian, ask the model to "translate the following text to Italian". There are several variants of the prompt which were randomly sampled for each sentence: ``` Traduxi in italian: \<sentence> Traduxi da-o zeneise à l’italian: \<sentence> Traduxi da-o ligure à l’italian: \<sentence> Traduxi sto testo in italian: \<sentence> Traduxi in lengua italiaña: \<sentence> Traduxi sto testo da-o zeneise à l’italian: \<sentence> Traduxi sto testo da-o ligure à l’italian: \<sentence> Comm’à l’é a traduçion italiaña de sto testo? \<sentence> Quæ a l’é a traduçion italiaña de sto testo? \<sentence> Ti peu tradue sto testo in italian? \<sentence> ``` The prompt template used for each dataset entry is referenced in the column `template_id`, with ids ranging from 1 to 10 according to the order given above. The targets are always prefixed with the string _"A traduçion in italian do testo a l’é: \<sentence>"_ ("The Italian translation of the text is:"). 
The correspondence between `template_id`, prompt template and target template is therefore: ``` [ (1, "Traduxi in italian:\n", "A traduçion in italian do testo a l’é:\n"), (2, "Traduxi da-o zeneise à l’italian:\n", "A traduçion in italian do testo a l’é:\n"), (3, "Traduxi da-o ligure à l’italian:\n", "A traduçion in italian do testo a l’é:\n"), (4, "Traduxi sto testo in italian:\n", "A traduçion in italian do testo a l’é:\n"), (5, "Traduxi in lengua italiaña:\n", "A traduçion in italian do testo a l’é:\n"), (6, "Traduxi sto testo da-o zeneise à l’italian:\n", "A traduçion in italian do testo a l’é:\n"), (7, "Traduxi sto testo da-o ligure à l’italian:\n", "A traduçion in italian do testo a l’é:\n"), (8, "Comm’à l’é a traduçion italiaña de sto testo?\n", "A traduçion in italian do testo a l’é:\n"), (9, "Quæ a l’é a traduçion italiaña de sto testo?\n", "A traduçion in italian do testo a l’é:\n"), (10, "Ti peu tradue sto testo in italian?\n", "A traduçion in italian do testo a l’é:\n"), ] ``` The dataset contains 153 train samples, 27 validation samples and 36 test samples.
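Given the mapping above, reconstructing the full `(inputs, targets)` pair for one row from its `template_id` and a sentence pair can be sketched as follows (`build_pair` is an illustrative helper, not part of the dataset; the template strings are copied from the table above):

```python
# Prompt templates keyed by template_id, copied from the correspondence table.
PROMPTS = {
    1: "Traduxi in italian:\n",
    2: "Traduxi da-o zeneise à l’italian:\n",
    3: "Traduxi da-o ligure à l’italian:\n",
    4: "Traduxi sto testo in italian:\n",
    5: "Traduxi in lengua italiaña:\n",
    6: "Traduxi sto testo da-o zeneise à l’italian:\n",
    7: "Traduxi sto testo da-o ligure à l’italian:\n",
    8: "Comm’à l’é a traduçion italiaña de sto testo?\n",
    9: "Quæ a l’é a traduçion italiaña de sto testo?\n",
    10: "Ti peu tradue sto testo in italian?\n",
}
TARGET_PREFIX = "A traduçion in italian do testo a l’é:\n"

def build_pair(template_id, lij_text, ita_text):
    """Rebuild the (inputs, targets) strings for one dataset row."""
    return PROMPTS[template_id] + lij_text, TARGET_PREFIX + ita_text

inputs, targets = build_pair(4, "Bon giorno à tutti.", "Buongiorno a tutti.")
```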
hybrid_qa
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: [] paperswithcode_id: hybridqa pretty_name: HybridQA tags: - multihop-tabular-text-qa dataset_info: config_name: hybrid_qa features: - name: question_id dtype: string - name: question dtype: string - name: table_id dtype: string - name: answer_text dtype: string - name: question_postag dtype: string - name: table struct: - name: url dtype: string - name: title dtype: string - name: header sequence: string - name: data list: - name: value dtype: string - name: urls list: - name: url dtype: string - name: summary dtype: string - name: section_title dtype: string - name: section_text dtype: string - name: uid dtype: string - name: intro dtype: string splits: - name: train num_bytes: 2745712265 num_examples: 62682 - name: validation num_bytes: 153511944 num_examples: 3466 - name: test num_bytes: 148795847 num_examples: 3463 download_size: 217436855 dataset_size: 3048020056 --- # Dataset Card for HybridQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://hybridqa.github.io/index.html - **Repository:** [GitHub](https://github.com/wenhuchen/HybridQA) - **Paper:** [HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data](https://arxiv.org/abs/1909.05358) - **Leaderboard:** [HybridQA Competition](https://competitions.codalab.org/competitions/24420) - **Point of Contact:** [Wenhu Chen](wenhuchen@cs.ucsb.edu) ### Dataset Summary Existing question answering datasets focus on dealing with homogeneous information, based either only on text or KB/Table information alone. However, as human knowledge is distributed over heterogeneous forms, using homogeneous information alone might lead to severe coverage problems. To fill in the gap, we present HybridQA, a new large-scale question-answering dataset that requires reasoning on heterogeneous information. Each question is aligned with a Wikipedia table and multiple free-form corpora linked with the entities in the table. The questions are designed to aggregate both tabular information and text information, i.e., lack of either form would render the question unanswerable. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in English.
## Dataset Structure ### Data Instances A typical example looks like this ``` { "question_id": "00009b9649d0dd0a", "question": "Who were the builders of the mosque in Herat with fire temples ?", "table_id": "List_of_mosques_in_Afghanistan_0", "answer_text": "Ghurids", "question_postag": "WP VBD DT NNS IN DT NN IN NNP IN NN NNS .", "table": { "url": "https://en.wikipedia.org/wiki/List_of_mosques_in_Afghanistan", "title": "List of mosques in Afghanistan", "header": [ "Name", "Province", "City", "Year", "Remarks" ], "data": [ { "value": "Kabul", "urls": [ { "summary": "Kabul ( Persian : کابل , romanized : Kābol , Pashto : کابل , romanized : Kābəl ) is the capital and largest city of Afghanistan...", "url": "/wiki/Kabul" } ] } ] }, "section_title": "", "section_text": "", "uid": "List_of_mosques_in_Afghanistan_0", "intro": "The following is an incomplete list of large mosques in Afghanistan:" } ``` ### Data Fields - `question_id` (str) - `question` (str) - `table_id` (str) - `answer_text` (str) - `question_postag` (str) - `table` (dict): - `url` (str) - `title` (str) - `header` (list of str) - `data` (list of dict): - `value` (str) - `urls` (list of dict): - `url` (str) - `summary` (str) - `section_title` (str) - `section_text` (str) - `uid` (str) - `intro` (str) ### Data Splits The dataset is split into `train`, `dev` and `test` splits. | | train | validation | test | | --------------- |------:|-----------:|-----:| | N. Instances | 62682 | 3466 | 3463 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). ### Citation Information [More Information Needed] ``` @article{chen2020hybridqa, title={HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data}, author={Chen, Wenhu and Zha, Hanwen and Chen, Zhiyu and Xiong, Wenhan and Wang, Hong and Wang, William}, journal={Findings of EMNLP 2020}, year={2020} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
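To make the nesting of the `table` field concrete, the sketch below flattens a (truncated) toy example mirroring the data instance above into `(cell value, linked passage summary)` pairs; `linked_passages` is an illustrative helper, not part of the dataset's API:

```python
# Toy example mirroring the nested structure shown in the Data Instances section.
example = {
    "question": "Who were the builders of the mosque in Herat with fire temples ?",
    "table": {
        "header": ["Name", "Province", "City", "Year", "Remarks"],
        "data": [
            {"value": "Kabul",
             "urls": [{"url": "/wiki/Kabul",
                       "summary": "Kabul is the capital and largest city of Afghanistan."}]},
        ],
    },
}

def linked_passages(ex):
    """Flatten the table into (cell value, linked passage summary) pairs."""
    return [(cell["value"], link["summary"])
            for cell in ex["table"]["data"]
            for link in cell["urls"]]

pairs = linked_passages(example)
print(pairs[0][0])  # -> Kabul
```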
hacoro/whisper-preprocessed-sample
--- license: mit dataset_info: features: - name: input_features sequence: sequence: float32 - name: labels sequence: int64 splits: - name: train num_bytes: 1537068032 num_examples: 1600 - name: test num_bytes: 192137248 num_examples: 200 - name: valid num_bytes: 192130616 num_examples: 200 download_size: 338284107 dataset_size: 1921335896 ---
TriadParty/deepmoney-sft
--- license: apache-2.0 --- This is the dataset that I used to train https://huggingface.co/TriadParty/deepmoney-34b-200k-chat-evaluator. Enjoy!
abneraigc/text2sql_chatgpt
--- license: apache-2.0 ---
BangumiBase/scottpilgrimtakesoff
--- license: mit tags: - art size_categories: - 1K<n<10K --- # Bangumi Image Base of Scott Pilgrim Takes Off This is the image base of the bangumi Scott Pilgrim Takes Off. We detected 31 characters and 1712 images in total. The full dataset is [here](all.zip). **Please note that these image bases are not guaranteed to be 100% clean; they may actually contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability). Here is the characters' preview: | # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 | |:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------| | 0 | 14 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) | | 1 | 46 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) | | 2 | 30 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) | | 3 | 73 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) | ![preview 
2](3/preview_2.png) | ![preview 3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) | | 4 | 29 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) | | 5 | 83 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) | | 6 | 28 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) | | 7 | 79 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) | | 8 | 91 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 6](8/preview_6.png) | ![preview 7](8/preview_7.png) | ![preview 8](8/preview_8.png) | | 9 | 279 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) | | 10 | 21 | 
[Download](10/dataset.zip) | ![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | ![preview 7](10/preview_7.png) | ![preview 8](10/preview_8.png) | | 11 | 59 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) | | 12 | 35 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) | | 13 | 29 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) | | 14 | 30 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) | | 15 | 80 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) | | 16 | 32 | [Download](16/dataset.zip) | ![preview 1](16/preview_1.png) | ![preview 2](16/preview_2.png) | ![preview 3](16/preview_3.png) | ![preview 4](16/preview_4.png) | 
![preview 5](16/preview_5.png) | ![preview 6](16/preview_6.png) | ![preview 7](16/preview_7.png) | ![preview 8](16/preview_8.png) | | 17 | 11 | [Download](17/dataset.zip) | ![preview 1](17/preview_1.png) | ![preview 2](17/preview_2.png) | ![preview 3](17/preview_3.png) | ![preview 4](17/preview_4.png) | ![preview 5](17/preview_5.png) | ![preview 6](17/preview_6.png) | ![preview 7](17/preview_7.png) | ![preview 8](17/preview_8.png) | | 18 | 11 | [Download](18/dataset.zip) | ![preview 1](18/preview_1.png) | ![preview 2](18/preview_2.png) | ![preview 3](18/preview_3.png) | ![preview 4](18/preview_4.png) | ![preview 5](18/preview_5.png) | ![preview 6](18/preview_6.png) | ![preview 7](18/preview_7.png) | ![preview 8](18/preview_8.png) | | 19 | 68 | [Download](19/dataset.zip) | ![preview 1](19/preview_1.png) | ![preview 2](19/preview_2.png) | ![preview 3](19/preview_3.png) | ![preview 4](19/preview_4.png) | ![preview 5](19/preview_5.png) | ![preview 6](19/preview_6.png) | ![preview 7](19/preview_7.png) | ![preview 8](19/preview_8.png) | | 20 | 175 | [Download](20/dataset.zip) | ![preview 1](20/preview_1.png) | ![preview 2](20/preview_2.png) | ![preview 3](20/preview_3.png) | ![preview 4](20/preview_4.png) | ![preview 5](20/preview_5.png) | ![preview 6](20/preview_6.png) | ![preview 7](20/preview_7.png) | ![preview 8](20/preview_8.png) | | 21 | 23 | [Download](21/dataset.zip) | ![preview 1](21/preview_1.png) | ![preview 2](21/preview_2.png) | ![preview 3](21/preview_3.png) | ![preview 4](21/preview_4.png) | ![preview 5](21/preview_5.png) | ![preview 6](21/preview_6.png) | ![preview 7](21/preview_7.png) | ![preview 8](21/preview_8.png) | | 22 | 11 | [Download](22/dataset.zip) | ![preview 1](22/preview_1.png) | ![preview 2](22/preview_2.png) | ![preview 3](22/preview_3.png) | ![preview 4](22/preview_4.png) | ![preview 5](22/preview_5.png) | ![preview 6](22/preview_6.png) | ![preview 7](22/preview_7.png) | ![preview 8](22/preview_8.png) | | 23 | 85 | 
[Download](23/dataset.zip) | ![preview 1](23/preview_1.png) | ![preview 2](23/preview_2.png) | ![preview 3](23/preview_3.png) | ![preview 4](23/preview_4.png) | ![preview 5](23/preview_5.png) | ![preview 6](23/preview_6.png) | ![preview 7](23/preview_7.png) | ![preview 8](23/preview_8.png) | | 24 | 78 | [Download](24/dataset.zip) | ![preview 1](24/preview_1.png) | ![preview 2](24/preview_2.png) | ![preview 3](24/preview_3.png) | ![preview 4](24/preview_4.png) | ![preview 5](24/preview_5.png) | ![preview 6](24/preview_6.png) | ![preview 7](24/preview_7.png) | ![preview 8](24/preview_8.png) | | 25 | 14 | [Download](25/dataset.zip) | ![preview 1](25/preview_1.png) | ![preview 2](25/preview_2.png) | ![preview 3](25/preview_3.png) | ![preview 4](25/preview_4.png) | ![preview 5](25/preview_5.png) | ![preview 6](25/preview_6.png) | ![preview 7](25/preview_7.png) | ![preview 8](25/preview_8.png) | | 26 | 10 | [Download](26/dataset.zip) | ![preview 1](26/preview_1.png) | ![preview 2](26/preview_2.png) | ![preview 3](26/preview_3.png) | ![preview 4](26/preview_4.png) | ![preview 5](26/preview_5.png) | ![preview 6](26/preview_6.png) | ![preview 7](26/preview_7.png) | ![preview 8](26/preview_8.png) | | 27 | 26 | [Download](27/dataset.zip) | ![preview 1](27/preview_1.png) | ![preview 2](27/preview_2.png) | ![preview 3](27/preview_3.png) | ![preview 4](27/preview_4.png) | ![preview 5](27/preview_5.png) | ![preview 6](27/preview_6.png) | ![preview 7](27/preview_7.png) | ![preview 8](27/preview_8.png) | | 28 | 30 | [Download](28/dataset.zip) | ![preview 1](28/preview_1.png) | ![preview 2](28/preview_2.png) | ![preview 3](28/preview_3.png) | ![preview 4](28/preview_4.png) | ![preview 5](28/preview_5.png) | ![preview 6](28/preview_6.png) | ![preview 7](28/preview_7.png) | ![preview 8](28/preview_8.png) | | 29 | 13 | [Download](29/dataset.zip) | ![preview 1](29/preview_1.png) | ![preview 2](29/preview_2.png) | ![preview 3](29/preview_3.png) | ![preview 4](29/preview_4.png) | 
![preview 5](29/preview_5.png) | ![preview 6](29/preview_6.png) | ![preview 7](29/preview_7.png) | ![preview 8](29/preview_8.png) | | noise | 119 | [Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) | ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
atgarcia/EMGSoundTrain2
--- dataset_info: features: - name: text dtype: string - name: audio struct: - name: array sequence: float64 - name: path dtype: string - name: sampling_rate dtype: int64 - name: emg sequence: sequence: float64 - name: emg_sound sequence: float64 splits: - name: train num_bytes: 2220850617 num_examples: 1000 download_size: 746818389 dataset_size: 2220850617 configs: - config_name: default data_files: - split: train path: data/train-* ---
atmallen/qm_alice_mixture_1.0e
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: alice_label dtype: bool - name: bob_label dtype: bool - name: difficulty dtype: int64 - name: statement dtype: string - name: choices sequence: string - name: character dtype: string - name: label dtype: class_label: names: '0': 'False' '1': 'True' splits: - name: train num_bytes: 22366655.5 num_examples: 200000 - name: validation num_bytes: 2254431.5 num_examples: 20000 - name: test num_bytes: 2248382.5 num_examples: 20000 download_size: 0 dataset_size: 26869469.5 --- # Dataset Card for "qm_alice__mixture_1.0e" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
baber/NYPL_entries
--- license: cc0-1.0 --- The year, volume, part and page fields are taken from the XML metadata and refer to the catalog publication. Each entry contains a registration number (`regNum`) and a registration date (`regDate`), both copied verbatim; the XML entry also standardizes these into the `regnum` and `date` attributes.
medieval-data/mgh-critical-edition-layout
--- license: cc-by-nc-4.0 task_categories: - object-detection language: - la tags: - object detection - critical edition - yolo size_categories: - n<1K configs: - config_name: default data_files: - split: train path: data/train-* - split: val path: data/val-* dataset_info: features: - name: image_id dtype: string - name: image dtype: image - name: width dtype: int64 - name: height dtype: int64 - name: objects struct: - name: bbox sequence: sequence: float64 - name: category sequence: int64 - name: id sequence: 'null' splits: - name: train num_bytes: 19639133.0 num_examples: 79 - name: val num_bytes: 4967295.0 num_examples: 21 download_size: 24112875 dataset_size: 24606428.0 --- # MGH Layout Detection Dataset ## Dataset Description ### General Description This dataset consists of scans from the MGH critical edition of Alcuin's letters, which were first edited by Ernestus Duemmler in 1895. The digital scans were sourced from the DMGH's repository, which can be accessed [here](https://www.dmgh.de/mgh_epp_4). The scans were annotated using CVAT, marking out two classes: the title of a letter and the body of the letter. ### Why was this dataset created? The primary motivation behind the creation of this dataset was to enhance the downstream task of OCR. OCR often returns errors due to interference from elements like marginalia and footnotes present in the scanned pages. By having accurate annotations for the title and body of the letters, users can efficiently isolate the main content of the letters and possibly achieve better OCR results. Future plans for this dataset include expanding the annotations to encompass footnotes and marginalia, thus further refining the demarcation between the main content and supplementary notes. 
### Classes Currently, the dataset has two annotated classes: - Title of the letter - Body of the letter Planned future additions include: - Footnotes - Marginalia ## Sample Annotation ![sample_annotation](sample_annotation.JPG) ## Biographical Information ### About Alcuin Alcuin of York (c. 735 – 804 AD) was an English scholar, clergyman, poet, and teacher. He was born in York and became a leading figure in the so-called "Carolingian renaissance." Alcuin made significant contributions to the educational and religious reforms initiated by Charlemagne, emphasizing the importance of classical studies. ### About Alcuin's Letters Alcuin's letters provide a crucial insight into the Carolingian world, highlighting the intellectual and religious discourse of the time. They serve as invaluable resources for understanding the interactions between some of the important figures of Charlemagne's court, the challenges they faced, and the solutions they proposed. The letters also offer a window into Alcuin's own thoughts, his relationships with peers and, most importantly, his students, and his role as an advisor to Charlemagne. ## Dataset and Annotation Details ### Annotation Process The scans of Alcuin's letters were annotated manually using the CVAT tool. The primary focus was to delineate the titles and bodies of the letters. This clear demarcation aids in improving the precision of OCR tools by allowing them to target specific regions in the scanned pages. ### Dataset Limitations As the dataset currently focuses only on titles and bodies of the letters, it may not fully address the challenges posed by marginalia and footnotes in OCR tasks. However, the planned expansion to include these classes will provide a more comprehensive solution. 
### Usage Given the non-commercial restriction associated with the source scans, users of this dataset should be mindful of the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) license under which it is distributed. ## Additional Information For more details on the dataset and to access the digital scans, visit the DMGH repository link provided above.
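Since the card is tagged `yolo` and the `objects` feature stores per-image `bbox`/`category` lists, a format-conversion helper is often the first step in a detection pipeline. The sketch below assumes pixel-space `[x_min, y_min, width, height]` boxes — an assumption to verify against the exported annotations before use, and the record values are illustrative rather than taken from the dataset:

```python
def to_yolo(bbox, img_w, img_h):
    """Convert a pixel-space [x_min, y_min, box_w, box_h] box to
    normalized YOLO (x_center, y_center, width, height)."""
    x_min, y_min, box_w, box_h = bbox
    return (
        (x_min + box_w / 2) / img_w,
        (y_min + box_h / 2) / img_h,
        box_w / img_w,
        box_h / img_h,
    )

# Hypothetical record shaped like the features above (values are
# illustrative, not taken from the dataset):
record = {
    "width": 800,
    "height": 1200,
    "objects": {"bbox": [[100.0, 50.0, 600.0, 80.0]], "category": [0]},
}
yolo_boxes = [
    (cat, *to_yolo(box, record["width"], record["height"]))
    for box, cat in zip(record["objects"]["bbox"], record["objects"]["category"])
]
print(yolo_boxes)
```

Each output tuple pairs the class index with the four normalized coordinates, which is the line layout YOLO-style training tools expect.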
iloncka/mosal
--- license: apache-2.0 ---
tumeteor/Security-TTP-Mapping
--- license: cc task_categories: - text-classification - question-answering - zero-shot-classification - sentence-similarity language: - en tags: - security - ttp mapping - mitre att&ck - extreme multilabel - multilabel classification pretty_name: Security Attack Pattern Recognition Datasets size_categories: - 1K<n<10K --- # The Security Attack Pattern (TTP) Recognition or Mapping Task [![License](https://img.shields.io/badge/license-CC--BY--NC--SA--4.0-lightgrey)](https://creativecommons.org/licenses/by/4.0/) [![arXiv](https://img.shields.io/badge/arXiv-2109.05105-29d634.svg)](https://arxiv.org/abs/2401.10337) We share in this repo the MITRE ATT&CK mapping datasets, each with `training`, `validation` and `test` splits. The datasets can be considered an emerging and challenging `multilabel classification` NLP task, with over 600 hierarchical classes. NOTE: due to their security nature, these datasets contain textual information about `malware` and other security aspects. ## Datasets ### TRAM This dataset belongs to [CTID](https://mitre-engenuity.org/cybersecurity/center-for-threat-informed-defense/) and was originally provided in this [GitHub repository](https://github.com/center-for-threat-informed-defense/tram). We processed the original files (i.e., gathered them from all sources, removed duplicates, resolved noisy or too-short text and noisy labels, and remapped to MITRE ATT&CK v12.0) and split them into training, dev and test splits. ### Procedure+ The dataset consists of two sub-datasets: - Procedures: belongs to [MITRE](https://github.com/mitre/cti/tree/master). All procedure examples from v12.0 are gathered, processed (i.e., markup removed) and split into training, dev and test splits. - Derived procedures: we crawled the URL references for each procedure example and extracted the original text from the articles determined to be relevant to the procedure examples. The texts are processed and split into training, dev and test splits. 
### Expert The dataset is constructed from a large pool of high-quality threat reports. The rich textual paragraphs are carefully selected and then annotated by seasoned security experts. The dataset is also pre-split into `training`, `dev` and `test` splits. There are ~4 labels per text in the `test` split, on average. ## Citations If you use the datasets in your research or want to refer to our work, please cite: ``` @inproceedings{nguyen-srndic-neth-ttpm, title = "Noise Contrastive Estimation-based Matching Framework for Low-resource Security Attack Pattern Recognition", author = "Nguyen, Tu and Šrndić, Nedim and Neth, Alexander", booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics", month = mar, year = "2024", publisher = "Association for Computational Linguistics", abstract = "Tactics, Techniques and Procedures (TTPs) represent sophisticated attack patterns in the cybersecurity domain, described encyclopedically in textual knowledge bases. Identifying TTPs in cybersecurity writing, often called TTP mapping, is an important and challenging task. Conventional learning approaches often target the problem in the classical multi-class or multilabel classification setting. This setting hinders the learning ability of the model due to a large number of classes (i.e., TTPs), the inevitable skewness of the label distribution and the complex hierarchical structure of the label space. We formulate the problem in a different learning paradigm, where the assignment of a text to a TTP label is decided by the direct semantic similarity between the two, thus reducing the complexity of competing solely over the large labeling space. 
To that end, we propose a neural matching architecture with an effective sampling-based learn-to-compare mechanism, facilitating the learning process of the matching model despite constrained resources.", } ``` ## License This project is licensed under the Creative Commons CC BY License, version 4.0.
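Given the multilabel framing (around 4 labels per text in the `test` split, on average), a natural sanity check when experimenting with these datasets is example-based F1 over predicted label sets. The sketch below is illustrative only — the metric choice and the TTP ids shown are assumptions, not the evaluation protocol from the paper:

```python
def example_f1(gold, pred):
    """Example-based F1 for one text: `gold` and `pred` are sets of TTP ids."""
    if not gold and not pred:
        return 1.0  # both empty: perfect agreement
    overlap = len(gold & pred)
    if overlap == 0:
        return 0.0
    p = overlap / len(pred)
    r = overlap / len(gold)
    return 2 * p * r / (p + r)

def mean_example_f1(gold_sets, pred_sets):
    """Average the per-example F1 scores over a dataset split."""
    scores = [example_f1(g, p) for g, p in zip(gold_sets, pred_sets)]
    return sum(scores) / len(scores)

# Hypothetical gold/predicted label sets for two texts:
gold = [{"T1059", "T1027"}, {"T1566"}]
pred = [{"T1059"}, {"T1566", "T1204"}]
print(mean_example_f1(gold, pred))
```

Micro- or macro-averaged label-based metrics are common alternatives; whichever is used, reporting should account for the skewed label distribution the paper highlights.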
lewtun/alps
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 27913166.0 num_examples: 10 download_size: 27914963 dataset_size: 27913166.0 --- # Dataset Card for "alps" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
boda/word_init_disjoint_half
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: labels dtype: string - name: clue dtype: string splits: - name: train num_bytes: 4163893.0 num_examples: 69339 - name: test num_bytes: 1306598.0 num_examples: 21707 download_size: 4312817 dataset_size: 5470491.0 --- # Dataset Card for "word_init_disjoint_half" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aisyahhrazak/crawl-aliffchannel.com
--- language: - ms --- About - Data scraped from https://aliffchannel.com
ccmusic-database/CNPM
--- license: mit task_categories: - audio-classification language: - zh - en tags: - music - art pretty_name: Chinese National Pentatonic Mode Dataset size_categories: - n<1K viewer: false --- # Dataset Card for Chinese National Pentatonic Mode Dataset The raw dataset includes audio recordings and annotations of five modes of Chinese music, encompassing the Gong, Shang, Jue, Zhi, and Yu modes. The musical pieces collected are mostly composed in pentatonic (five-note) scales, with some being hexatonic (six-note) or heptatonic (seven-note). The total number of recordings is 287, with an average duration of 179.5 s. ## Dataset Description - **Homepage:** <https://ccmusic-database.github.io> - **Repository:** <https://huggingface.co/datasets/ccmusic-database/CNPM> - **Paper:** <https://doi.org/10.5281/zenodo.5676893> - **Leaderboard:** <https://www.modelscope.cn/datasets/ccmusic/CNPM> - **Point of Contact:** Chinese Ethnic Pentatonic Scale; Database; Music Information Retrieval; Pentatonic Therapy ### Dataset Summary The expanded dataset is integrated into our database, and each data entry consists of ten columns: the first holds the audio recording in .wav format, sampled at 22,050 Hz, and the second its mel spectrogram. The third and fourth give the title of the piece and the artist. The subsequent columns record the system, tonic, pattern, and type of the musical piece, respectively. The ninth column contains an additional Chinese name of the mode, while the final column indicates the duration of the audio in seconds. 
### Supported Tasks and Leaderboards MIR, audio classification ### Languages Chinese, English ## Dataset Structure | audio(.wav, 22050Hz) | mel(.jpg, 22050Hz) | title | artist | system | tonic | pattern | type | mode_name | length | | :-----------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------: | :----: | :----: | :------: | :------: | :-----: | :-----: | :-------: | :----: | | <audio controls src="https://huggingface.co/datasets/ccmusic-database/CNPM/resolve/main/data/%E5%88%AE%E5%9C%B0%E9%A3%8E%20-%20%E5%BB%96%E8%8E%8E.mp3"> | <img src="./data/刮地风 - 廖莎.jpg"> | string | string | 12-class | 12-class | 5-class | 6-class | string | string | | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ### Data Instances .zip(.wav), .csv ### Data Fields Mode type, name, performer, album name, national mode name, Tonggong system, audio links ### Data Splits train ## Usage ```python
from datasets import load_dataset

dataset = load_dataset("ccmusic-database/CNPM", split="train")
for data in dataset:
    print(data)
``` ## Maintenance ```bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/CNPM
cd CNPM
``` ## Dataset Creation ### Curation Rationale Lack of a dataset for the Chinese National Pentatonic Mode ### Source Data #### Initial Data Collection and Normalization Weixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li, Monan Zhou #### Who are the source language producers? Teachers & students from FD-LAMT, CCOM, SCCM ### Annotations #### Annotation process Based on the working approach of combining manual labeling with computer assistance in the construction of the World Music Database, this database collects and labels the audio of the five modes (including five-tone, six-tone and seven-tone scales) of "Gong, Shang, Jue, Zhi and Yu". 
At the same time, it makes a detailed analysis of the judgment of Chinese national pentatonic modes and finds application scenarios and technical models, which can provide raw data for the analysis and retrieval of Chinese national music characteristics. #### Who are the annotators? Teachers & students from FD-LAMT, CCOM, SCCM ### Personal and Sensitive Information Due to copyright reasons, only some of the audio can be released directly. This part of the audio is the Shang mode and Jue mode tracks performed by professional performers. The rest of the audio needs to be searched and downloaded by the dataset user from music platforms such as Kugou Music, NetEase Cloud Music and QQ Music, based on song titles, artists and album names. ## Considerations for Using the Data ### Social Impact of Dataset Promoting the development of the music AI industry ### Discussion of Biases Only for Traditional Chinese Instruments ### Other Known Limitations Only for Pentatonic Mode ## Additional Information ### Dataset Curators Weixin Ren, Mingjin Che, Zhaowen Wang, Qinyu Li, Jiaye Hu, Fan Xia, Wei Li. 
### Evaluation [任伟鑫,车明锦,汪照文,孟文武,李沁雨,胡佳弋,夏凡,李伟.CNPM Database:一个用于计算音乐学的中国民族五声调式数据库[J].复旦学报(自然科学版),2022,61(05):555-563.DOI:10.15943/j.cnki.fdxb-jns.20221017.008.](https://kns.cnki.net/kcms2/article/abstract?v=lD5CuVSaeOtw0E2oWliKSMrLiLDt9iwvkwoTgSclPspwUECyt4uNZ6T7DCLlfwMqohXCQXkFzf_XjAUOQ3CAkhPqNj20H8eG9UfUVuHEey0x7Kqp32fMlJiM9xuPtdVMvC1PB2qW0qI=&uniplatform=NZKPT&src=copy) ### Licensing Information ``` MIT License Copyright (c) FD-LAMT Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ### Citation Information ```bibtex @dataset{zhaorui_liu_2021_5676893, author = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han}, title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research}, month = {mar}, year = {2024}, publisher = {HuggingFace}, version = {1.2}, url = {https://huggingface.co/ccmusic-database} } ``` ### Contributions Provide a dataset for the Chinese National Pentatonic Mode
gcaillaut/frwiki_el
--- annotations_creators: - crowdsourced language_creators: - machine-generated language: - fr license: - wtfpl multilinguality: - monolingual pretty_name: French Wikipedia dataset for Entity Linking size_categories: - 1M<n<10M source_datasets: - original task_categories: - token-classification task_ids: [] --- # Dataset Card for frwiki_good_pages_el ## Dataset Description - Repository: [frwiki_el](https://github.com/GaaH/frwiki_el) - Point of Contact: [Gaëtan Caillaut](mailto:g.caillaut@brgm.fr) ### Dataset Summary This dataset contains articles from the French Wikipedia. It is intended to be used to train Entity Linking (EL) systems. Links in articles are used to detect named entities. The `frwiki` dataset contains the sentences of each Wikipedia page. The `entities` dataset contains a description of each Wikipedia page. ### Languages - French ## Dataset Structure ### frwiki ```
{
  "name": "Title of the page",
  "wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
  "wikipedia_id": "Identifier of the Wikipedia page",
  "wikipedia_url": "URL to the Wikipedia page",
  "wikidata_url": "URL to the Wikidata page. Can be null.",
  "sentences": [
    {
      "text": "text of the current sentence",
      "ner": ["list", "of", "ner", "labels"],
      "mention_mappings": [
        (start_of_first_mention, end_of_first_mention),
        (start_of_second_mention, end_of_second_mention)
      ],
      "el_wikidata_id": ["wikidata id of first mention", "wikidata id of second mention"],
      "el_wikipedia_id": [wikipedia id of first mention, wikipedia id of second mention],
      "el_wikipedia_title": ["wikipedia title of first mention", "wikipedia title of second mention"]
    }
  ],
  "words": ["words", "in", "the", "sentence"],
  "ner": ["ner", "labels", "of", "each", "word"],
  "el": ["el", "labels", "of", "each", "word"]
}
``` ### entities ```
{
  "name": "Title of the page",
  "wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
  "wikipedia_id": "Identifier of the Wikipedia page",
  "wikipedia_url": "URL to the Wikipedia page",
  "wikidata_url": "URL to the Wikidata page. Can be null.",
  "description": "Description of the entity"
}
```
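The `mention_mappings` field makes it straightforward to pull out mention surface forms alongside their linked pages. The sketch below assumes the mappings are `(start, end)` character offsets into `text` with an exclusive end — an assumption to verify against real records — and the sample sentence is hypothetical:

```python
def extract_mentions(sentence):
    """Pair each mention's surface form with its linked Wikipedia title.

    Assumes `mention_mappings` holds (start, end) character offsets into
    `text`, end-exclusive -- verify this against real records before use.
    """
    text = sentence["text"]
    return [
        (text[start:end], title)
        for (start, end), title in zip(
            sentence["mention_mappings"], sentence["el_wikipedia_title"]
        )
    ]

# Hypothetical record shaped like the `sentences` entries described above:
sent = {
    "text": "Paris est la capitale de la France.",
    "mention_mappings": [(0, 5), (28, 34)],
    "el_wikipedia_title": ["Paris", "France"],
}
print(extract_mentions(sent))
```

The same pairing works for `el_wikidata_id` and `el_wikipedia_id` when Wikidata or numeric page identifiers are needed instead of titles.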
trng1305/sroie_dataset
--- dataset_info: features: - name: id dtype: string - name: words sequence: string - name: bboxes sequence: sequence: int64 - name: ner_tags sequence: int64 - name: image_path dtype: string splits: - name: train num_bytes: 4001095 num_examples: 626 - name: test num_bytes: 2192614 num_examples: 347 download_size: 992661 dataset_size: 6193709 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
CyberHarem/leipzig_azurlane
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of leipzig/ライプツィヒ/莱比锡 (Azur Lane) This is the dataset of leipzig/ライプツィヒ/莱比锡 (Azur Lane), containing 64 images and their tags. The core tags of this character are `breasts, blonde_hair, short_hair, hat, hair_bun, bangs, single_hair_bun, purple_eyes, large_breasts, medium_breasts, ribbon, black_headwear, pink_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 64 | 79.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leipzig_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 64 | 45.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leipzig_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 158 | 99.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leipzig_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 64 | 70.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leipzig_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 158 | 143.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leipzig_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/leipzig_azurlane', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, open_mouth, solo, upper_body, bare_shoulders, black_gloves, blush, cleavage, simple_background, white_background, beret, hair_ribbon, red_eyes | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, looking_at_viewer, solo, bare_shoulders, black_gloves, cleavage, dress, black_thighhighs, blush, open_mouth, simple_background, white_background | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, ass, bare_shoulders, black_thighhighs, blush, looking_at_viewer, solo, white_background, black_gloves, cannon, closed_mouth, from_behind, machinery, simple_background, turret, zettai_ryouiki, black_dress, pleated_dress, short_dress, black_skirt, full_body, hair_ribbon, high_heels, looking_back, red_ribbon, rigging, rudder_footwear, sleeveless, twisted_torso | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | open_mouth | solo | 
upper_body | bare_shoulders | black_gloves | blush | cleavage | simple_background | white_background | beret | hair_ribbon | red_eyes | dress | black_thighhighs | ass | cannon | closed_mouth | from_behind | machinery | turret | zettai_ryouiki | black_dress | pleated_dress | short_dress | black_skirt | full_body | high_heels | looking_back | red_ribbon | rigging | rudder_footwear | sleeveless | twisted_torso | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------------|:-------|:-------------|:-----------------|:---------------|:--------|:-----------|:--------------------|:-------------------|:--------|:--------------|:-----------|:--------|:-------------------|:------|:---------|:---------------|:--------------|:------------|:---------|:-----------------|:--------------|:----------------|:--------------|:--------------|:------------|:-------------|:---------------|:-------------|:----------|:------------------|:-------------|:----------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | | X | X | X | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | X | | X | X | X | | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
bigheiniuJ/JimmyLuAugOneShot
--- dataset_info: features: - name: output dtype: string - name: input dtype: string - name: seed dtype: string - name: split dtype: string - name: task dtype: string - name: id dtype: int64 - name: aug_type dtype: string - name: aug_time dtype: int64 - name: options sequence: string splits: - name: train num_bytes: 85056907.26195276 num_examples: 210791 download_size: 20781191 dataset_size: 85056907.26195276 configs: - config_name: default data_files: - split: train path: data/train-* ---
lm1b
--- pretty_name: One Billion Word Language Model Benchmark paperswithcode_id: billion-word-benchmark dataset_info: features: - name: text dtype: string config_name: plain_text splits: - name: train num_bytes: 4238206516 num_examples: 30301028 - name: test num_bytes: 42942045 num_examples: 306688 download_size: 1792209805 dataset_size: 4281148561 task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling language: - en --- # Dataset Card for One Billion Word Language Model Benchmark ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [statmt](http://www.statmt.org/lm-benchmark/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [arxiv](https://arxiv.org/pdf/1312.3005v3.pdf) - **Point of Contact:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.79 GB - **Size of the generated dataset:** 4.28 GB - **Total amount of disk used:** 6.07 GB ### Dataset Summary A benchmark corpus to be used for measuring progress in statistical language modeling. This has almost one billion words in the training data. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 1.79 GB - **Size of the generated dataset:** 4.28 GB - **Total amount of disk used:** 6.07 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "While athletes in different professions dealt with doping scandals and other controversies , Woods continued to do what he did best : dominate the field of professional golf and rake in endorsements ." } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `text`: a `string` feature. ### Data Splits | name | train | test | |------------|----------|--------| | plain_text | 30301028 | 306688 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations The dataset doesn't contain annotations. ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ```bibtex
@misc{chelba2014billion,
  title={One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling},
  author={Ciprian Chelba and Tomas Mikolov and Mike Schuster and Qi Ge and Thorsten Brants and Phillipp Koehn and Tony Robinson},
  year={2014},
  eprint={1312.3005},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
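The "almost one billion words" figure refers to whitespace-delimited tokens in the `text` field; counting them over a batch of examples can be sketched as follows (a minimal in-memory illustration, not a pass over the full 4.28 GB corpus, and the sample texts are abridged):

```python
def count_words(examples):
    """Total whitespace-delimited tokens across the `text` field of examples."""
    return sum(len(ex["text"].split()) for ex in examples)

# Small in-memory batch shaped like the dataset's examples:
batch = [
    {"text": "While athletes in different professions dealt with doping scandals"},
    {"text": "Woods continued to dominate the field of professional golf"},
]
print(count_words(batch))  # 18
```

Over the real corpus the same loop would be applied to a streamed split rather than an in-memory list.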
quocanh34/cv_13_fake
---
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 238031.0
    num_examples: 5
  - name: validation
    num_bytes: 111976.0
    num_examples: 5
  download_size: 350428
  dataset_size: 350007.0
---
# Dataset Card for "cv_13_fake"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
language-plus-molecules/LPM-24_train
---
dataset_info:
  features:
  - name: molecule
    dtype: string
  - name: caption
    dtype: string
  splits:
  - name: train
    num_bytes: 54111487
    num_examples: 160560
  - name: split_train
    num_bytes: 43016879
    num_examples: 126864
  - name: split_valid
    num_bytes: 11094608
    num_examples: 33696
  download_size: 24292581
  dataset_size: 108222974
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: split_train
    path: data/split_train-*
  - split: split_valid
    path: data/split_valid-*
---
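The per-split counts in the `dataset_info` block above suggest that `split_train` and `split_valid` together partition the full `train` split exactly; a quick check of the numbers:

```python
# Per-split example counts taken from the dataset_info block above.
full_train = 160_560
split_train = 126_864
split_valid = 33_696

# split_train and split_valid form an exact partition of the full train split.
assert split_train + split_valid == full_train
print("split_train + split_valid covers the full train split")
```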
kenhktsui/squad_v2_factuality_v2
---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: label
    dtype: string
  - name: entity_modified
    dtype: string
  - name: new_entity
    dtype: string
  - name: ner_type
    dtype: string
  - name: top_k
    dtype: int64
  splits:
  - name: train
    num_bytes: 13981154
    num_examples: 17753
  download_size: 8852587
  dataset_size: 13981154
---
# Dataset Card for "squad_v2_factuality_v2"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
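From the byte and example counts in the `dataset_info` block above, the average serialized record is well under a kilobyte, i.e. these are short context passages rather than long documents:

```python
# Totals taken from the dataset_info block above.
num_bytes = 13_981_154
num_examples = 17_753

avg_bytes = num_bytes / num_examples

# Roughly 788 bytes per serialized example on average.
print(f"average bytes per example: {avg_bytes:.1f}")
```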
bigcode/the-stack-v2-dedup
---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: The-Stack-v2
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
extra_gated_prompt: |-
  ## Terms of Use for The Stack v2

  The Stack v2 dataset is a collection of source code in over 600 programming languages. We ask that you read and acknowledge the following points before using the dataset:
  1. Downloading the dataset in bulk requires an agreement with SoftwareHeritage and INRIA. Contact [datasets@softwareheritage.org](mailto:datasets@softwareheritage.org?subject=TheStackV2%20request%20for%20dataset%20access%20information) for more information.
  2. If you are using the dataset to train models you must adhere to the SoftwareHeritage [principles for language model training](https://www.softwareheritage.org/2023/10/19/swh-statement-on-llm-for-code/).
  3. The Stack v2 is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
  4. The Stack v2 is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack v2 to the most recent usable version.

  By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
extra_gated_fields: Email: text I have read the License and agree with its terms: checkbox dataset_info: features: - name: blob_id dtype: string - name: directory_id dtype: string - name: path dtype: string - name: content_id dtype: string - name: detected_licenses sequence: string - name: license_type dtype: string - name: repo_name dtype: string - name: snapshot_id dtype: string - name: revision_id dtype: string - name: branch_name dtype: string - name: visit_date dtype: timestamp[ns] - name: revision_date dtype: timestamp[ns] - name: committer_date dtype: timestamp[ns] - name: github_id dtype: int64 - name: star_events_count dtype: int64 - name: fork_events_count dtype: int64 - name: gha_license_id dtype: string - name: gha_event_created_at dtype: timestamp[ns] - name: gha_created_at dtype: timestamp[ns] - name: gha_language dtype: string - name: src_encoding dtype: string - name: language dtype: string - name: is_vendor dtype: bool - name: is_generated dtype: bool - name: length_bytes dtype: int64 - name: extension dtype: string - name: filename dtype: string configs: - config_name: default default: true data_files: - split: train path: "data/*/*.parquet" - config_name: "1C_Enterprise" data_files: - split: train path: "data/1C_Enterprise/*.parquet" - config_name: "2-Dimensional_Array" data_files: - split: train path: "data/2-Dimensional_Array/*.parquet" - config_name: "4D" data_files: - split: train path: "data/4D/*.parquet" - config_name: "ABAP" data_files: - split: train path: "data/ABAP/*.parquet" - config_name: "ABAP_CDS" data_files: - split: train path: "data/ABAP_CDS/*.parquet" - config_name: "ABNF" data_files: - split: train path: "data/ABNF/*.parquet" - config_name: "AGS_Script" data_files: - split: train path: "data/AGS_Script/*.parquet" - config_name: "AIDL" data_files: - split: train path: "data/AIDL/*.parquet" - config_name: "AL" data_files: - split: train path: "data/AL/*.parquet" - config_name: "AMPL" data_files: - split: train path: 
"data/AMPL/*.parquet" - config_name: "ANTLR" data_files: - split: train path: "data/ANTLR/*.parquet" - config_name: "API_Blueprint" data_files: - split: train path: "data/API_Blueprint/*.parquet" - config_name: "APL" data_files: - split: train path: "data/APL/*.parquet" - config_name: "ASL" data_files: - split: train path: "data/ASL/*.parquet" - config_name: "ASN.1" data_files: - split: train path: "data/ASN.1/*.parquet" - config_name: "ASP.NET" data_files: - split: train path: "data/ASP.NET/*.parquet" - config_name: "ATS" data_files: - split: train path: "data/ATS/*.parquet" - config_name: "ActionScript" data_files: - split: train path: "data/ActionScript/*.parquet" - config_name: "Ada" data_files: - split: train path: "data/Ada/*.parquet" - config_name: "Adobe_Font_Metrics" data_files: - split: train path: "data/Adobe_Font_Metrics/*.parquet" - config_name: "Agda" data_files: - split: train path: "data/Agda/*.parquet" - config_name: "Alloy" data_files: - split: train path: "data/Alloy/*.parquet" - config_name: "Alpine_Abuild" data_files: - split: train path: "data/Alpine_Abuild/*.parquet" - config_name: "Altium_Designer" data_files: - split: train path: "data/Altium_Designer/*.parquet" - config_name: "AngelScript" data_files: - split: train path: "data/AngelScript/*.parquet" - config_name: "Ant_Build_System" data_files: - split: train path: "data/Ant_Build_System/*.parquet" - config_name: "Antlers" data_files: - split: train path: "data/Antlers/*.parquet" - config_name: "ApacheConf" data_files: - split: train path: "data/ApacheConf/*.parquet" - config_name: "Apex" data_files: - split: train path: "data/Apex/*.parquet" - config_name: "Apollo_Guidance_Computer" data_files: - split: train path: "data/Apollo_Guidance_Computer/*.parquet" - config_name: "AppleScript" data_files: - split: train path: "data/AppleScript/*.parquet" - config_name: "Arc" data_files: - split: train path: "data/Arc/*.parquet" - config_name: "AsciiDoc" data_files: - split: train path: 
"data/AsciiDoc/*.parquet" - config_name: "AspectJ" data_files: - split: train path: "data/AspectJ/*.parquet" - config_name: "Assembly" data_files: - split: train path: "data/Assembly/*.parquet" - config_name: "Astro" data_files: - split: train path: "data/Astro/*.parquet" - config_name: "Asymptote" data_files: - split: train path: "data/Asymptote/*.parquet" - config_name: "Augeas" data_files: - split: train path: "data/Augeas/*.parquet" - config_name: "AutoHotkey" data_files: - split: train path: "data/AutoHotkey/*.parquet" - config_name: "AutoIt" data_files: - split: train path: "data/AutoIt/*.parquet" - config_name: "Avro_IDL" data_files: - split: train path: "data/Avro_IDL/*.parquet" - config_name: "Awk" data_files: - split: train path: "data/Awk/*.parquet" - config_name: "BASIC" data_files: - split: train path: "data/BASIC/*.parquet" - config_name: "Ballerina" data_files: - split: train path: "data/Ballerina/*.parquet" - config_name: "Batchfile" data_files: - split: train path: "data/Batchfile/*.parquet" - config_name: "Beef" data_files: - split: train path: "data/Beef/*.parquet" - config_name: "Befunge" data_files: - split: train path: "data/Befunge/*.parquet" - config_name: "Berry" data_files: - split: train path: "data/Berry/*.parquet" - config_name: "BibTeX" data_files: - split: train path: "data/BibTeX/*.parquet" - config_name: "Bicep" data_files: - split: train path: "data/Bicep/*.parquet" - config_name: "Bikeshed" data_files: - split: train path: "data/Bikeshed/*.parquet" - config_name: "Bison" data_files: - split: train path: "data/Bison/*.parquet" - config_name: "BitBake" data_files: - split: train path: "data/BitBake/*.parquet" - config_name: "Blade" data_files: - split: train path: "data/Blade/*.parquet" - config_name: "BlitzBasic" data_files: - split: train path: "data/BlitzBasic/*.parquet" - config_name: "BlitzMax" data_files: - split: train path: "data/BlitzMax/*.parquet" - config_name: "Bluespec" data_files: - split: train path: 
"data/Bluespec/*.parquet" - config_name: "Boo" data_files: - split: train path: "data/Boo/*.parquet" - config_name: "Boogie" data_files: - split: train path: "data/Boogie/*.parquet" - config_name: "Brainfuck" data_files: - split: train path: "data/Brainfuck/*.parquet" - config_name: "BrighterScript" data_files: - split: train path: "data/BrighterScript/*.parquet" - config_name: "Brightscript" data_files: - split: train path: "data/Brightscript/*.parquet" - config_name: "Browserslist" data_files: - split: train path: "data/Browserslist/*.parquet" - config_name: "C" data_files: - split: train path: "data/C/*.parquet" - config_name: "C++" data_files: - split: train path: "data/C++/*.parquet" - config_name: "C-ObjDump" data_files: - split: train path: "data/C-ObjDump/*.parquet" - config_name: "C-Sharp" data_files: - split: train path: "data/C-Sharp/*.parquet" - config_name: "C2hs_Haskell" data_files: - split: train path: "data/C2hs_Haskell/*.parquet" - config_name: "CAP_CDS" data_files: - split: train path: "data/CAP_CDS/*.parquet" - config_name: "CIL" data_files: - split: train path: "data/CIL/*.parquet" - config_name: "CLIPS" data_files: - split: train path: "data/CLIPS/*.parquet" - config_name: "CMake" data_files: - split: train path: "data/CMake/*.parquet" - config_name: "COBOL" data_files: - split: train path: "data/COBOL/*.parquet" - config_name: "CODEOWNERS" data_files: - split: train path: "data/CODEOWNERS/*.parquet" - config_name: "COLLADA" data_files: - split: train path: "data/COLLADA/*.parquet" - config_name: "CSON" data_files: - split: train path: "data/CSON/*.parquet" - config_name: "CSS" data_files: - split: train path: "data/CSS/*.parquet" - config_name: "CSV" data_files: - split: train path: "data/CSV/*.parquet" - config_name: "CUE" data_files: - split: train path: "data/CUE/*.parquet" - config_name: "CWeb" data_files: - split: train path: "data/CWeb/*.parquet" - config_name: "Cabal_Config" data_files: - split: train path: "data/Cabal_Config/*.parquet" 
- config_name: "Cadence" data_files: - split: train path: "data/Cadence/*.parquet" - config_name: "Cairo" data_files: - split: train path: "data/Cairo/*.parquet" - config_name: "CameLIGO" data_files: - split: train path: "data/CameLIGO/*.parquet" - config_name: "Cap-n_Proto" data_files: - split: train path: "data/Cap-n_Proto/*.parquet" - config_name: "CartoCSS" data_files: - split: train path: "data/CartoCSS/*.parquet" - config_name: "Ceylon" data_files: - split: train path: "data/Ceylon/*.parquet" - config_name: "Chapel" data_files: - split: train path: "data/Chapel/*.parquet" - config_name: "Charity" data_files: - split: train path: "data/Charity/*.parquet" - config_name: "Checksums" data_files: - split: train path: "data/Checksums/*.parquet" - config_name: "ChucK" data_files: - split: train path: "data/ChucK/*.parquet" - config_name: "Cirru" data_files: - split: train path: "data/Cirru/*.parquet" - config_name: "Clarion" data_files: - split: train path: "data/Clarion/*.parquet" - config_name: "Clarity" data_files: - split: train path: "data/Clarity/*.parquet" - config_name: "Classic_ASP" data_files: - split: train path: "data/Classic_ASP/*.parquet" - config_name: "Clean" data_files: - split: train path: "data/Clean/*.parquet" - config_name: "Click" data_files: - split: train path: "data/Click/*.parquet" - config_name: "Clojure" data_files: - split: train path: "data/Clojure/*.parquet" - config_name: "Closure_Templates" data_files: - split: train path: "data/Closure_Templates/*.parquet" - config_name: "Cloud_Firestore_Security_Rules" data_files: - split: train path: "data/Cloud_Firestore_Security_Rules/*.parquet" - config_name: "CoNLL-U" data_files: - split: train path: "data/CoNLL-U/*.parquet" - config_name: "CodeQL" data_files: - split: train path: "data/CodeQL/*.parquet" - config_name: "CoffeeScript" data_files: - split: train path: "data/CoffeeScript/*.parquet" - config_name: "ColdFusion" data_files: - split: train path: "data/ColdFusion/*.parquet" - 
config_name: "ColdFusion_CFC" data_files: - split: train path: "data/ColdFusion_CFC/*.parquet" - config_name: "Common_Lisp" data_files: - split: train path: "data/Common_Lisp/*.parquet" - config_name: "Common_Workflow_Language" data_files: - split: train path: "data/Common_Workflow_Language/*.parquet" - config_name: "Component_Pascal" data_files: - split: train path: "data/Component_Pascal/*.parquet" - config_name: "Cool" data_files: - split: train path: "data/Cool/*.parquet" - config_name: "Coq" data_files: - split: train path: "data/Coq/*.parquet" - config_name: "Creole" data_files: - split: train path: "data/Creole/*.parquet" - config_name: "Crystal" data_files: - split: train path: "data/Crystal/*.parquet" - config_name: "Csound" data_files: - split: train path: "data/Csound/*.parquet" - config_name: "Csound_Document" data_files: - split: train path: "data/Csound_Document/*.parquet" - config_name: "Csound_Score" data_files: - split: train path: "data/Csound_Score/*.parquet" - config_name: "Cuda" data_files: - split: train path: "data/Cuda/*.parquet" - config_name: "Cue_Sheet" data_files: - split: train path: "data/Cue_Sheet/*.parquet" - config_name: "Curry" data_files: - split: train path: "data/Curry/*.parquet" - config_name: "Cycript" data_files: - split: train path: "data/Cycript/*.parquet" - config_name: "Cython" data_files: - split: train path: "data/Cython/*.parquet" - config_name: "D" data_files: - split: train path: "data/D/*.parquet" - config_name: "DIGITAL_Command_Language" data_files: - split: train path: "data/DIGITAL_Command_Language/*.parquet" - config_name: "DM" data_files: - split: train path: "data/DM/*.parquet" - config_name: "DNS_Zone" data_files: - split: train path: "data/DNS_Zone/*.parquet" - config_name: "DTrace" data_files: - split: train path: "data/DTrace/*.parquet" - config_name: "Dafny" data_files: - split: train path: "data/Dafny/*.parquet" - config_name: "Darcs_Patch" data_files: - split: train path: "data/Darcs_Patch/*.parquet" - 
config_name: "Dart" data_files: - split: train path: "data/Dart/*.parquet" - config_name: "DataWeave" data_files: - split: train path: "data/DataWeave/*.parquet" - config_name: "Debian_Package_Control_File" data_files: - split: train path: "data/Debian_Package_Control_File/*.parquet" - config_name: "DenizenScript" data_files: - split: train path: "data/DenizenScript/*.parquet" - config_name: "Dhall" data_files: - split: train path: "data/Dhall/*.parquet" - config_name: "Diff" data_files: - split: train path: "data/Diff/*.parquet" - config_name: "DirectX_3D_File" data_files: - split: train path: "data/DirectX_3D_File/*.parquet" - config_name: "Dockerfile" data_files: - split: train path: "data/Dockerfile/*.parquet" - config_name: "Dogescript" data_files: - split: train path: "data/Dogescript/*.parquet" - config_name: "Dylan" data_files: - split: train path: "data/Dylan/*.parquet" - config_name: "E" data_files: - split: train path: "data/E/*.parquet" - config_name: "E-mail" data_files: - split: train path: "data/E-mail/*.parquet" - config_name: "EBNF" data_files: - split: train path: "data/EBNF/*.parquet" - config_name: "ECL" data_files: - split: train path: "data/ECL/*.parquet" - config_name: "ECLiPSe" data_files: - split: train path: "data/ECLiPSe/*.parquet" - config_name: "EJS" data_files: - split: train path: "data/EJS/*.parquet" - config_name: "EQ" data_files: - split: train path: "data/EQ/*.parquet" - config_name: "Eagle" data_files: - split: train path: "data/Eagle/*.parquet" - config_name: "Earthly" data_files: - split: train path: "data/Earthly/*.parquet" - config_name: "Easybuild" data_files: - split: train path: "data/Easybuild/*.parquet" - config_name: "Ecere_Projects" data_files: - split: train path: "data/Ecere_Projects/*.parquet" - config_name: "EditorConfig" data_files: - split: train path: "data/EditorConfig/*.parquet" - config_name: "Edje_Data_Collection" data_files: - split: train path: "data/Edje_Data_Collection/*.parquet" - config_name: "Eiffel" 
data_files: - split: train path: "data/Eiffel/*.parquet" - config_name: "Elixir" data_files: - split: train path: "data/Elixir/*.parquet" - config_name: "Elm" data_files: - split: train path: "data/Elm/*.parquet" - config_name: "Elvish" data_files: - split: train path: "data/Elvish/*.parquet" - config_name: "Emacs_Lisp" data_files: - split: train path: "data/Emacs_Lisp/*.parquet" - config_name: "EmberScript" data_files: - split: train path: "data/EmberScript/*.parquet" - config_name: "Erlang" data_files: - split: train path: "data/Erlang/*.parquet" - config_name: "Euphoria" data_files: - split: train path: "data/Euphoria/*.parquet" - config_name: "F-Sharp" data_files: - split: train path: "data/F-Sharp/*.parquet" - config_name: "F-Star" data_files: - split: train path: "data/F-Star/*.parquet" - config_name: "FIGlet_Font" data_files: - split: train path: "data/FIGlet_Font/*.parquet" - config_name: "FLUX" data_files: - split: train path: "data/FLUX/*.parquet" - config_name: "Factor" data_files: - split: train path: "data/Factor/*.parquet" - config_name: "Fancy" data_files: - split: train path: "data/Fancy/*.parquet" - config_name: "Fantom" data_files: - split: train path: "data/Fantom/*.parquet" - config_name: "Faust" data_files: - split: train path: "data/Faust/*.parquet" - config_name: "Fennel" data_files: - split: train path: "data/Fennel/*.parquet" - config_name: "Filebench_WML" data_files: - split: train path: "data/Filebench_WML/*.parquet" - config_name: "Filterscript" data_files: - split: train path: "data/Filterscript/*.parquet" - config_name: "Fluent" data_files: - split: train path: "data/Fluent/*.parquet" - config_name: "Formatted" data_files: - split: train path: "data/Formatted/*.parquet" - config_name: "Forth" data_files: - split: train path: "data/Forth/*.parquet" - config_name: "Fortran" data_files: - split: train path: "data/Fortran/*.parquet" - config_name: "Fortran_Free_Form" data_files: - split: train path: "data/Fortran_Free_Form/*.parquet" - 
config_name: "FreeBasic" data_files: - split: train path: "data/FreeBasic/*.parquet" - config_name: "FreeMarker" data_files: - split: train path: "data/FreeMarker/*.parquet" - config_name: "Frege" data_files: - split: train path: "data/Frege/*.parquet" - config_name: "Futhark" data_files: - split: train path: "data/Futhark/*.parquet" - config_name: "G-code" data_files: - split: train path: "data/G-code/*.parquet" - config_name: "GAML" data_files: - split: train path: "data/GAML/*.parquet" - config_name: "GAMS" data_files: - split: train path: "data/GAMS/*.parquet" - config_name: "GAP" data_files: - split: train path: "data/GAP/*.parquet" - config_name: "GCC_Machine_Description" data_files: - split: train path: "data/GCC_Machine_Description/*.parquet" - config_name: "GDB" data_files: - split: train path: "data/GDB/*.parquet" - config_name: "GDScript" data_files: - split: train path: "data/GDScript/*.parquet" - config_name: "GEDCOM" data_files: - split: train path: "data/GEDCOM/*.parquet" - config_name: "GLSL" data_files: - split: train path: "data/GLSL/*.parquet" - config_name: "GN" data_files: - split: train path: "data/GN/*.parquet" - config_name: "GSC" data_files: - split: train path: "data/GSC/*.parquet" - config_name: "Game_Maker_Language" data_files: - split: train path: "data/Game_Maker_Language/*.parquet" - config_name: "Gemfile.lock" data_files: - split: train path: "data/Gemfile.lock/*.parquet" - config_name: "Gemini" data_files: - split: train path: "data/Gemini/*.parquet" - config_name: "Genero" data_files: - split: train path: "data/Genero/*.parquet" - config_name: "Genero_Forms" data_files: - split: train path: "data/Genero_Forms/*.parquet" - config_name: "Genie" data_files: - split: train path: "data/Genie/*.parquet" - config_name: "Genshi" data_files: - split: train path: "data/Genshi/*.parquet" - config_name: "Gentoo_Ebuild" data_files: - split: train path: "data/Gentoo_Ebuild/*.parquet" - config_name: "Gentoo_Eclass" data_files: - split: train 
path: "data/Gentoo_Eclass/*.parquet" - config_name: "Gerber_Image" data_files: - split: train path: "data/Gerber_Image/*.parquet" - config_name: "Gettext_Catalog" data_files: - split: train path: "data/Gettext_Catalog/*.parquet" - config_name: "Gherkin" data_files: - split: train path: "data/Gherkin/*.parquet" - config_name: "Git_Attributes" data_files: - split: train path: "data/Git_Attributes/*.parquet" - config_name: "Git_Config" data_files: - split: train path: "data/Git_Config/*.parquet" - config_name: "Git_Revision_List" data_files: - split: train path: "data/Git_Revision_List/*.parquet" - config_name: "Gleam" data_files: - split: train path: "data/Gleam/*.parquet" - config_name: "Glyph" data_files: - split: train path: "data/Glyph/*.parquet" - config_name: "Glyph_Bitmap_Distribution_Format" data_files: - split: train path: "data/Glyph_Bitmap_Distribution_Format/*.parquet" - config_name: "Gnuplot" data_files: - split: train path: "data/Gnuplot/*.parquet" - config_name: "Go" data_files: - split: train path: "data/Go/*.parquet" - config_name: "Go_Checksums" data_files: - split: train path: "data/Go_Checksums/*.parquet" - config_name: "Go_Module" data_files: - split: train path: "data/Go_Module/*.parquet" - config_name: "Golo" data_files: - split: train path: "data/Golo/*.parquet" - config_name: "Gosu" data_files: - split: train path: "data/Gosu/*.parquet" - config_name: "Grace" data_files: - split: train path: "data/Grace/*.parquet" - config_name: "Gradle" data_files: - split: train path: "data/Gradle/*.parquet" - config_name: "Grammatical_Framework" data_files: - split: train path: "data/Grammatical_Framework/*.parquet" - config_name: "GraphQL" data_files: - split: train path: "data/GraphQL/*.parquet" - config_name: "Graph_Modeling_Language" data_files: - split: train path: "data/Graph_Modeling_Language/*.parquet" - config_name: "Graphviz_(DOT)" data_files: - split: train path: "data/Graphviz_(DOT)/*.parquet" - config_name: "Groovy" data_files: - split: train 
path: "data/Groovy/*.parquet" - config_name: "Groovy_Server_Pages" data_files: - split: train path: "data/Groovy_Server_Pages/*.parquet" - config_name: "HAProxy" data_files: - split: train path: "data/HAProxy/*.parquet" - config_name: "HCL" data_files: - split: train path: "data/HCL/*.parquet" - config_name: "HLSL" data_files: - split: train path: "data/HLSL/*.parquet" - config_name: "HOCON" data_files: - split: train path: "data/HOCON/*.parquet" - config_name: "HTML" data_files: - split: train path: "data/HTML/*.parquet" - config_name: "HTML+ECR" data_files: - split: train path: "data/HTML+ECR/*.parquet" - config_name: "HTML+EEX" data_files: - split: train path: "data/HTML+EEX/*.parquet" - config_name: "HTML+ERB" data_files: - split: train path: "data/HTML+ERB/*.parquet" - config_name: "HTML+PHP" data_files: - split: train path: "data/HTML+PHP/*.parquet" - config_name: "HTML+Razor" data_files: - split: train path: "data/HTML+Razor/*.parquet" - config_name: "HTTP" data_files: - split: train path: "data/HTTP/*.parquet" - config_name: "HXML" data_files: - split: train path: "data/HXML/*.parquet" - config_name: "Hack" data_files: - split: train path: "data/Hack/*.parquet" - config_name: "Haml" data_files: - split: train path: "data/Haml/*.parquet" - config_name: "Handlebars" data_files: - split: train path: "data/Handlebars/*.parquet" - config_name: "Harbour" data_files: - split: train path: "data/Harbour/*.parquet" - config_name: "Haskell" data_files: - split: train path: "data/Haskell/*.parquet" - config_name: "Haxe" data_files: - split: train path: "data/Haxe/*.parquet" - config_name: "HiveQL" data_files: - split: train path: "data/HiveQL/*.parquet" - config_name: "HolyC" data_files: - split: train path: "data/HolyC/*.parquet" - config_name: "Hy" data_files: - split: train path: "data/Hy/*.parquet" - config_name: "HyPhy" data_files: - split: train path: "data/HyPhy/*.parquet" - config_name: "IDL" data_files: - split: train path: "data/IDL/*.parquet" - config_name: 
"IGOR_Pro" data_files: - split: train path: "data/IGOR_Pro/*.parquet" - config_name: "INI" data_files: - split: train path: "data/INI/*.parquet" - config_name: "IRC_log" data_files: - split: train path: "data/IRC_log/*.parquet" - config_name: "Idris" data_files: - split: train path: "data/Idris/*.parquet" - config_name: "Ignore_List" data_files: - split: train path: "data/Ignore_List/*.parquet" - config_name: "ImageJ_Macro" data_files: - split: train path: "data/ImageJ_Macro/*.parquet" - config_name: "Inform_7" data_files: - split: train path: "data/Inform_7/*.parquet" - config_name: "Inno_Setup" data_files: - split: train path: "data/Inno_Setup/*.parquet" - config_name: "Io" data_files: - split: train path: "data/Io/*.parquet" - config_name: "Ioke" data_files: - split: train path: "data/Ioke/*.parquet" - config_name: "Isabelle" data_files: - split: train path: "data/Isabelle/*.parquet" - config_name: "Isabelle_ROOT" data_files: - split: train path: "data/Isabelle_ROOT/*.parquet" - config_name: "J" data_files: - split: train path: "data/J/*.parquet" - config_name: "JAR_Manifest" data_files: - split: train path: "data/JAR_Manifest/*.parquet" - config_name: "JFlex" data_files: - split: train path: "data/JFlex/*.parquet" - config_name: "JSON" data_files: - split: train path: "data/JSON/*.parquet" - config_name: "JSON5" data_files: - split: train path: "data/JSON5/*.parquet" - config_name: "JSONLD" data_files: - split: train path: "data/JSONLD/*.parquet" - config_name: "JSON_with_Comments" data_files: - split: train path: "data/JSON_with_Comments/*.parquet" - config_name: "JSONiq" data_files: - split: train path: "data/JSONiq/*.parquet" - config_name: "Janet" data_files: - split: train path: "data/Janet/*.parquet" - config_name: "Jasmin" data_files: - split: train path: "data/Jasmin/*.parquet" - config_name: "Java" data_files: - split: train path: "data/Java/*.parquet" - config_name: "JavaScript" data_files: - split: train path: "data/JavaScript/*.parquet" - 
config_name: "JavaScript+ERB" data_files: - split: train path: "data/JavaScript+ERB/*.parquet" - config_name: "Java_Properties" data_files: - split: train path: "data/Java_Properties/*.parquet" - config_name: "Java_Server_Pages" data_files: - split: train path: "data/Java_Server_Pages/*.parquet" - config_name: "Jest_Snapshot" data_files: - split: train path: "data/Jest_Snapshot/*.parquet" - config_name: "JetBrains_MPS" data_files: - split: train path: "data/JetBrains_MPS/*.parquet" - config_name: "Jinja" data_files: - split: train path: "data/Jinja/*.parquet" - config_name: "Jison" data_files: - split: train path: "data/Jison/*.parquet" - config_name: "Jison_Lex" data_files: - split: train path: "data/Jison_Lex/*.parquet" - config_name: "Jolie" data_files: - split: train path: "data/Jolie/*.parquet" - config_name: "Jsonnet" data_files: - split: train path: "data/Jsonnet/*.parquet" - config_name: "Julia" data_files: - split: train path: "data/Julia/*.parquet" - config_name: "Jupyter_Notebook" data_files: - split: train path: "data/Jupyter_Notebook/*.parquet" - config_name: "KRL" data_files: - split: train path: "data/KRL/*.parquet" - config_name: "Kaitai_Struct" data_files: - split: train path: "data/Kaitai_Struct/*.parquet" - config_name: "KakouneScript" data_files: - split: train path: "data/KakouneScript/*.parquet" - config_name: "KiCad_Layout" data_files: - split: train path: "data/KiCad_Layout/*.parquet" - config_name: "KiCad_Legacy_Layout" data_files: - split: train path: "data/KiCad_Legacy_Layout/*.parquet" - config_name: "KiCad_Schematic" data_files: - split: train path: "data/KiCad_Schematic/*.parquet" - config_name: "Kit" data_files: - split: train path: "data/Kit/*.parquet" - config_name: "Kotlin" data_files: - split: train path: "data/Kotlin/*.parquet" - config_name: "Kusto" data_files: - split: train path: "data/Kusto/*.parquet" - config_name: "LFE" data_files: - split: train path: "data/LFE/*.parquet" - config_name: "LLVM" data_files: - split: train 
path: "data/LLVM/*.parquet" - config_name: "LOLCODE" data_files: - split: train path: "data/LOLCODE/*.parquet" - config_name: "LSL" data_files: - split: train path: "data/LSL/*.parquet" - config_name: "LTspice_Symbol" data_files: - split: train path: "data/LTspice_Symbol/*.parquet" - config_name: "LabVIEW" data_files: - split: train path: "data/LabVIEW/*.parquet" - config_name: "Lark" data_files: - split: train path: "data/Lark/*.parquet" - config_name: "Lasso" data_files: - split: train path: "data/Lasso/*.parquet" - config_name: "Latte" data_files: - split: train path: "data/Latte/*.parquet" - config_name: "Lean" data_files: - split: train path: "data/Lean/*.parquet" - config_name: "Less" data_files: - split: train path: "data/Less/*.parquet" - config_name: "Lex" data_files: - split: train path: "data/Lex/*.parquet" - config_name: "LigoLANG" data_files: - split: train path: "data/LigoLANG/*.parquet" - config_name: "LilyPond" data_files: - split: train path: "data/LilyPond/*.parquet" - config_name: "Limbo" data_files: - split: train path: "data/Limbo/*.parquet" - config_name: "Linker_Script" data_files: - split: train path: "data/Linker_Script/*.parquet" - config_name: "Linux_Kernel_Module" data_files: - split: train path: "data/Linux_Kernel_Module/*.parquet" - config_name: "Liquid" data_files: - split: train path: "data/Liquid/*.parquet" - config_name: "Literate_Agda" data_files: - split: train path: "data/Literate_Agda/*.parquet" - config_name: "Literate_CoffeeScript" data_files: - split: train path: "data/Literate_CoffeeScript/*.parquet" - config_name: "Literate_Haskell" data_files: - split: train path: "data/Literate_Haskell/*.parquet" - config_name: "LiveScript" data_files: - split: train path: "data/LiveScript/*.parquet" - config_name: "Logos" data_files: - split: train path: "data/Logos/*.parquet" - config_name: "Logtalk" data_files: - split: train path: "data/Logtalk/*.parquet" - config_name: "LookML" data_files: - split: train path: 
"data/LookML/*.parquet" - config_name: "LoomScript" data_files: - split: train path: "data/LoomScript/*.parquet" - config_name: "Lua" data_files: - split: train path: "data/Lua/*.parquet" - config_name: "M" data_files: - split: train path: "data/M/*.parquet" - config_name: "M4" data_files: - split: train path: "data/M4/*.parquet" - config_name: "M4Sugar" data_files: - split: train path: "data/M4Sugar/*.parquet" - config_name: "MATLAB" data_files: - split: train path: "data/MATLAB/*.parquet" - config_name: "MAXScript" data_files: - split: train path: "data/MAXScript/*.parquet" - config_name: "MLIR" data_files: - split: train path: "data/MLIR/*.parquet" - config_name: "MQL4" data_files: - split: train path: "data/MQL4/*.parquet" - config_name: "MQL5" data_files: - split: train path: "data/MQL5/*.parquet" - config_name: "MTML" data_files: - split: train path: "data/MTML/*.parquet" - config_name: "MUF" data_files: - split: train path: "data/MUF/*.parquet" - config_name: "Macaulay2" data_files: - split: train path: "data/Macaulay2/*.parquet" - config_name: "Makefile" data_files: - split: train path: "data/Makefile/*.parquet" - config_name: "Mako" data_files: - split: train path: "data/Mako/*.parquet" - config_name: "Markdown" data_files: - split: train path: "data/Markdown/*.parquet" - config_name: "Marko" data_files: - split: train path: "data/Marko/*.parquet" - config_name: "Mask" data_files: - split: train path: "data/Mask/*.parquet" - config_name: "Mathematica" data_files: - split: train path: "data/Mathematica/*.parquet" - config_name: "Maven_POM" data_files: - split: train path: "data/Maven_POM/*.parquet" - config_name: "Max" data_files: - split: train path: "data/Max/*.parquet" - config_name: "Mercury" data_files: - split: train path: "data/Mercury/*.parquet" - config_name: "Meson" data_files: - split: train path: "data/Meson/*.parquet" - config_name: "Metal" data_files: - split: train path: "data/Metal/*.parquet" - config_name: 
"Microsoft_Developer_Studio_Project" data_files: - split: train path: "data/Microsoft_Developer_Studio_Project/*.parquet" - config_name: "Microsoft_Visual_Studio_Solution" data_files: - split: train path: "data/Microsoft_Visual_Studio_Solution/*.parquet" - config_name: "MiniD" data_files: - split: train path: "data/MiniD/*.parquet" - config_name: "MiniYAML" data_files: - split: train path: "data/MiniYAML/*.parquet" - config_name: "Mint" data_files: - split: train path: "data/Mint/*.parquet" - config_name: "Mirah" data_files: - split: train path: "data/Mirah/*.parquet" - config_name: "Modelica" data_files: - split: train path: "data/Modelica/*.parquet" - config_name: "Modula-2" data_files: - split: train path: "data/Modula-2/*.parquet" - config_name: "Modula-3" data_files: - split: train path: "data/Modula-3/*.parquet" - config_name: "Module_Management_System" data_files: - split: train path: "data/Module_Management_System/*.parquet" - config_name: "Monkey" data_files: - split: train path: "data/Monkey/*.parquet" - config_name: "Monkey_C" data_files: - split: train path: "data/Monkey_C/*.parquet" - config_name: "Moocode" data_files: - split: train path: "data/Moocode/*.parquet" - config_name: "MoonScript" data_files: - split: train path: "data/MoonScript/*.parquet" - config_name: "Motoko" data_files: - split: train path: "data/Motoko/*.parquet" - config_name: "Motorola_68K_Assembly" data_files: - split: train path: "data/Motorola_68K_Assembly/*.parquet" - config_name: "Move" data_files: - split: train path: "data/Move/*.parquet" - config_name: "Muse" data_files: - split: train path: "data/Muse/*.parquet" - config_name: "Mustache" data_files: - split: train path: "data/Mustache/*.parquet" - config_name: "Myghty" data_files: - split: train path: "data/Myghty/*.parquet" - config_name: "NASL" data_files: - split: train path: "data/NASL/*.parquet" - config_name: "NCL" data_files: - split: train path: "data/NCL/*.parquet" - config_name: "NEON" data_files: - split: train 
path: "data/NEON/*.parquet" - config_name: "NL" data_files: - split: train path: "data/NL/*.parquet" - config_name: "NPM_Config" data_files: - split: train path: "data/NPM_Config/*.parquet" - config_name: "NSIS" data_files: - split: train path: "data/NSIS/*.parquet" - config_name: "NWScript" data_files: - split: train path: "data/NWScript/*.parquet" - config_name: "Nasal" data_files: - split: train path: "data/Nasal/*.parquet" - config_name: "Nearley" data_files: - split: train path: "data/Nearley/*.parquet" - config_name: "Nemerle" data_files: - split: train path: "data/Nemerle/*.parquet" - config_name: "NetLinx" data_files: - split: train path: "data/NetLinx/*.parquet" - config_name: "NetLinx+ERB" data_files: - split: train path: "data/NetLinx+ERB/*.parquet" - config_name: "NetLogo" data_files: - split: train path: "data/NetLogo/*.parquet" - config_name: "NewLisp" data_files: - split: train path: "data/NewLisp/*.parquet" - config_name: "Nextflow" data_files: - split: train path: "data/Nextflow/*.parquet" - config_name: "Nginx" data_files: - split: train path: "data/Nginx/*.parquet" - config_name: "Nim" data_files: - split: train path: "data/Nim/*.parquet" - config_name: "Ninja" data_files: - split: train path: "data/Ninja/*.parquet" - config_name: "Nit" data_files: - split: train path: "data/Nit/*.parquet" - config_name: "Nix" data_files: - split: train path: "data/Nix/*.parquet" - config_name: "Nu" data_files: - split: train path: "data/Nu/*.parquet" - config_name: "NumPy" data_files: - split: train path: "data/NumPy/*.parquet" - config_name: "Nunjucks" data_files: - split: train path: "data/Nunjucks/*.parquet" - config_name: "OCaml" data_files: - split: train path: "data/OCaml/*.parquet" - config_name: "ObjDump" data_files: - split: train path: "data/ObjDump/*.parquet" - config_name: "ObjectScript" data_files: - split: train path: "data/ObjectScript/*.parquet" - config_name: "Object_Data_Instance_Notation" data_files: - split: train path: 
"data/Object_Data_Instance_Notation/*.parquet" - config_name: "Objective-C" data_files: - split: train path: "data/Objective-C/*.parquet" - config_name: "Objective-C++" data_files: - split: train path: "data/Objective-C++/*.parquet" - config_name: "Objective-J" data_files: - split: train path: "data/Objective-J/*.parquet" - config_name: "Odin" data_files: - split: train path: "data/Odin/*.parquet" - config_name: "Omgrofl" data_files: - split: train path: "data/Omgrofl/*.parquet" - config_name: "Opa" data_files: - split: train path: "data/Opa/*.parquet" - config_name: "Opal" data_files: - split: train path: "data/Opal/*.parquet" - config_name: "OpenCL" data_files: - split: train path: "data/OpenCL/*.parquet" - config_name: "OpenEdge_ABL" data_files: - split: train path: "data/OpenEdge_ABL/*.parquet" - config_name: "OpenQASM" data_files: - split: train path: "data/OpenQASM/*.parquet" - config_name: "OpenRC_runscript" data_files: - split: train path: "data/OpenRC_runscript/*.parquet" - config_name: "OpenSCAD" data_files: - split: train path: "data/OpenSCAD/*.parquet" - config_name: "OpenStep_Property_List" data_files: - split: train path: "data/OpenStep_Property_List/*.parquet" - config_name: "OpenType_Feature_File" data_files: - split: train path: "data/OpenType_Feature_File/*.parquet" - config_name: "Open_Policy_Agent" data_files: - split: train path: "data/Open_Policy_Agent/*.parquet" - config_name: "Org" data_files: - split: train path: "data/Org/*.parquet" - config_name: "Ox" data_files: - split: train path: "data/Ox/*.parquet" - config_name: "Oxygene" data_files: - split: train path: "data/Oxygene/*.parquet" - config_name: "Oz" data_files: - split: train path: "data/Oz/*.parquet" - config_name: "P4" data_files: - split: train path: "data/P4/*.parquet" - config_name: "PEG.js" data_files: - split: train path: "data/PEG.js/*.parquet" - config_name: "PHP" data_files: - split: train path: "data/PHP/*.parquet" - config_name: "PLSQL" data_files: - split: train path: 
"data/PLSQL/*.parquet" - config_name: "PLpgSQL" data_files: - split: train path: "data/PLpgSQL/*.parquet" - config_name: "POV-Ray_SDL" data_files: - split: train path: "data/POV-Ray_SDL/*.parquet" - config_name: "Pan" data_files: - split: train path: "data/Pan/*.parquet" - config_name: "Papyrus" data_files: - split: train path: "data/Papyrus/*.parquet" - config_name: "Parrot" data_files: - split: train path: "data/Parrot/*.parquet" - config_name: "Parrot_Assembly" data_files: - split: train path: "data/Parrot_Assembly/*.parquet" - config_name: "Parrot_Internal_Representation" data_files: - split: train path: "data/Parrot_Internal_Representation/*.parquet" - config_name: "Pascal" data_files: - split: train path: "data/Pascal/*.parquet" - config_name: "Pawn" data_files: - split: train path: "data/Pawn/*.parquet" - config_name: "Pep8" data_files: - split: train path: "data/Pep8/*.parquet" - config_name: "Perl" data_files: - split: train path: "data/Perl/*.parquet" - config_name: "Pic" data_files: - split: train path: "data/Pic/*.parquet" - config_name: "Pickle" data_files: - split: train path: "data/Pickle/*.parquet" - config_name: "PicoLisp" data_files: - split: train path: "data/PicoLisp/*.parquet" - config_name: "PigLatin" data_files: - split: train path: "data/PigLatin/*.parquet" - config_name: "Pike" data_files: - split: train path: "data/Pike/*.parquet" - config_name: "PlantUML" data_files: - split: train path: "data/PlantUML/*.parquet" - config_name: "Pod" data_files: - split: train path: "data/Pod/*.parquet" - config_name: "Pod_6" data_files: - split: train path: "data/Pod_6/*.parquet" - config_name: "PogoScript" data_files: - split: train path: "data/PogoScript/*.parquet" - config_name: "Pony" data_files: - split: train path: "data/Pony/*.parquet" - config_name: "Portugol" data_files: - split: train path: "data/Portugol/*.parquet" - config_name: "PostCSS" data_files: - split: train path: "data/PostCSS/*.parquet" - config_name: "PostScript" data_files: - 
split: train path: "data/PostScript/*.parquet" - config_name: "PowerBuilder" data_files: - split: train path: "data/PowerBuilder/*.parquet" - config_name: "PowerShell" data_files: - split: train path: "data/PowerShell/*.parquet" - config_name: "Prisma" data_files: - split: train path: "data/Prisma/*.parquet" - config_name: "Processing" data_files: - split: train path: "data/Processing/*.parquet" - config_name: "Procfile" data_files: - split: train path: "data/Procfile/*.parquet" - config_name: "Proguard" data_files: - split: train path: "data/Proguard/*.parquet" - config_name: "Prolog" data_files: - split: train path: "data/Prolog/*.parquet" - config_name: "Promela" data_files: - split: train path: "data/Promela/*.parquet" - config_name: "Propeller_Spin" data_files: - split: train path: "data/Propeller_Spin/*.parquet" - config_name: "Protocol_Buffer" data_files: - split: train path: "data/Protocol_Buffer/*.parquet" - config_name: "Protocol_Buffer_Text_Format" data_files: - split: train path: "data/Protocol_Buffer_Text_Format/*.parquet" - config_name: "Public_Key" data_files: - split: train path: "data/Public_Key/*.parquet" - config_name: "Pug" data_files: - split: train path: "data/Pug/*.parquet" - config_name: "Puppet" data_files: - split: train path: "data/Puppet/*.parquet" - config_name: "PureBasic" data_files: - split: train path: "data/PureBasic/*.parquet" - config_name: "PureScript" data_files: - split: train path: "data/PureScript/*.parquet" - config_name: "Pure_Data" data_files: - split: train path: "data/Pure_Data/*.parquet" - config_name: "Python" data_files: - split: train path: "data/Python/*.parquet" - config_name: "Python_traceback" data_files: - split: train path: "data/Python_traceback/*.parquet" - config_name: "Q-Sharp" data_files: - split: train path: "data/Q-Sharp/*.parquet" - config_name: "QML" data_files: - split: train path: "data/QML/*.parquet" - config_name: "QMake" data_files: - split: train path: "data/QMake/*.parquet" - config_name: 
"Qt_Script" data_files: - split: train path: "data/Qt_Script/*.parquet" - config_name: "Quake" data_files: - split: train path: "data/Quake/*.parquet" - config_name: "R" data_files: - split: train path: "data/R/*.parquet" - config_name: "RAML" data_files: - split: train path: "data/RAML/*.parquet" - config_name: "RDoc" data_files: - split: train path: "data/RDoc/*.parquet" - config_name: "REALbasic" data_files: - split: train path: "data/REALbasic/*.parquet" - config_name: "REXX" data_files: - split: train path: "data/REXX/*.parquet" - config_name: "RMarkdown" data_files: - split: train path: "data/RMarkdown/*.parquet" - config_name: "RPC" data_files: - split: train path: "data/RPC/*.parquet" - config_name: "RPGLE" data_files: - split: train path: "data/RPGLE/*.parquet" - config_name: "RPM_Spec" data_files: - split: train path: "data/RPM_Spec/*.parquet" - config_name: "RUNOFF" data_files: - split: train path: "data/RUNOFF/*.parquet" - config_name: "Racket" data_files: - split: train path: "data/Racket/*.parquet" - config_name: "Ragel" data_files: - split: train path: "data/Ragel/*.parquet" - config_name: "Raku" data_files: - split: train path: "data/Raku/*.parquet" - config_name: "Rascal" data_files: - split: train path: "data/Rascal/*.parquet" - config_name: "Raw_token_data" data_files: - split: train path: "data/Raw_token_data/*.parquet" - config_name: "ReScript" data_files: - split: train path: "data/ReScript/*.parquet" - config_name: "Readline_Config" data_files: - split: train path: "data/Readline_Config/*.parquet" - config_name: "Reason" data_files: - split: train path: "data/Reason/*.parquet" - config_name: "ReasonLIGO" data_files: - split: train path: "data/ReasonLIGO/*.parquet" - config_name: "Rebol" data_files: - split: train path: "data/Rebol/*.parquet" - config_name: "Record_Jar" data_files: - split: train path: "data/Record_Jar/*.parquet" - config_name: "Red" data_files: - split: train path: "data/Red/*.parquet" - config_name: "Redcode" data_files: - 
split: train path: "data/Redcode/*.parquet" - config_name: "Redirect_Rules" data_files: - split: train path: "data/Redirect_Rules/*.parquet" - config_name: "Regular_Expression" data_files: - split: train path: "data/Regular_Expression/*.parquet" - config_name: "Ren-Py" data_files: - split: train path: "data/Ren-Py/*.parquet" - config_name: "RenderScript" data_files: - split: train path: "data/RenderScript/*.parquet" - config_name: "Rich_Text_Format" data_files: - split: train path: "data/Rich_Text_Format/*.parquet" - config_name: "Ring" data_files: - split: train path: "data/Ring/*.parquet" - config_name: "Riot" data_files: - split: train path: "data/Riot/*.parquet" - config_name: "RobotFramework" data_files: - split: train path: "data/RobotFramework/*.parquet" - config_name: "Roff" data_files: - split: train path: "data/Roff/*.parquet" - config_name: "Roff_Manpage" data_files: - split: train path: "data/Roff_Manpage/*.parquet" - config_name: "Rouge" data_files: - split: train path: "data/Rouge/*.parquet" - config_name: "RouterOS_Script" data_files: - split: train path: "data/RouterOS_Script/*.parquet" - config_name: "Ruby" data_files: - split: train path: "data/Ruby/*.parquet" - config_name: "Rust" data_files: - split: train path: "data/Rust/*.parquet" - config_name: "SAS" data_files: - split: train path: "data/SAS/*.parquet" - config_name: "SCSS" data_files: - split: train path: "data/SCSS/*.parquet" - config_name: "SELinux_Policy" data_files: - split: train path: "data/SELinux_Policy/*.parquet" - config_name: "SMT" data_files: - split: train path: "data/SMT/*.parquet" - config_name: "SPARQL" data_files: - split: train path: "data/SPARQL/*.parquet" - config_name: "SQF" data_files: - split: train path: "data/SQF/*.parquet" - config_name: "SQL" data_files: - split: train path: "data/SQL/*.parquet" - config_name: "SQLPL" data_files: - split: train path: "data/SQLPL/*.parquet" - config_name: "SRecode_Template" data_files: - split: train path: 
"data/SRecode_Template/*.parquet" - config_name: "SSH_Config" data_files: - split: train path: "data/SSH_Config/*.parquet" - config_name: "STAR" data_files: - split: train path: "data/STAR/*.parquet" - config_name: "STL" data_files: - split: train path: "data/STL/*.parquet" - config_name: "STON" data_files: - split: train path: "data/STON/*.parquet" - config_name: "SVG" data_files: - split: train path: "data/SVG/*.parquet" - config_name: "SWIG" data_files: - split: train path: "data/SWIG/*.parquet" - config_name: "Sage" data_files: - split: train path: "data/Sage/*.parquet" - config_name: "SaltStack" data_files: - split: train path: "data/SaltStack/*.parquet" - config_name: "Sass" data_files: - split: train path: "data/Sass/*.parquet" - config_name: "Scala" data_files: - split: train path: "data/Scala/*.parquet" - config_name: "Scaml" data_files: - split: train path: "data/Scaml/*.parquet" - config_name: "Scheme" data_files: - split: train path: "data/Scheme/*.parquet" - config_name: "Scilab" data_files: - split: train path: "data/Scilab/*.parquet" - config_name: "Self" data_files: - split: train path: "data/Self/*.parquet" - config_name: "ShaderLab" data_files: - split: train path: "data/ShaderLab/*.parquet" - config_name: "Shell" data_files: - split: train path: "data/Shell/*.parquet" - config_name: "ShellCheck_Config" data_files: - split: train path: "data/ShellCheck_Config/*.parquet" - config_name: "ShellSession" data_files: - split: train path: "data/ShellSession/*.parquet" - config_name: "Shen" data_files: - split: train path: "data/Shen/*.parquet" - config_name: "Sieve" data_files: - split: train path: "data/Sieve/*.parquet" - config_name: "Singularity" data_files: - split: train path: "data/Singularity/*.parquet" - config_name: "Slash" data_files: - split: train path: "data/Slash/*.parquet" - config_name: "Slice" data_files: - split: train path: "data/Slice/*.parquet" - config_name: "Slim" data_files: - split: train path: "data/Slim/*.parquet" - 
config_name: "SmPL" data_files: - split: train path: "data/SmPL/*.parquet" - config_name: "Smali" data_files: - split: train path: "data/Smali/*.parquet" - config_name: "Smalltalk" data_files: - split: train path: "data/Smalltalk/*.parquet" - config_name: "Smarty" data_files: - split: train path: "data/Smarty/*.parquet" - config_name: "Solidity" data_files: - split: train path: "data/Solidity/*.parquet" - config_name: "Soong" data_files: - split: train path: "data/Soong/*.parquet" - config_name: "SourcePawn" data_files: - split: train path: "data/SourcePawn/*.parquet" - config_name: "Spline_Font_Database" data_files: - split: train path: "data/Spline_Font_Database/*.parquet" - config_name: "Squirrel" data_files: - split: train path: "data/Squirrel/*.parquet" - config_name: "Stan" data_files: - split: train path: "data/Stan/*.parquet" - config_name: "Standard_ML" data_files: - split: train path: "data/Standard_ML/*.parquet" - config_name: "Starlark" data_files: - split: train path: "data/Starlark/*.parquet" - config_name: "Stata" data_files: - split: train path: "data/Stata/*.parquet" - config_name: "StringTemplate" data_files: - split: train path: "data/StringTemplate/*.parquet" - config_name: "Stylus" data_files: - split: train path: "data/Stylus/*.parquet" - config_name: "SubRip_Text" data_files: - split: train path: "data/SubRip_Text/*.parquet" - config_name: "SugarSS" data_files: - split: train path: "data/SugarSS/*.parquet" - config_name: "SuperCollider" data_files: - split: train path: "data/SuperCollider/*.parquet" - config_name: "Svelte" data_files: - split: train path: "data/Svelte/*.parquet" - config_name: "Swift" data_files: - split: train path: "data/Swift/*.parquet" - config_name: "SystemVerilog" data_files: - split: train path: "data/SystemVerilog/*.parquet" - config_name: "TI_Program" data_files: - split: train path: "data/TI_Program/*.parquet" - config_name: "TLA" data_files: - split: train path: "data/TLA/*.parquet" - config_name: "TOML" 
data_files: - split: train path: "data/TOML/*.parquet" - config_name: "TSQL" data_files: - split: train path: "data/TSQL/*.parquet" - config_name: "TSV" data_files: - split: train path: "data/TSV/*.parquet" - config_name: "TSX" data_files: - split: train path: "data/TSX/*.parquet" - config_name: "TXL" data_files: - split: train path: "data/TXL/*.parquet" - config_name: "Talon" data_files: - split: train path: "data/Talon/*.parquet" - config_name: "Tcl" data_files: - split: train path: "data/Tcl/*.parquet" - config_name: "Tcsh" data_files: - split: train path: "data/Tcsh/*.parquet" - config_name: "TeX" data_files: - split: train path: "data/TeX/*.parquet" - config_name: "Tea" data_files: - split: train path: "data/Tea/*.parquet" - config_name: "Terra" data_files: - split: train path: "data/Terra/*.parquet" - config_name: "Texinfo" data_files: - split: train path: "data/Texinfo/*.parquet" - config_name: "Text" data_files: - split: train path: "data/Text/*.parquet" - config_name: "TextMate_Properties" data_files: - split: train path: "data/TextMate_Properties/*.parquet" - config_name: "Textile" data_files: - split: train path: "data/Textile/*.parquet" - config_name: "Thrift" data_files: - split: train path: "data/Thrift/*.parquet" - config_name: "Turing" data_files: - split: train path: "data/Turing/*.parquet" - config_name: "Turtle" data_files: - split: train path: "data/Turtle/*.parquet" - config_name: "Twig" data_files: - split: train path: "data/Twig/*.parquet" - config_name: "TypeScript" data_files: - split: train path: "data/TypeScript/*.parquet" - config_name: "Type_Language" data_files: - split: train path: "data/Type_Language/*.parquet" - config_name: "Unified_Parallel_C" data_files: - split: train path: "data/Unified_Parallel_C/*.parquet" - config_name: "Unity3D_Asset" data_files: - split: train path: "data/Unity3D_Asset/*.parquet" - config_name: "Unix_Assembly" data_files: - split: train path: "data/Unix_Assembly/*.parquet" - config_name: "Uno" data_files: 
- split: train path: "data/Uno/*.parquet" - config_name: "UnrealScript" data_files: - split: train path: "data/UnrealScript/*.parquet" - config_name: "UrWeb" data_files: - split: train path: "data/UrWeb/*.parquet" - config_name: "V" data_files: - split: train path: "data/V/*.parquet" - config_name: "VBA" data_files: - split: train path: "data/VBA/*.parquet" - config_name: "VBScript" data_files: - split: train path: "data/VBScript/*.parquet" - config_name: "VCL" data_files: - split: train path: "data/VCL/*.parquet" - config_name: "VHDL" data_files: - split: train path: "data/VHDL/*.parquet" - config_name: "Vala" data_files: - split: train path: "data/Vala/*.parquet" - config_name: "Valve_Data_Format" data_files: - split: train path: "data/Valve_Data_Format/*.parquet" - config_name: "Velocity_Template_Language" data_files: - split: train path: "data/Velocity_Template_Language/*.parquet" - config_name: "Verilog" data_files: - split: train path: "data/Verilog/*.parquet" - config_name: "Vim_Help_File" data_files: - split: train path: "data/Vim_Help_File/*.parquet" - config_name: "Vim_Script" data_files: - split: train path: "data/Vim_Script/*.parquet" - config_name: "Vim_Snippet" data_files: - split: train path: "data/Vim_Snippet/*.parquet" - config_name: "Visual_Basic_.NET" data_files: - split: train path: "data/Visual_Basic_.NET/*.parquet" - config_name: "Volt" data_files: - split: train path: "data/Volt/*.parquet" - config_name: "Vue" data_files: - split: train path: "data/Vue/*.parquet" - config_name: "Vyper" data_files: - split: train path: "data/Vyper/*.parquet" - config_name: "Wavefront_Material" data_files: - split: train path: "data/Wavefront_Material/*.parquet" - config_name: "Wavefront_Object" data_files: - split: train path: "data/Wavefront_Object/*.parquet" - config_name: "WebAssembly" data_files: - split: train path: "data/WebAssembly/*.parquet" - config_name: "WebIDL" data_files: - split: train path: "data/WebIDL/*.parquet" - config_name: "WebVTT" 
data_files: - split: train path: "data/WebVTT/*.parquet" - config_name: "Web_Ontology_Language" data_files: - split: train path: "data/Web_Ontology_Language/*.parquet" - config_name: "Wget_Config" data_files: - split: train path: "data/Wget_Config/*.parquet" - config_name: "Whiley" data_files: - split: train path: "data/Whiley/*.parquet" - config_name: "Wikitext" data_files: - split: train path: "data/Wikitext/*.parquet" - config_name: "Win32_Message_File" data_files: - split: train path: "data/Win32_Message_File/*.parquet" - config_name: "Windows_Registry_Entries" data_files: - split: train path: "data/Windows_Registry_Entries/*.parquet" - config_name: "Witcher_Script" data_files: - split: train path: "data/Witcher_Script/*.parquet" - config_name: "Wollok" data_files: - split: train path: "data/Wollok/*.parquet" - config_name: "World_of_Warcraft_Addon_Data" data_files: - split: train path: "data/World_of_Warcraft_Addon_Data/*.parquet" - config_name: "Wren" data_files: - split: train path: "data/Wren/*.parquet" - config_name: "X10" data_files: - split: train path: "data/X10/*.parquet" - config_name: "XC" data_files: - split: train path: "data/XC/*.parquet" - config_name: "XCompose" data_files: - split: train path: "data/XCompose/*.parquet" - config_name: "XML" data_files: - split: train path: "data/XML/*.parquet" - config_name: "XML_Property_List" data_files: - split: train path: "data/XML_Property_List/*.parquet" - config_name: "XPages" data_files: - split: train path: "data/XPages/*.parquet" - config_name: "XProc" data_files: - split: train path: "data/XProc/*.parquet" - config_name: "XQuery" data_files: - split: train path: "data/XQuery/*.parquet" - config_name: "XS" data_files: - split: train path: "data/XS/*.parquet" - config_name: "XSLT" data_files: - split: train path: "data/XSLT/*.parquet" - config_name: "X_BitMap" data_files: - split: train path: "data/X_BitMap/*.parquet" - config_name: "X_Font_Directory_Index" data_files: - split: train path: 
"data/X_Font_Directory_Index/*.parquet" - config_name: "X_PixMap" data_files: - split: train path: "data/X_PixMap/*.parquet" - config_name: "Xojo" data_files: - split: train path: "data/Xojo/*.parquet" - config_name: "Xonsh" data_files: - split: train path: "data/Xonsh/*.parquet" - config_name: "Xtend" data_files: - split: train path: "data/Xtend/*.parquet" - config_name: "YAML" data_files: - split: train path: "data/YAML/*.parquet" - config_name: "YANG" data_files: - split: train path: "data/YANG/*.parquet" - config_name: "YARA" data_files: - split: train path: "data/YARA/*.parquet" - config_name: "YASnippet" data_files: - split: train path: "data/YASnippet/*.parquet" - config_name: "Yacc" data_files: - split: train path: "data/Yacc/*.parquet" - config_name: "Yul" data_files: - split: train path: "data/Yul/*.parquet" - config_name: "ZAP" data_files: - split: train path: "data/ZAP/*.parquet" - config_name: "ZIL" data_files: - split: train path: "data/ZIL/*.parquet" - config_name: "Zeek" data_files: - split: train path: "data/Zeek/*.parquet" - config_name: "ZenScript" data_files: - split: train path: "data/ZenScript/*.parquet" - config_name: "Zephir" data_files: - split: train path: "data/Zephir/*.parquet" - config_name: "Zig" data_files: - split: train path: "data/Zig/*.parquet" - config_name: "Zimpl" data_files: - split: train path: "data/Zimpl/*.parquet" - config_name: "cURL_Config" data_files: - split: train path: "data/cURL_Config/*.parquet" - config_name: "desktop" data_files: - split: train path: "data/desktop/*.parquet" - config_name: "dircolors" data_files: - split: train path: "data/dircolors/*.parquet" - config_name: "eC" data_files: - split: train path: "data/eC/*.parquet" - config_name: "edn" data_files: - split: train path: "data/edn/*.parquet" - config_name: "fish" data_files: - split: train path: "data/fish/*.parquet" - config_name: "hoon" data_files: - split: train path: "data/hoon/*.parquet" - config_name: "jq" data_files: - split: train path: 
"data/jq/*.parquet" - config_name: "kvlang" data_files: - split: train path: "data/kvlang/*.parquet" - config_name: "mIRC_Script" data_files: - split: train path: "data/mIRC_Script/*.parquet" - config_name: "mcfunction" data_files: - split: train path: "data/mcfunction/*.parquet" - config_name: "mupad" data_files: - split: train path: "data/mupad/*.parquet" - config_name: "nanorc" data_files: - split: train path: "data/nanorc/*.parquet" - config_name: "nesC" data_files: - split: train path: "data/nesC/*.parquet" - config_name: "ooc" data_files: - split: train path: "data/ooc/*.parquet" - config_name: "q" data_files: - split: train path: "data/q/*.parquet" - config_name: "reStructuredText" data_files: - split: train path: "data/reStructuredText/*.parquet" - config_name: "robots.txt" data_files: - split: train path: "data/robots.txt/*.parquet" - config_name: "sed" data_files: - split: train path: "data/sed/*.parquet" - config_name: "wdl" data_files: - split: train path: "data/wdl/*.parquet" - config_name: "wisp" data_files: - split: train path: "data/wisp/*.parquet" - config_name: "xBase" data_files: - split: train path: "data/xBase/*.parquet" --- # The Stack v2 <center> <img src="https://huggingface.co/datasets/bigcode/admin_private/resolve/main/thestackv2_banner.png" alt="Stackv2" width="900" height="600"> </center> ## Dataset Description - **Homepage:** https://www.bigcode-project.org/ - **Repository:** https://github.com/bigcode-project - **Paper:** [Link](https://huggingface.co/papers/2402.19173) - **Point of Contact:** contact@bigcode-project.org The dataset consists of 4 versions: - [`bigcode/the-stack-v2`](https://huggingface.co/datasets/bigcode/the-stack-v2): the full "The Stack v2" dataset - [`bigcode/the-stack-v2-dedup`](https://huggingface.co/datasets/bigcode/the-stack-v2-dedup): based on the `bigcode/the-stack-v2` but further near-deduplicated **<-- you are here** - 
[`bigcode/the-stack-v2-train-full-ids`](https://huggingface.co/datasets/bigcode/the-stack-v2-train-full-ids): based on the `bigcode/the-stack-v2-dedup` dataset but further filtered with heuristics and spanning 600+ programming languages. The data is grouped into repositories.
- [`bigcode/the-stack-v2-train-smol-ids`](https://huggingface.co/datasets/bigcode/the-stack-v2-train-smol-ids): based on the `bigcode/the-stack-v2-dedup` dataset but further filtered with heuristics and spanning 17 programming languages. The data is grouped into repositories.

**These datasets only contain the SWHIDs needed to download the code files, not the contents of the files themselves. See the examples below for how to download file contents. We are working on making the training datasets available in the coming weeks.**

The Stack v2 is significantly larger than v1:

||The Stack v1|The Stack v2|
|-|-|-|
| full | 6.4TB | 67.5TB |
| dedup | 2.9TB | 32.1TB |
| train (full) | ~200B tokens | ~900B tokens |

### Changelog

|Release|Description|
|-|-|
| v2.0.1 | Removed repositories that opted out before 2023-10-20. StarCoder2 was trained on this version |
| v2.0 | Initial release of the Stack v2 |

### Dataset Summary

The Stack v2 contains over 3B files in 600+ programming and markup languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems that enable the synthesis of programs from natural language descriptions as well as from other code snippets. This dataset is derived from the Software Heritage archive, the largest public archive of software source code and accompanying development history.
Software Heritage is an open, non-profit initiative to collect, preserve, and share the source code of all publicly available software, launched by Inria in partnership with UNESCO. We acknowledge Software Heritage for providing access to this invaluable resource. For more details, visit the [Software Heritage website](https://www.softwareheritage.org).

### Languages

The dataset contains 658 languages. The full list can be found in the [language stats table](https://huggingface.co/datasets/bigcode/the-stack-v2/blob/main/language_stats.csv).

### How to use it

```python
from datasets import load_dataset

# full dataset (file IDs only)
ds = load_dataset("bigcode/the-stack-v2-dedup", split="train")

# specific language (e.g. Dockerfiles)
ds = load_dataset("bigcode/the-stack-v2-dedup", "Dockerfile", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("bigcode/the-stack-v2-dedup", streaming=True, split="train")
for sample in iter(ds):
    print(sample)
```

#### Downloading the file contents

The file contents are stored in the Software Heritage S3 bucket to ensure data compliance. Downloading data in bulk requires an agreement with Software Heritage and Inria, as stated in the dataset agreement. Make sure to configure your environment with your [AWS credentials](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/index.html#examples).
```bash
pip install "smart_open[s3]"
```

```python
import os

import boto3
from smart_open import open
from datasets import load_dataset

session = boto3.Session(
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"])
s3 = session.client("s3")

def download_contents(blob_id, src_encoding):
    s3_url = f"s3://softwareheritage/content/{blob_id}"
    with open(s3_url, "rb", compression=".gz", transport_params={"client": s3}) as fin:
        content = fin.read().decode(src_encoding)
    return {"content": content}

ds = load_dataset("bigcode/the-stack-v2-dedup", split="train", streaming=True)
ds = ds.map(lambda row: download_contents(row["blob_id"], row["src_encoding"]))
for row in ds:
    print(row["content"])
    break
```

## Dataset Structure

### Data Fields

* `blob_id` (`string`): Software Heritage (SWH) ID of the file on AWS S3.
* `directory_id` (`string`): SWH ID of the root directory of the repository.
* `path` (`string`): The file path within the repository.
* `content_id` (`string`): SWH content ID.
* `detected_licenses` (`string[]`): List of licenses (SPDX) detected by ScanCode.
* `license_type` (`string`): Inferred license type (`permissive` or `no_license`).
* `repo_name` (`string`): Repository name on GitHub.
* `snapshot_id` (`string`): SWH snapshot ID.
* `revision_id` (`string`): SWH revision (commit) ID.
* `branch_name` (`string`): Repository branch name.
* `visit_date` (`timestamp[ns]`): SWH crawl (snapshot) timestamp.
* `revision_date` (`timestamp[ns]`): SWH revision (commit) timestamp.
* `committer_date` (`timestamp[ns]`): SWH revision (commit) timestamp reported by the committer.
* `github_id` (`int64`): GitHub identifier for the repository.
* `star_events_count` (`int64`): Number of stars, calculated from GHArchive events.
* `fork_events_count` (`int64`): Number of forks, calculated from GHArchive events.
* `gha_license_id` (`string`): GHArchive SPDX license identifier, `None` if the repo is missing.
* `gha_event_created_at` (`timestamp[ns]`): Timestamp of the latest event on GHArchive for this repository.
* `gha_created_at` (`timestamp[ns]`): Timestamp of repository creation on GitHub, `None` if the repo is missing.
* `gha_language` (`string`): Repository's primary programming language on GitHub, `None` if the repo is missing.
* `src_encoding` (`string`): Original encoding of the file content before converting to UTF-8.
* `language` (`string`): Programming language of the file, detected by `go-enry / linguist`.
* `is_vendor` (`bool`): Indicator of vendor file (external library), detected by `go-enry`.
* `is_generated` (`bool`): Indicator of generated file (external library), detected by `go-enry`.
* `length_bytes` (`int64`): Length of the file content in UTF-8 bytes.
* `extension` (`string`): File extension.

### Data Splits

The dataset has no splits and all data is loaded as the train split by default. If you want to set up a custom train-test split, beware that the dataset contains a lot of near-duplicates which can cause leakage into the test split.

## Dataset Creation

For more information on the dataset creation pipeline please refer to the [technical report](https://huggingface.co/papers/2402.19173).

### Curation Rationale

One of the challenges faced by researchers working on code LLMs is the lack of openness and transparency around the development of these systems. Most prior works described the high-level data collection process but did not release the training data. It is therefore difficult for other researchers to fully reproduce these models and understand what kind of pre-training data leads to high-performing code LLMs. By releasing an open large-scale code dataset we hope to make training of code LLMs more reproducible.
### Source Data

#### Data Collection

3.28B unique files belonging to 104.2M GitHub repositories were collected by traversing the Software Heritage [2023-09-06](https://docs.softwareheritage.org/devel/swh-dataset/graph/dataset.html#graph-dataset-2023-09-06) graph dataset. Additional repository-level metadata was collected from [GitHub Archive](https://www.gharchive.org/) data up to 2023-09-14. The total uncompressed size of all files is 67.53TB.

Near-deduplication was implemented in the pre-processing pipeline on top of exact deduplication. Roughly 40% of permissively licensed files were (near-)duplicates.

The following are not stored:
* Files that cannot contribute to training code: binary, empty, could not be decoded
* Files larger than 10MB

**Training Datasets**: For the training datasets, the programming languages were filtered further to 17 and 600+ for the `the-stack-v2-smol-ids` and `the-stack-v2-full-ids` datasets, respectively. In addition, heuristics were applied to further increase the quality of the dataset. The code files are also grouped into repositories to allow pretraining with full repository context. For more details see the [technical report](https://huggingface.co/papers/2402.19173).

##### License detection

We extract repository-level license information from [GH Archive](https://www.gharchive.org/) for all repositories with matching names in the SWH dataset. When the repo-level license is not available, i.e., for 96.93% of repositories, we use the [ScanCode Toolkit](https://github.com/nexB/scancode-toolkit) to detect file-level licenses as follows:

* Find all filenames that could contain a license (e.g., LICENSE, MIT.txt, Apache2.0) or contain a reference to the license (e.g., README.md, GUIDELINES);
* Apply ScanCode's license detection to the matching files and gather the SPDX IDs of the detected licenses;
* Propagate the detected licenses to all files that have the same base path within the repository as the license file.
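The propagation step can be sketched as follows. This is a simplified illustration rather than the actual pipeline code: it treats "same base path" as "same directory", and the helper name and sample repository layout are hypothetical.

```python
import os

def propagate_licenses(license_files, repo_files):
    """Assign each repo file the SPDX IDs detected in license files
    that share its base path (here simplified to: same directory)."""
    licenses_by_dir = {}
    for path, spdx_ids in license_files.items():
        licenses_by_dir.setdefault(os.path.dirname(path), set()).update(spdx_ids)
    return {
        path: sorted(licenses_by_dir.get(os.path.dirname(path), set()))
        for path in repo_files
    }

# Illustrative repository layout (not from the dataset)
license_files = {"LICENSE": {"MIT"}, "vendor/LICENSE.txt": {"Apache-2.0"}}
repo_files = ["main.py", "vendor/lib.c", "LICENSE", "vendor/LICENSE.txt"]
print(propagate_licenses(license_files, repo_files))
```

In this sketch, `main.py` inherits the MIT license detected at the repository root, while `vendor/lib.c` inherits the Apache-2.0 license from the license file in its own directory.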
The licenses we consider permissive are listed [here](https://huggingface.co/datasets/bigcode/the-stack-v2/blob/main/license_stats.csv). This list was compiled from the licenses approved by the [Blue Oak Council](https://blueoakcouncil.org/list), as well as licenses categorized as "Permissive" or "Public Domain" by [ScanCode](https://scancode-licensedb.aboutcode.org/).

#### Who are the source language producers?

The source (code) language producers are users of GitHub that created unique repository names up until 2023-09-06 (cutoff date).

### Personal and Sensitive Information

The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub. Deduplication has helped to reduce the amount of sensitive data that may exist. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to contact@bigcode-project.org.

### Opting out of The Stack v2

We are giving developers the ability to have their code removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools.

You can check if your code is in The Stack v2 with the following ["Am I In The Stack?" Space](https://huggingface.co/spaces/bigcode/in-the-stack). If you'd like to have your data removed from the dataset follow the [instructions on GitHub](https://github.com/bigcode-project/opt-out-v2).
## Considerations for Using the Data

### Social Impact of Dataset

The Stack v2 is an output of the BigCode Project. BigCode aims to be responsible by design and by default. The project is conducted in the spirit of Open Science, focused on the responsible development of LLMs for code.

With the release of The Stack v2, we aim to increase access, reproducibility, and transparency of code LLMs in the research community. Work to de-risk and improve on the implementation of ethical best practices of code LLMs is conducted in various BigCode working groups. The Legal, Ethics, and Governance working group has explored topics such as licensing (including copyleft and the intended use of permissively licensed code), attribution of generated code to original code, rights to restrict processing, the inclusion of Personally Identifiable Information (PII), and risks of malicious code, among other topics. This work is ongoing as of October 25th, 2022.

We expect code LLMs to enable people from diverse backgrounds to write higher quality code and develop low-code applications. Mission-critical software could become easier to maintain as professional developers are guided by code-generating systems on how to write more robust and efficient code. While the social impact is intended to be positive, the increased accessibility of code LLMs comes with certain risks such as over-reliance on the generated code and long-term effects on the software development job market.

A broader impact analysis relating to Code LLMs can be found in section 7 of this [paper](https://arxiv.org/abs/2107.03374). An in-depth risk assessment for Code LLMs can be found in section 4 of this [paper](https://arxiv.org/abs/2207.14157).

### Discussion of Biases

The code collected from GitHub does not contain demographic information or proxy information about the demographics.
However, it is not without risks, as the comments within the code may contain harmful or offensive language, which could be learned by the models.

Widely adopted programming languages like C and JavaScript are overrepresented compared to niche programming languages like Julia and Scala. Some programming languages such as SQL, Batchfile, and TypeScript are less likely to be permissively licensed (4% vs the average 10%). This may result in a biased representation of those languages. Permissively licensed files also tend to be longer. The majority of natural language present in code from GitHub is English.

### Other Known Limitations

One of the current limitations of The Stack v2 is that scraped HTML for websites may not be compliant with Web Content Accessibility Guidelines ([WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/)). This could have an impact on HTML-generated code that may introduce web accessibility issues.

The training dataset could contain malicious code and/or the model could be used to generate malware or ransomware.

To the best of our knowledge, all files contained in the dataset are licensed with one of the permissive licenses (see list in [Licensing information](#licensing-information)) or no license. The accuracy of license attribution is limited by the accuracy of GHArchive and ScanCode Toolkit. Any mistakes should be reported to BigCode Project for review and follow-up as needed.

## Additional Information

### Dataset Curators

1. Harm de Vries, ServiceNow Research, harm.devries@servicenow.com
2. Leandro von Werra, Hugging Face, leandro@huggingface.co

### Licensing Information

The Stack v2 is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
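As an illustration of how that per-row provenance could support attribution, a user might assemble a notice from the `repo_name`, `path`, and `detected_licenses` fields. This is a sketch, not an official tool; the helper function and the sample row below are hypothetical.

```python
def attribution_notice(row):
    """Build a human-readable attribution line from a dataset row,
    using the provenance fields documented above."""
    licenses = ", ".join(row["detected_licenses"]) or "no license detected"
    return (f"Source: https://github.com/{row['repo_name']} "
            f"(path: {row['path']}, licenses: {licenses})")

# Illustrative row; real rows come from the dataset itself.
row = {
    "repo_name": "example-org/example-repo",
    "path": "src/utils.py",
    "detected_licenses": ["MIT"],
}
print(attribution_notice(row))
```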
The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack-v2/blob/main/license_stats.csv).

### Citation Information

```bibtex
@misc{lozhkov2024starcoder,
  title={StarCoder 2 and The Stack v2: The Next Generation},
  author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
  year={2024},
  eprint={2402.19173},
  archivePrefix={arXiv},
  primaryClass={cs.SE}
}
```
freshpearYoon/train_free_22
---
dataset_info:
  features:
  - name: input_features
    sequence:
      sequence: float32
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 9604558456
    num_examples: 10000
  download_size: 1346862978
  dataset_size: 9604558456
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
anakib1/audioset
---
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        decode: false
  - name: label
    dtype:
      class_label:
        names:
          '0': bark
          '1': chatter
          '2': engine
          '3': footsteps
          '4': gunshot
          '5': hammer
          '6': helicopter
          '7': rain
          '8': siren
          '9': speech_babble
          '10': telephone
          '11': wind
  splits:
  - name: train
    num_bytes: 1010502551.0324244
    num_examples: 4029
  - name: test
    num_bytes: 294675672.0
    num_examples: 1107
  download_size: 1296848176
  dataset_size: 1305178223.0324244
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
severo/CodeGen4Libs
---
license: mit
tags:
- code-generation
pretty_name: CodeGen4Libs Dataset
size_categories:
- 100K<n<1M
---

# Dataset Card for FudanSELab CodeGen4Libs Dataset

## Dataset Description

- **Repository:** [GitHub Repository](https://github.com/FudanSELab/codegen4libs)
- **Paper:** [CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation](https://mingwei-liu.github.io/publication/2023-08-18-ase-CodeGen4Libs)

### Dataset Summary

This dataset is used in the ASE2023 paper titled ["CodeGen4Libs: A Two-stage Approach for Library-oriented Code Generation"](https://mingwei-liu.github.io/publication/2023-08-18-ase-CodeGen4Libs).

### Languages

[More Information Needed]

## Dataset Structure

```python
from datasets import load_dataset

dataset = load_dataset("FudanSELab/CodeGen4Libs")

DatasetDict({
    train: Dataset({
        features: ['id', 'method', 'clean_method', 'doc', 'comment', 'method_name', 'extra', 'imports_info', 'libraries_info', 'input_str', 'input_ids', 'tokenized_input_str', 'input_token_length', 'labels', 'tokenized_labels_str', 'labels_token_length', 'retrieved_imports_info', 'retrieved_code', 'imports', 'cluster_imports_info', 'libraries', 'attention_mask'],
        num_rows: 391811
    })
    validation: Dataset({
        features: ['id', 'method', 'clean_method', 'doc', 'comment', 'method_name', 'extra', 'imports_info', 'libraries_info', 'input_str', 'input_ids', 'tokenized_input_str', 'input_token_length', 'labels', 'tokenized_labels_str', 'labels_token_length', 'retrieved_imports_info', 'retrieved_code', 'imports', 'cluster_imports_info', 'libraries', 'attention_mask'],
        num_rows: 5967
    })
    test: Dataset({
        features: ['id', 'method', 'clean_method', 'doc', 'comment', 'method_name', 'extra', 'imports_info', 'libraries_info', 'input_str', 'input_ids', 'tokenized_input_str', 'input_token_length', 'labels', 'tokenized_labels_str', 'labels_token_length', 'retrieved_imports_info', 'retrieved_code', 'imports', 'cluster_imports_info', 'libraries', 'attention_mask'],
        num_rows: 6002
    })
})
```
### Data Fields

The specific data fields for each tuple are delineated as follows:

- id: the unique identifier for each tuple.
- method: the original method-level code for each tuple.
- clean_method: the ground-truth method-level code for each task.
- doc: the document of method-level code for each tuple.
- comment: the natural language description for each tuple.
- method_name: the name of the method.
- extra: extra information on the code repository to which the method-level code belongs.
  - license: the license of the code repository.
  - path: the path of the code repository.
  - repo_name: the name of the code repository.
  - size: the size of the code repository.
- imports_info: the import statements for each tuple.
- libraries_info: the libraries info for each tuple.
- input_str: the design of the model input.
- input_ids: the ids of the tokenized input.
- tokenized_input_str: the tokenized input.
- input_token_length: the length of the tokenized input.
- labels: the ids of the tokenized output.
- tokenized_labels_str: the tokenized output.
- labels_token_length: the length of the tokenized output.
- retrieved_imports_info: the retrieved import statements for each tuple.
- retrieved_code: the retrieved method-level code for each tuple.
- imports: the imported packages of each import statement.
- cluster_imports_info: cluster import information of the code.
- libraries: libraries used by the code.
- attention_mask: attention mask for the input.

### Data Splits

The dataset is split into a training set, a validation set, and a test set, with 391811, 5967, and 6002 data rows, respectively.
## Additional Information

### Citation Information

```
@inproceedings{ase2023codegen4libs,
  author = {Mingwei Liu and Tianyong Yang and Yiling Lou and Xueying Du and Ying Wang and Xin Peng},
  title = {{CodeGen4Libs}: A Two-stage Approach for Library-oriented Code Generation},
  booktitle = {38th {IEEE/ACM} International Conference on Automated Software Engineering, {ASE} 2023, Kirchberg, Luxembourg, September 11-15, 2023},
  pages = {0--0},
  publisher = {{IEEE}},
  year = {2023},
}
```
mychen76/wildreceipts_ocr_eval
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 3239963.0
    num_examples: 20
  download_size: 3034931
  dataset_size: 3239963.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for "wildreceipts_ocr_eval"

See the train dataset for full detail: https://huggingface.co/datasets/mychen76/wildreceipts_ocr_train

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Boryak/Images
---
license: openrail
---
irds/mr-tydi_id_test
---
pretty_name: '`mr-tydi/id/test`'
viewer: false
source_datasets: ['irds/mr-tydi_id']
task_categories:
- text-retrieval
---

# Dataset Card for `mr-tydi/id/test`

The `mr-tydi/id/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/id/test).

# Data

This dataset provides:
- `queries` (i.e., topics); count=829
- `qrels`: (relevance assessments); count=961
- For `docs`, use [`irds/mr-tydi_id`](https://huggingface.co/datasets/irds/mr-tydi_id)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_id_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_id_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}

@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
LNTANOooo/wild_chat_en_zh_dedup_v3
---
dataset_info:
  features:
  - name: conversation
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: Chinese
    num_bytes: 324851827.80972385
    num_examples: 83541
  - name: English
    num_bytes: 1151858348.4218686
    num_examples: 218414
  download_size: 731446606
  dataset_size: 1476710176.2315924
configs:
- config_name: default
  data_files:
  - split: Chinese
    path: data/Chinese-*
  - split: English
    path: data/English-*
---
Atharva-28/mini-platypus-two
---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 4186564
    num_examples: 1000
  download_size: 2245925
  dataset_size: 4186564
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
dafun/open-web-math-clean
---
license: odc-by
---

The dataset was derived from [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) after cleaning about 1.65M garbled and non-mathematical documents. You can find the detailed cleaning process on my [blog](https://medium.com/@bansafe/a-deep-dive-into-enhancing-openwebmath-dataset-783a79faf193).
ShuhuaiRen/TimeIT
---
license: cc-by-4.0
language:
- en
---

# Dataset Card for TimeIT

TimeIT encompasses 6 longstanding timestamp-related video tasks and incorporates 12 specific datasets derived from different domains.

**[NOTE]: Please refer to [DATA.md](https://github.com/RenShuhuai-Andy/TimeChat/blob/master/docs/DATA.md) for more details on downloading and processing video data.**

## Dataset Description

- **Homepage: https://huggingface.co/datasets/ShuhuaiRen/TimeIT**
- **Repository: https://huggingface.co/datasets/ShuhuaiRen/TimeIT**
- **Paper: https://arxiv.org/abs/2312.02051**
- **Leaderboard:**
- **Point of Contact:**

## Dataset Statistics

Our dataset compiles diverse tasks of time-sensitive long video understanding, including Dense Video Captioning, Video Grounding, Video Summarization, Video Highlight Detection, Step Localization, and Transcribed Speech Generation.

### Instruction Statistics

| Task                          | #Instructions |
|-------------------------------|---------------|
| Dense Video Captioning        | 6             |
| Temporal Video Grounding      | 6             |
| Video Summarization           | 6             |
| Video Highlight Detection     | 6             |
| Step Localization             | 6             |
| Transcribed Speech Generation | 6             |
| Total                         | 36            |

### Task Statistics

| Task                          | Description                                                                                                          | #Train  |
|-------------------------------|----------------------------------------------------------------------------------------------------------------------|---------|
| Dense Video Captioning        | detects a series of events in the given video and outputs the corresponding timestamps and descriptions              | 16,342  |
| Temporal Video Grounding      | predicts a timestamp boundary including the start and end time in the video given a natural language query           | 60,471  |
| Video Summarization           | creates a compressed set of frames or clip shots to represent the most informative content of the given video        | 75      |
| Video Highlight Detection     | identifies the most exciting, impressive, or emotional moments that may not cover the full scope of the original video | 6,858   |
| Step Localization             | segments and describes significant steps in a long untrimmed video                                                   | 9,488   |
| Transcribed Speech Generation | predicts the speech content and its corresponding start and end timestamps based on visual signals in the video      | 31,627  |
| Total                         | -                                                                                                                    | 124,861 |

### Detailed Dataset Statistics

| Task                          | Dataset                | #Train |
|-------------------------------|------------------------|--------|
| Dense Video Captioning        | `ActivityNet Captions` | 10,009 |
|                               | `ViTT`                 | 5,141  |
|                               | `YouCook2`             | 1,192  |
| Temporal Video Grounding      | `DiDeMo`               | 33,002 |
|                               | `QuerYD`               | 14,602 |
|                               | `HiREST_grounding`     | 459    |
|                               | `Charades-STA`         | 12,408 |
| Video Summarization           | `TVSum`                | 50     |
|                               | `SumMe`                | 25     |
| Video Highlight Detection     | `QVHighlights`         | 6,858  |
| Step Localization             | `COIN`                 | 9,029  |
|                               | `HiREST_step`          | 459    |
| Transcribed Speech Generation | `YT-Temporal`          | 31,627 |

## Dataset Structure

### HuggingFace Login (Optional)

```python
# OR run huggingface-cli login
from huggingface_hub import login

hf_token = "hf_xxx"  # TODO: set a valid HuggingFace access token for loading datasets/models
login(token=hf_token)
```

### Data Loading

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
```

### Data Splits

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]
```

### Data Instances

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]

for train_instance in train_set:
    question = train_instance["question"]  # str
    answer = train_instance["answer"]  # str
    video_path = train_instance["video_path"]  # str
```

### Data Fields

```python
import datasets

features = datasets.Features(
    {
        "video_path": datasets.Value("string"),
        "question": datasets.Value("string"),
        "answer": datasets.Value("string"),
    }
)
```

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

| Task                          | Dataset [Citation]         | Source                                                                             |
|-------------------------------|----------------------------|------------------------------------------------------------------------------------|
| Dense Video Captioning        | `ActivityNet Captions` [1] | [Source](http://activity-net.org/download.html)                                    |
|                               | `ViTT` [2]                 | [Source](https://github.com/google-research-datasets/Video-Timeline-Tags-ViTT)     |
|                               | `YouCook2` [3]             | [Source](http://youcook2.eecs.umich.edu/)                                          |
| Temporal Video Grounding      | `DiDeMo` [4]               | [Source](https://github.com/LisaAnne/LocalizingMoments?tab=readme-ov-file#dataset) |
|                               | `QuerYD` [5]               | [Source](https://www.robots.ox.ac.uk/~vgg/data/queryd/)                            |
|                               | `HiREST_grounding` [6]     | [Source](https://github.com/j-min/HiREST)                                          |
|                               | `Charades-STA` [7]         | [Source](https://github.com/jiyanggao/TALL)                                        |
| Video Summarization           | `TVSum` [8]                | [Source](https://github.com/yalesong/tvsum)                                        |
|                               | `SumMe` [9]                | [Source](http://classif.ai/dataset/ethz-cvl-video-summe/)                          |
| Video Highlight Detection     | `QVHighlights` [10]        | [Source](https://github.com/jayleicn/moment_detr/tree/main/data)                   |
| Step Localization             | `COIN` [11]                | [Source](https://github.com/coin-dataset/annotations)                              |
|                               | `HiREST_step` [6]          | [Source](https://github.com/j-min/HiREST)                                          |
| Transcribed Speech Generation | `YT-Temporal` [12]         | [Source](https://rowanzellers.com/merlot/#data)                                    |

### Annotations

#### Annotation process

To build high-quality multimodal instruction datasets, we rewrite various datasets into a multimodal-to-text dialog format.
The annotation process includes four steps:

- (1) **Stage I: Instruction Writing**: writing instructions for each task;
- (2) **Stage II: Data Format Unification**: structuring images and texts into a unified schema;
- (3) **Stage III: Quality Check**: checking the overall dataset quality;
- (4) **Stage IV: Key Datasets Translation**: building multilingual sets.

#### Who are the annotators?

Three authors of this work are employed as human annotators, each of whom is a graduate student familiar with relevant literature.

## Additional Information

### Licensing Information

The content of each original dataset follows its original license. For tasks with an Unknown/Custom license, we suggest that the user check the original project or contact the dataset owner for detailed license information.

Our annotated instruction data is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

```bibtex
@article{Ren2023TimeChat,
  title={TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding},
  author={Shuhuai Ren and Linli Yao and Shicheng Li and Xu Sun and Lu Hou},
  journal={ArXiv},
  year={2023},
  volume={abs/2312.02051},
}
```

### Contributions

TimeIT is a video-centric instruction-tuning dataset involving timestamps, designed to enable the development of general-purpose video agents.
## References

- [1] Dense-Captioning Events in Videos
- [2] Multimodal Pretraining for Dense Video Captioning
- [3] Towards Automatic Learning of Procedures from Web Instructional Videos
- [4] Localizing Moments in Video with Natural Language
- [5] QuerYD: A video dataset with high-quality text and audio narrations
- [6] Hierarchical Video-Moment Retrieval and Step-Captioning
- [7] TALL: Temporal Activity Localization via Language Query
- [8] TVSum: Summarizing Web Videos Using Titles
- [9] Creating Summaries from User Videos
- [10] QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries
- [11] COIN: A Large-scale Dataset for Comprehensive Instructional Video Analysis
- [12] MERLOT: Multimodal Neural Script Knowledge Models
rntc/blurb_bc5chem_aug_-template_false
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: type
    dtype: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B
          '2': I
  splits:
  - name: train
    num_bytes: 10361965
    num_examples: 4560
  - name: validation
    num_bytes: 10913316
    num_examples: 4581
  - name: test
    num_bytes: 10037457
    num_examples: 4797
  download_size: 4749221
  dataset_size: 31312738
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---