id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
Goorm-AI-04/Drone_RCS_Image | 2023-09-16T10:50:16.000Z | [
"region:us"
] | Goorm-AI-04 | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: rcs_image
dtype: image
- name: drone_type
dtype: string
- name: frequency
dtype: int64
splits:
- name: train
num_bytes: 31214190.0
num_examples: 240
download_size: 31215528
dataset_size: 31214190.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Drone_RCS_Image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adityarra07/master_test | 2023-09-16T17:03:26.000Z | [
"region:us"
] | adityarra07 | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 337025121.8032651
num_examples: 2000
download_size: 330351099
dataset_size: 337025121.8032651
---
# Dataset Card for "master_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
boardsec/yara_dataset_v2 | 2023-09-17T00:35:14.000Z | [
"region:us"
] | boardsec | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: Chunk
dtype: string
- name: yara_rule
dtype: string
- name: cleaned_yara_rule
dtype: string
splits:
- name: train
num_bytes: 36039
num_examples: 67
download_size: 15832
dataset_size: 36039
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "yara_dataset_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nRuaif/ChaiML_feedback | 2023-09-17T04:50:30.000Z | [
"region:us"
] | nRuaif | null | null | null | 0 | 24 | Entry not found |
kewu93/three_styles_prompted | 2023-09-20T03:08:50.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 59921589.0
num_examples: 2100
- name: val
num_bytes: 25922766.5
num_examples: 900
download_size: 84801147
dataset_size: 85844355.5
---
# Dataset Card for "three_styles_prompted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thanhduycao/soict_train_dataset | 2023-09-21T15:05:06.000Z | [
"region:us"
] | thanhduycao | null | null | null | 0 | 24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: intent
dtype: string
- name: sentence_annotation
dtype: string
- name: entities
list:
- name: type
dtype: string
- name: filler
dtype: string
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: origin_transcription
dtype: string
- name: sentence_norm
dtype: string
- name: sentence_norm_v2
dtype: string
splits:
- name: train
num_bytes: 3484626224
num_examples: 6729
- name: test
num_bytes: 390303091
num_examples: 748
download_size: 918877822
dataset_size: 3874929315
---
# Dataset Card for "soict_train_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SebastianMoncaleano/cammel | 2023-09-22T04:02:14.000Z | [
"region:us"
] | SebastianMoncaleano | null | null | null | 0 | 24 | Entry not found |
yuanmei424/fonts_sample | 2023-09-24T09:22:12.000Z | [
"region:us"
] | yuanmei424 | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: edit_prompt
dtype: string
- name: input_image
dtype: image
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 175755314.75
num_examples: 18197
download_size: 148960813
dataset_size: 175755314.75
---
# Dataset Card for "fonts_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kinianlo/wikipedia_pos_tagged | 2023-09-30T21:41:55.000Z | [
"region:us"
] | kinianlo | null | null | null | 2 | 24 | ---
dataset_info:
- config_name: 20220301_en_nltk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 88585221192
num_examples: 6458670
download_size: 3527644902
dataset_size: 88585221192
- config_name: 20220301_en_nltk_tags_only
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 68920385173
num_examples: 6458670
download_size: 0
dataset_size: 68920385173
- config_name: 20220301_simple_nltk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 1000903680
num_examples: 205328
download_size: 286763992
dataset_size: 1000903680
- config_name: 20220301_simple_nltk_tags_only
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 783729741
num_examples: 205328
download_size: 161414334
dataset_size: 783729741
- config_name: 20220301_simple_spacy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 1131814443
num_examples: 205328
download_size: 289479815
dataset_size: 1131814443
- config_name: 20220301_simple_spacy_tags_only
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: pos_tags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 914640504
num_examples: 205328
download_size: 164284823
dataset_size: 914640504
configs:
- config_name: 20220301_en_nltk
data_files:
- split: train
path: 20220301_en_nltk/train-*
- config_name: 20220301_en_nltk_tags_only
data_files:
- split: train
path: 20220301_en_nltk_tags_only/train-*
- config_name: 20220301_simple_nltk
data_files:
- split: train
path: 20220301_simple_nltk/train-*
- config_name: 20220301_simple_nltk_tags_only
data_files:
- split: train
path: 20220301_simple_nltk_tags_only/train-*
- config_name: 20220301_simple_spacy
data_files:
- split: train
path: 20220301_simple_spacy/train-*
- config_name: 20220301_simple_spacy_tags_only
data_files:
- split: train
path: 20220301_simple_spacy_tags_only/train-*
---
# Dataset Card for "wikipedia_pos_tagged"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liveaverage/nvcf-test | 2023-09-26T00:38:07.000Z | [
"region:us"
] | liveaverage | null | null | null | 0 | 24 | |
ArwaAbdul/Fingerprint_split_90_10 | 2023-09-28T12:14:02.000Z | [
"region:us"
] | ArwaAbdul | null | null | null | 0 | 24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
splits:
- name: train
num_bytes: 504155396.6682027
num_examples: 3000
- name: test
num_bytes: 77898517.33179724
num_examples: 472
download_size: 337755809
dataset_size: 582053914.0
---
# Dataset Card for "Fingerprint_split_90_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zooxsmartufpb/dataset_complete3 | 2023-09-28T21:28:09.000Z | [
"region:us"
] | zooxsmartufpb | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 81060969
num_examples: 46099
download_size: 8042824
dataset_size: 81060969
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_complete3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chrisgru/chat-v2.3 | 2023-09-29T11:28:08.000Z | [
"region:us"
] | chrisgru | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 1929381
num_examples: 500
- name: train
num_bytes: 6752911
num_examples: 4386
download_size: 3989528
dataset_size: 8682292
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
# Dataset Card for "chat-v2.3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hannxu/hc_var | 2023-10-03T16:33:15.000Z | [
"task_categories:text-classification",
"size_categories:100M<n<1B",
"language:en",
"license:apache-2.0",
"arxiv:2310.01307",
"region:us"
] | hannxu | null | null | null | 1 | 24 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 100M<n<1B
---
# Dataset Card for HC-Var (Human and ChatGPT Texts with Variety)
This is a collection of human-written texts and ChatGPT (GPT-3.5-Turbo) generated texts, intended to facilitate studies such as generated-text detection.
It includes texts that are generated or human-written to accomplish various language tasks with various approaches.
The included language tasks and topics are summarized below. Note: for each language task, this dataset uses 3 different prompts to elicit ChatGPT outputs.
Example code to train binary classification models is available in [this repository](https://github.com/hannxu123/hc_var).
A technical report on some representative detection methods can be found in [this paper](https://arxiv.org/abs/2310.01307).
This dataset was collected by Han Xu from Michigan State
University. Potential issues and suggestions are welcome in the community panel or by email to xuhan1@msu.edu.
## Key variables in the dataset:
**text**: The text body (including either human or ChatGPT texts.)\
**domain**: The language tasks included in this dataset: News, Review, (Essay) Writing, QA\
**topic**: The topic in each task.\
**prompt**: The prompt used to obtain ChatGPT outputs. "N/A" for human texts.\
**pp_id**: Each task has 3 prompts used to elicit ChatGPT outputs. The "pp_id" denotes the index of the prompt: "0" for human texts, "1-3" for ChatGPT texts.\
**label**: "0" for human texts. "1" for ChatGPT texts.
## To cite this dataset
```
@misc{xu2023generalization,
title={On the Generalization of Training-based ChatGPT Detection Methods},
author={Han Xu and Jie Ren and Pengfei He and Shenglai Zeng and Yingqian Cui and Amy Liu and Hui Liu and Jiliang Tang},
year={2023},
eprint={2310.01307},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
shossain/govreport-qa-5-4096 | 2023-10-03T19:36:20.000Z | [
"region:us"
] | shossain | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 266300
num_examples: 5
download_size: 71798
dataset_size: 266300
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-5-4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AlanRobotics/lima-processed | 2023-10-03T20:49:49.000Z | [
"region:us"
] | AlanRobotics | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: user
dtype: string
- name: assistant
dtype: string
splits:
- name: train
num_bytes: 2868376
num_examples: 1030
download_size: 1682336
dataset_size: 2868376
---
# Dataset Card for "lima-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sharka/CIVQA_easyocr_simple_valid_2 | 2023-10-04T09:39:42.000Z | [
"region:us"
] | Sharka | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: id
dtype: string
- name: words
sequence: string
- name: answers
dtype: string
- name: bboxes
sequence:
sequence: float32
- name: answers_bboxes
sequence:
sequence: float32
- name: questions
dtype: string
- name: image
dtype: string
splits:
- name: validation
num_bytes: 31568299194
num_examples: 34159
download_size: 10965715031
dataset_size: 31568299194
---
# Dataset Card for "CIVQA_easyocr_simple_valid_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lecslab/glosslm | 2023-10-10T02:00:50.000Z | [
"region:us"
] | lecslab | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: ID
dtype: string
- name: glottocode
dtype: string
- name: transcription
dtype: string
- name: glosses
dtype: string
- name: translation
dtype: string
- name: metalang_glottocode
dtype: string
- name: is_segmented
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 92191507
num_examples: 451407
download_size: 31679783
dataset_size: 92191507
---
# Dataset Card for "glosslm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zkdeng/dangerousSpiders | 2023-10-05T00:49:18.000Z | [
"region:us"
] | zkdeng | null | null | null | 0 | 24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Acantholycosa_lignaria
'1': Aglaoctenus_castaneus
'2': Aglaoctenus_lagotis
'3': Allocosa_funerea
'4': Allotrochosina_schauinslandi
'5': Alopecosa_albofasciata
'6': Alopecosa_barbipes
'7': Alopecosa_cuneata
'8': Alopecosa_inquilina
'9': Alopecosa_kochi
'10': Alopecosa_pulverulenta
'11': Anahita_punctulata
'12': Ancylometes_bogotensis
'13': Ancylometes_concolor
'14': Ancylometes_rufus
'15': Anoteropsis_hilaris
'16': Anoteropsis_litoralis
'17': Araneus_diadematus
'18': Arctosa_cinerea
'19': Arctosa_leopardus
'20': Arctosa_littoralis
'21': Arctosa_perita
'22': Arctosa_personata
'23': Asthenoctenus_borellii
'24': Aulonia_albimana
'25': Centroctenus_brevipes
'26': Cheiracanthium_erraticum
'27': Cheiracanthium_gracile
'28': Cheiracanthium_inclusum
'29': Cheiracanthium_mildei
'30': Cheiracanthium_punctorium
'31': Ctenus_amphora
'32': Ctenus_hibernalis
'33': Ctenus_medius
'34': Ctenus_ornatus
'35': Cupiennius_coccineus
'36': Cupiennius_getazi
'37': Cupiennius_salei
'38': Diapontia_uruguayensis
'39': Eratigena_agrestis
'40': Geolycosa_vultuosa
'41': Gladicosa_gulosa
'42': Gladicosa_pulchra
'43': Hippasa_holmerae
'44': Hogna_antelucana
'45': Hogna_baltimoriana
'46': Hogna_bivittata
'47': Hogna_carolinensis
'48': Hogna_crispipes
'49': Hogna_frondicola
'50': Hogna_gumia
'51': Hogna_radiata
'52': Lampona_cylindrata
'53': Latrodectus_bishopi
'54': Latrodectus_curacaviensis
'55': Latrodectus_geometricus
'56': Latrodectus_hasselti
'57': Latrodectus_hesperus
'58': Latrodectus_katipo
'59': Latrodectus_mactans
'60': Latrodectus_mirabilis
'61': Latrodectus_renivulvatus
'62': Latrodectus_tredecimguttatus
'63': Latrodectus_variolus
'64': Loxosceles_amazonica
'65': Loxosceles_deserta
'66': Loxosceles_laeta
'67': Loxosceles_reclusa
'68': Loxosceles_rufescens
'69': Loxosceles_tenochtitlan
'70': Loxosceles_yucatana
'71': Lycosa_erythrognatha
'72': Lycosa_hispanica
'73': Lycosa_pampeana
'74': Lycosa_praegrandis
'75': Lycosa_singoriensis
'76': Lycosa_tarantula
'77': Missulena_bradleyi
'78': Missulena_occatoria
'79': Paratrochosina_amica
'80': Pardosa_amentata
'81': Pardosa_lapidicina
'82': Pardosa_mercurialis
'83': Pardosa_moesta
'84': Pardosa_wagleri
'85': Phoneutria_boliviensis
'86': Phoneutria_depilata
'87': Phoneutria_fera
'88': Phoneutria_nigriventer
'89': Phoneutria_pertyi
'90': Phoneutria_reidyi
'91': Pirata_piraticus
'92': Portacosa_cinerea
'93': Rabidosa_hentzi
'94': Rabidosa_punctulata
'95': Rabidosa_rabida
'96': Schizocosa_avida
'97': Schizocosa_malitiosa
'98': Schizocosa_mccooki
'99': Sicarius_thomisoides
'100': Sosippus_californicus
'101': Tigrosa_annexa
'102': Tigrosa_aspersa
'103': Tigrosa_georgicola
'104': Tigrosa_helluo
'105': Trochosa_ruricola
'106': Trochosa_sepulchralis
'107': Trochosa_terricola
'108': Tropicosa_moesta
'109': Venator_immansuetus
'110': Venator_spenceri
'111': Venatrix_furcillata
'112': Wadicosa_fidelis
'113': Xerolycosa_miniata
'114': Xerolycosa_nemoralis
splits:
- name: train
num_bytes: 4290587998.03
num_examples: 166895
download_size: 3551438155
dataset_size: 4290587998.03
---
# Dataset Card for "dangerousSpiders"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mozci/typedb | 2023-10-07T04:27:41.000Z | [
"license:afl-3.0",
"region:us"
] | mozci | null | null | null | 0 | 24 | ---
license: afl-3.0
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 93126573.782
num_examples: 6001
download_size: 36065061
dataset_size: 93126573.782
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
A type-specimen dataset containing specimens of 65 typefaces and corresponding captions.
Example:

The letter M written with Monotype Old Style typeface. Hellenic, anno 1980, Sans Serif, OT, Cyrillic, OpenType, Greek, European language support, W1G, Readable, Business, Office, Greek-OpenType, Newsletters, Text, 1980s, Newspaper, Squared, magazines, 80s, Glyphic, Pro, EU-Fonts, Cyrillic -OpenType, OpenType Pro
|
ismailiismail/paragraphss_paraphrasing | 2023-10-07T19:59:35.000Z | [
"region:us"
] | ismailiismail | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: phrase
dtype: string
- name: paraphrase
dtype: string
splits:
- name: train
num_bytes: 1848761
num_examples: 1000
download_size: 963985
dataset_size: 1848761
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "paragraphss_paraphrasing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ckail/Needy_Girl_Overdose | 2023-10-08T08:42:38.000Z | [
"license:gpl-3.0",
"region:us"
] | Ckail | null | null | null | 0 | 24 | ---
license: gpl-3.0
---
|
AlekseyKorshuk/gambling-rewritten-new-130 | 2023-10-10T17:10:52.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: train
num_bytes: 2306196
num_examples: 127
download_size: 1354179
dataset_size: 2306196
---
# Dataset Card for "gambling-rewritten-new-130"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
emre/Open_SLR108_Turkish_10_hours | 2022-12-06T21:00:45.000Z | [
"license:cc-by-4.0",
"robust-speech-event",
"arxiv:2103.16193",
"region:us"
] | emre | null | null | null | 3 | 23 | ---
license: cc-by-4.0
tags:
- robust-speech-event
datasets:
- MediaSpeech
---
MediaSpeech
Identifier: SLR108
Summary: French, Arabic, Turkish and Spanish media speech datasets
Category: Speech
License: the dataset is distributed under the Creative Commons Attribution 4.0 International License.
About this resource:
MediaSpeech is a dataset of French, Arabic, Turkish and Spanish media speech built to test the performance of Automated Speech Recognition (ASR) systems. The dataset contains 10 hours of speech for each language provided.
The dataset consists of short speech segments automatically extracted from media videos available on YouTube and manually transcribed, with some pre- and post-processing.
Baseline models and wav version of the dataset can be found in the following git repository: https://github.com/NTRLab/MediaSpeech
@misc{mediaspeech2021,
title={MediaSpeech: Multilanguage ASR Benchmark and Dataset},
author={Rostislav Kolobov and Olga Okhapkina and Olga Omelchishina, Andrey Platunov and Roman Bedyakin and Vyacheslav Moshkin and Dmitry Menshikov and Nikolay Mikhaylovskiy},
year={2021},
eprint={2103.16193},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
|
vocab-transformers/wiki-en-passages-20210101 | 2022-02-24T17:09:32.000Z | [
"region:us"
] | vocab-transformers | null | null | null | 0 | 23 | # wiki-en-passages-20210101
This is a processed dump of the English Wikipedia from 2021-01-01. Each page has been split into paragraphs as they appear in the text. Lists, tables and headlines have been removed. In total it contains 38,080,804 passages.
Further, each article carries metadata on the number of languages it exists in and the number of views it received over a one-year period.
The articles are sorted from most popular (most languages available, most views) to least popular.
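A minimal inspection sketch follows; it is not part of the original card, and it assumes the dump can be streamed with the 🤗 Datasets library (the field names are not documented here, so the loop just prints raw records):
```python
# Minimal sketch: stream a few records to inspect the schema.
# Assumption: the repository is loadable via load_dataset.
from datasets import load_dataset

passages = load_dataset(
    "vocab-transformers/wiki-en-passages-20210101",
    split="train",
    streaming=True,
)
for i, record in enumerate(passages):
    print(record)  # inspect the actual field names on the first records
    if i == 2:
        break
```
|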
billray110/corpus-of-diverse-styles | 2022-10-22T00:52:53.000Z | [
"task_categories:text-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"arxiv:2010.05700",
"region:us"
] | billray110 | null | null | null | 3 | 23 | ---
annotations_creators: []
language_creators:
- found
language: []
license: []
multilinguality:
- monolingual
pretty_name: Corpus of Diverse Styles
size_categories:
- 10M<n<100M
source_datasets: []
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for Corpus of Diverse Styles
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
## Disclaimer
I am not the original author of the paper that presents the Corpus of Diverse Styles. I uploaded the dataset to HuggingFace as a convenience.
## Dataset Description
- **Homepage:** http://style.cs.umass.edu/
- **Repository:** https://github.com/martiansideofthemoon/style-transfer-paraphrase
- **Paper:** https://arxiv.org/abs/2010.05700
### Dataset Summary
A new benchmark dataset that contains 15M
sentences from 11 diverse styles.
To create CDS, we obtain data from existing academic
research datasets and public APIs or online collections
like Project Gutenberg. We choose
styles that are easy for human readers to identify at
a sentence level (e.g., Tweets or Biblical text). While
prior benchmarks involve a transfer between two
styles, CDS has 110 potential transfer directions.
### Citation Information
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` |
khalidalt/HuffPost | 2023-05-19T18:35:08.000Z | [
"license:cc0-1.0",
"region:us"
] | khalidalt | A dataset of approximately 200K news headlines from the year 2012 to 2018 collected from HuffPost. | @book{book,
author = {Misra, Rishabh and Grover, Jigyasa},
year = {2021},
month = {01},
pages = {},
title = {Sculpting Data for ML: The first act of Machine Learning},
isbn = {978-0-578-83125-1}
}
@dataset{dataset,
author = {Misra, Rishabh},
year = {2018},
month = {06},
pages = {},
title = {News Category Dataset},
doi = {10.13140/RG.2.2.20331.18729}
} | null | 0 | 23 | ---
license: cc0-1.0
---
# Dataset Card for HuffPost
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:https://www.kaggle.com/datasets/rmisra/news-category-dataset/metadata**
### Dataset Summary
A dataset of approximately 200K news headlines from the year 2012 to 2018 collected from HuffPost.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
license: cc0-1.0
### Citation Information
```
@book{book,
author = {Misra, Rishabh and Grover, Jigyasa},
year = {2021},
month = {01},
pages = {},
title = {Sculpting Data for ML: The first act of Machine Learning},
isbn = {978-0-578-83125-1}
}
@dataset{dataset,
author = {Misra, Rishabh},
year = {2018},
month = {06},
pages = {},
title = {News Category Dataset},
doi = {10.13140/RG.2.2.20331.18729}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
ibm/vira-intents | 2022-06-01T07:39:11.000Z | [
"region:us"
] | ibm | null | null | null | 1 | 23 | The COVID-19 Vaccine Intent Expressions dataset contains 7,990 varying expressions for common questions about COVID-19 vaccines.
We collaborated with a team at Johns Hopkins University to curate a list of 181 such common questions.
We then showed annotators a question from the list and asked them to express it in their own words, imagining they were chatting with a knowledgeable friend.
A subset of 324 expressions in this dataset are utterances taken from VIRADialogs, a dataset of conversations of users with a chatbot about COVID-19 vaccines.
The data is split into 3 files: train.csv, dev.csv, and test.csv.
Each file contains the following columns:
1. text - the expression written by an annotator (or taken from VIRADialogs)
2. label - the running class index associated with this expression
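A minimal loading sketch, not part of the original dataset description; the file paths are assumptions:
```python
# Minimal sketch: load the three CSV splits with pandas.
# Assumption: the files sit in the working directory.
import pandas as pd

train = pd.read_csv("train.csv")  # columns: text, label
dev = pd.read_csv("dev.csv")
test = pd.read_csv("test.csv")

print(train["label"].nunique(), "intent classes")  # up to 181 classes
print(train.iloc[0]["text"])
```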
If you use this dataset please cite:
Benchmark Data and Evaluation Framework for Intent Discovery Around COVID-19 Vaccine Hesitancy
Shai Gretz, Assaf Toledo, Roni Friedman, Dan Lahav, Rose Weeks, Naor Bar-Zeev, João Sedoc, Pooja Sangha, Yoav Katz, Noam Slonim.
arXiv. 2022.
============================
License: Community Data License Agreement - Sharing - Version 1.0
https://cdla.dev/sharing-1-0/
This dataset contains parts of VIRADialogs as-is. All credit for VIRADialogs belongs to Johns Hopkins University; they are the sole owners of VIRADialogs. VIRADialogs is available at vaxchat.org/research. |
rjac/all-the-news-2-1-Component-one | 2022-07-28T21:01:39.000Z | [
"annotations_creators:Andrew Thompson",
"annotations_creators:components.one",
"language:en",
"region:us"
] | rjac | null | null | null | 0 | 23 | ---
annotations_creators:
- Andrew Thompson
- components.one
language:
- en
---
# 2.7 million news articles and essays
## Table of Contents
- [Dataset Description](#dataset-description)
## Dataset Description
2.7 million news articles and essays from 27 American publications. Includes date, title, publication, article text, publication name, year, month, and URL (for some). Articles mostly span from 2016 to early 2020.
- Type: CSV
- Size: 3.4 GB compressed, 8.8 GB uncompressed
- Created by: Andrew Thompson
- Date added: 4/3/2020
- Date modified: 4/3/2020
- source: [Component one Datasets 2.7 Millions](https://components.one/datasets/all-the-news-2-news-articles-dataset)
- Date of Download and processed: 19/6/2022
- The header was modified with the respective column names
- Row number 2,324,812 was removed
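A minimal reading sketch, not part of the original description; the file name and column names below are assumptions based on the field list above:
```python
# Minimal sketch: read the CSV with pandas.
# Assumptions: the file name and the "date"/"publication" column names.
import pandas as pd

df = pd.read_csv("all-the-news-2-1.csv", parse_dates=["date"])
print(df.shape)  # roughly 2.7 million rows
print(df["publication"].value_counts().head())
```
|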
projecte-aina/catalanqa | 2023-09-13T12:45:53.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ca",
"license:cc-by-sa-4.0",
"arxiv:1606.05250",
"region:us"
] | projecte-aina | CatalanQA: an extractive QA dataset from original Catalan Sources: Wikipedia and VilaWeb newswire.
It is an aggregation and balancing of 2 previous datasets, VilaQuAD and ViquiQuAD, which are described in their respective dataset cards.
This dataset can be used to build extractive-QA systems and Language Models.
Splits have been balanced by kind of question, and unlike other datasets such as SQuAD, it contains, per record, only one question and one answer for each context, although contexts can repeat multiple times.
- test.json contains 2135 question/answer pairs
- train.json contains 17135 question/answer pairs
- dev.json contains 2157 question/answer pairs
Funded by the Generalitat de Catalunya, Departament de Polítiques Digitals i Administració Pública (AINA),
and Plan de Impulso de las Tecnologías del Lenguaje (Plan TL). | None | null | 1 | 23 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: catalanqa
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# Dataset Card for CatalanQA
## Dataset Description
- **Homepage:** https://github.com/projecte-aina
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
This dataset can be used to build extractive-QA and Language Models. It is an aggregation and balancing of 2 previous datasets: [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) and [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad).
Splits have been balanced by kind of question, and unlike other datasets like [SQuAD](http://arxiv.org/abs/1606.05250), it only contains, per record, one question and one answer for each context, although the contexts can repeat multiple times.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Extractive-QA, Language Model.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
"title": "Els 521 policies espanyols amb més mala nota a les oposicions seran enviats a Catalunya",
"paragraphs": [
{
"context": "El Ministeri d'Interior espanyol enviarà a Catalunya els 521 policies espanyols que han obtingut més mala nota a les oposicions. Segons que explica El País, hi havia mig miler de places vacants que s'havien de cobrir, però els agents amb més bones puntuacions han elegit destinacions diferents. En total van aprovar les oposicions 2.600 aspirants. D'aquests, en seran destinats al Principat 521 dels 560 amb més mala nota. Per l'altra banda, entre els 500 agents amb més bona nota, només 8 han triat Catalunya. Fonts de la policia espanyola que esmenta el diari ho atribueixen al procés d'independència, al Primer d'Octubre i a la 'situació social' que se'n deriva.",
"qas": [
{
"question": "Quants policies enviaran a Catalunya?",
"id": "0.5961700408283691",
"answers": [
{
"text": "521",
"answer_start": 57
}
]
}
]
}
]
},
```
### Data Fields
Follows [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets:
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the article.
- `context` (str): Article text.
- `question` (str): Question.
- `answers` (list): Answer to the question, containing:
- `text` (str): Span text answering the question.
- `answer_start` (int): Starting offset of the span text answering the question.
### Data Splits
- train.json: 17135 question/answer pairs
- dev.json: 2157 question/answer pairs
- test.json: 2135 question/answer pairs
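A minimal loading sketch, not part of the original card; it assumes the dataset loads with the SQuAD-style fields listed above:
```python
# Minimal sketch (assumption: standard load_dataset access with the
# fields documented in "Data Fields").
from datasets import load_dataset

catalanqa = load_dataset("projecte-aina/catalanqa")
example = catalanqa["train"][0]
print(example["question"])
print(example["answers"])  # span text plus answer_start offset
```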
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
- [VilaWeb](https://www.vilaweb.cat/) and [Catalan Wikipedia](https://ca.wikipedia.org).
#### Initial Data Collection and Normalization
This dataset is a balanced aggregation from [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
#### Who are the source language producers?
Volunteers from [Catalan Wikipedia](https://ca.wikipedia.org) and professional journalists from [VilaWeb](https://www.vilaweb.cat/).
### Annotations
#### Annotation process
We did an aggregation and balancing from [ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad) and [VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad) datasets.
To annotate those datasets, we commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al., 2016)](http://arxiv.org/abs/1606.05250).
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
#### Who are the annotators?
Annotation was commissioned by a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information is included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Contributions
[N/A] |
embedding-data/altlex | 2022-08-02T01:53:24.000Z | [
"language:en",
"license:mit",
"region:us"
] | embedding-data | null | null | null | 0 | 23 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/altlex
pretty_name: altlex
---
# Dataset Card for "altlex"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/chridey/altlex](https://github.com/chridey/altlex)
- **Repository:** [More Information Needed](https://github.com/chridey/altlex)
- **Paper:** [https://aclanthology.org/P16-1135.pdf](https://aclanthology.org/P16-1135.pdf)
- **Point of Contact:** [Christopher Hidey](ch3085@columbia.edu)
### Dataset Summary
Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles."
Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains a pair of similar sentences and is formatted as a dictionary with the key "set" whose value is the list of sentences:
```
{"set": [sentence_1, sentence_2]}
{"set": [sentence_1, sentence_2]}
...
{"set": [sentence_1, sentence_2]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/altlex")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 112696
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/chridey/altlex)
#### Who are the source language producers?
[More Information Needed](https://github.com/chridey/altlex)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/chridey/altlex)
#### Who are the annotators?
[More Information Needed](https://github.com/chridey/altlex)
### Personal and Sensitive Information
[More Information Needed](https://github.com/chridey/altlex)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/chridey/altlex)
### Discussion of Biases
[More Information Needed](https://github.com/chridey/altlex)
### Other Known Limitations
[More Information Needed](https://github.com/chridey/altlex)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/chridey/altlex)
### Licensing Information
[More Information Needed](https://github.com/chridey/altlex)
### Citation Information
### Contributions
- [@chridey](https://github.com/chridey/altlex/commits?author=chridey) for adding this dataset to Github.
---
|
frgfm/imagewoof | 2022-12-11T22:26:18.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:apache-2.0",
"region:us"
] | frgfm | Imagewoof is a subset of 10 classes from Imagenet that aren't so
easy to classify, since they're all dog breeds. The breeds are:
Australian terrier, Border terrier, Samoyed, Beagle, Shih-Tzu,
English foxhound, Rhodesian ridgeback, Dingo, Golden retriever,
Old English sheepdog. | @software{Howard_Imagewoof_2019,
title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette#imagewoof}
} | null | 2 | 23 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality: []
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- image-classification
task_ids: []
paperswithcode_id: imagewoof
pretty_name: Imagewoof
---
# Dataset Card for Imagewoof
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/fastai/imagenette#imagewoof
- **Repository:** https://github.com/fastai/imagenette
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagewoof
### Dataset Summary
A smaller subset of 10 classes from [Imagenet](https://huggingface.co/datasets/imagenet-1k#dataset-summary) that aren't so easy to classify, since they're all dog breeds.
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward), and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
A data point comprises an image URL and its classification label.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=320x320 at 0x19FA12186D8>,
'label': 'Beagle',
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image.
- `label`: the expected class label of the image.
### Data Splits
| |train|validation|
|---------|----:|---------:|
|imagewoof| 9025| 3929|
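A minimal loading sketch, not part of the original card; the config name is an assumption, so check the repository for the available image sizes:
```python
# Minimal sketch (assumption: a "full_size" config exists, as in the
# sibling Imagenette repository).
from datasets import load_dataset

imagewoof = load_dataset("frgfm/imagewoof", "full_size")
sample = imagewoof["train"][0]
print(sample["label"])        # integer class index
print(sample["image"].size)   # PIL.Image.Image dimensions
```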
## Dataset Creation
### Curation Rationale
cf. https://huggingface.co/datasets/imagenet-1k#curation-rationale
### Source Data
#### Initial Data Collection and Normalization
Imagewoof is a subset of [ImageNet](https://huggingface.co/datasets/imagenet-1k). Information about data collection of the source data can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization).
### Annotations
#### Annotation process
cf. https://huggingface.co/datasets/imagenet-1k#annotation-process
#### Who are the annotators?
cf. https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators
### Personal and Sensitive Information
cf. https://huggingface.co/datasets/imagenet-1k#personal-and-sensitive-information
## Considerations for Using the Data
### Social Impact of Dataset
cf. https://huggingface.co/datasets/imagenet-1k#social-impact-of-dataset
### Discussion of Biases
cf. https://huggingface.co/datasets/imagenet-1k#discussion-of-biases
### Other Known Limitations
cf. https://huggingface.co/datasets/imagenet-1k#other-known-limitations
## Additional Information
### Dataset Curators
cf. https://huggingface.co/datasets/imagenet-1k#dataset-curators
and Jeremy Howard
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Howard_Imagewoof_2019,
title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette#imagewoof}
}
```
### Contributions
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward) and published on [Github](https://github.com/fastai/imagenette). It was then only integrated into HuggingFace Datasets by [@frgfm](https://huggingface.co/frgfm).
|
allenai/multixscience_dense_oracle | 2022-11-18T19:57:37.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | null | 1 | 23 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by documents retrieved with a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
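As an illustration only (the actual pipeline used PyTerrier; this standalone sketch reproduces the scoring step with Contriever's mean pooling, and the texts are placeholders):
```python
# Minimal sketch of dense retrieval with facebook/contriever-msmarco.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/contriever-msmarco")
model = AutoModel.from_pretrained("facebook/contriever-msmarco")

def embed(texts):
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)  # mean pooling over tokens

query_emb = embed(["a related_work paragraph used as the query"])
doc_embs = embed(["candidate abstract one", "candidate abstract two"])
scores = query_emb @ doc_embs.T  # dot-product relevance scores
k = 1  # "oracle": k equals the original number of input documents
print(scores.topk(k).indices)
```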
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.2005 | 0.2005 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.2026 | 0.2026 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.2081 | 0.2081 | |
bigbio/mlee | 2022-12-22T15:45:39.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-3.0",
"region:us"
] | bigbio | MLEE is an event extraction corpus consisting of manually annotated abstracts of papers
on angiogenesis. It contains annotations for entities, relations, events and coreferences.
The annotations span molecular, cellular, tissue, and organ-level processes. | @article{pyysalo2012event,
title={Event extraction across multiple levels of biological organization},
author={Pyysalo, Sampo and Ohta, Tomoko and Miwa, Makoto and Cho, Han-Cheol and Tsujii, Jun'ichi and Ananiadou, Sophia},
journal={Bioinformatics},
volume={28},
number={18},
pages={i575--i581},
year={2012},
publisher={Oxford University Press}
} | null | 0 | 23 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_SA_3p0
pretty_name: MLEE
homepage: http://www.nactem.ac.uk/MLEE/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for MLEE
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/MLEE/
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE,COREF
MLEE is an event extraction corpus consisting of manually annotated abstracts of papers
on angiogenesis. It contains annotations for entities, relations, events and coreferences.
The annotations span molecular, cellular, tissue, and organ-level processes.
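A minimal loading sketch, not part of the original card; the config name follows the usual BigBIO `<dataset>_bigbio_kb` convention, which is an assumption here:
```python
# Minimal sketch (assumption: standard BigBIO schema and config names).
from datasets import load_dataset

mlee = load_dataset("bigbio/mlee", name="mlee_bigbio_kb")
doc = mlee["train"][0]
print(doc["entities"][:2])  # entity annotations
print(doc["events"][:2])    # event annotations
```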
## Citation Information
```
@article{pyysalo2012event,
title={Event extraction across multiple levels of biological organization},
author={Pyysalo, Sampo and Ohta, Tomoko and Miwa, Makoto and Cho, Han-Cheol and Tsujii, Jun'ichi and Ananiadou, Sophia},
journal={Bioinformatics},
volume={28},
number={18},
pages={i575--i581},
year={2012},
publisher={Oxford University Press}
}
```
|
bigbio/scai_chemical | 2022-12-22T15:46:32.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | SCAI Chemical is a corpus of MEDLINE abstracts that has been annotated
to give an overview of the different chemical name classes
found in MEDLINE text. | @inproceedings{kolarik:lrec-ws08,
author = {Kol{\'a}{\vr}ik, Corinna and Klinger, Roman and Friedrich, Christoph M and Hofmann-Apitius, Martin and Fluck, Juliane},
title = {Chemical Names: {T}erminological Resources and Corpora Annotation},
booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
year = {2008},
} | null | 1 | 23 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: SCAI Chemical
homepage: https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for SCAI Chemical
## Dataset Description
- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
SCAI Chemical is a corpus of MEDLINE abstracts that has been annotated
to give an overview of the different chemical name classes
found in MEDLINE text.
## Citation Information
```
@inproceedings{kolarik:lrec-ws08,
author = {Kol{\'a}{\vr}ik, Corinna and Klinger, Roman and Friedrich, Christoph M and Hofmann-Apitius, Martin and Fluck, Juliane},
title = {Chemical Names: {T}erminological Resources and Corpora Annotation},
booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
year = {2008},
}
```
|
texturedesign/td01_natural-ground-textures | 2023-09-02T10:21:04.000Z | [
"task_categories:unconditional-image-generation",
"annotations_creators:expert-generated",
"size_categories:n<1K",
"source_datasets:original",
"license:cc-by-nc-4.0",
"texture-synthesis",
"photography",
"non-infringing",
"region:us"
] | texturedesign | null | null | null | 3 | 23 | ---
annotations_creators:
- expert-generated
language: []
language_creators: []
license:
- cc-by-nc-4.0
multilinguality: []
pretty_name: 'TD01: Natural Ground Texture Photos'
size_categories:
- n<1K
source_datasets:
- original
tags:
- texture-synthesis
- photography
- non-infringing
task_categories:
- unconditional-image-generation
task_ids: []
viewer: false
---
_The Dataset Teaser is now enabled instead! Isn't this better?_

# TD 01: Natural Ground Textures
This dataset contains multi-photo texture captures in outdoor nature scenes — all focusing on the ground. Each set has different photos that showcase texture variety, making them ideal for training a domain-specific image generator!
Overall information about this dataset:
* **Format** — JPEG-XL, lossless RGB
* **Resolution** — 4032 × 2268
* **Device** — mobile camera
* **Technique** — hand-held
* **Orientation** — portrait or landscape
* **Author**: Alex J. Champandard
* **Configurations**: 4K, 2K (default), 1K
To load the medium- and high-resolution images of the dataset, you'll need to install `jxlpy` from [PyPI](https://pypi.org/project/jxlpy/) with `pip install jxlpy`:
```python
# Recommended use, JXL at high-quality.
from jxlpy import JXLImagePlugin
from datasets import load_dataset
d = load_dataset('texturedesign/td01_natural-ground-textures', 'JXL@4K', num_proc=4)
print(len(d['train']), len(d['test']))
```
The lowest-resolution images are available as PNG with a regular installation of `pillow`:
```python
# Alternative use, PNG at low-quality.
from datasets import load_dataset
dataset = load_dataset('texturedesign/td01_natural-ground-textures', 'PNG@1K', num_proc=4)
# EXAMPLE: Discard all other sets except Set #1.
dataset = dataset.filter(lambda s: s['set'] == 1)
# EXAMPLE: Only keep images with index 0 and 2.
dataset = dataset.select([0, 2])
```
Use the built-in `filter()` and `select()` methods to narrow down the loaded dataset for training, or to ease development.
## Set #1: Rock and Gravel

* **Description**:
- surface rocks with gravel and coarse sand
- strong sunlight from the left, sharp shadows
* **Number of Photos**:
- 7 train
- 2 test
* **Edits**:
- rotated photos to align sunlight
- removed infrequent objects
* **Size**: 77.8 Mb
## Set #2: Dry Grass with Pine Needles

* **Description**:
- field of dry grass and pine needles
- sunlight from the top right, some shadows
* **Number of Photos**:
- 6 train
- 1 test
* **Edits**:
- removed dry leaves and large plants
- removed sticks, rocks and sporadic daisies
* **Size**: 95.2 Mb
## Set #3: Chipped Stones, Broken Leaves and Twiglets

* **Description**:
- autumn path with chipped stones and dry broken leaves
- diffuse light on a cloudy day, very soft shadows
* **Number of Photos**:
- 9 train
- 3 test
* **Edits**:
- removed anything that looks green, fresh leaves
- removed long sticks and large/odd stones
* **Size**: 126.9 Mb
## Set #4: Grass Clumps and Cracked Dirt

* **Description**:
- clumps of green grass, clover and patches of cracked dirt
- diffuse light on cloudy day, shadows under large blades of grass
* **Number of Photos**:
- 9 train
- 2 test
* **Edits**:
- removed dry leaves, sporadic dandelions, and large objects
- histogram matching for two of the photos so the colors look similar
* **Size**: 126.8 Mb
## Set #5: Dirt, Stones, Rock, Twigs...

* **Description**:
- intricate micro-scene with grey dirt, surface rock, stones, twigs and organic debris
- diffuse light on cloudy day, soft shadows around the larger objects
* **Number of Photos**:
- 9 train
- 3 test
* **Edits**:
- removed odd objects that felt out-of-distribution
* **Size**: 102.1 Mb
## Set #6: Plants with Flowers on Dry Leaves

* **Description**:
- leafy plants with white flowers on a bed of dry brown leaves
- soft diffuse light, shaded areas under the plants
* **Number of Photos**:
- 9 train
- 2 test
* **Edits**:
- none yet, inpainting doesn't work well enough
- would remove long sticks and pieces of wood
* **Size**: 105.1 Mb
## Set #7: Frozen Footpath with Snow

* **Description**:
- frozen ground on a path with footprints
- areas with snow and dark brown ground beneath
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 11 train
- 3 test
* **Size**: 95.5 Mb
## Set #8: Pine Needles Forest Floor

* **Description**:
- forest floor with a mix of brown soil and grass
- variety of dry white leaves, sticks, pinecones, pine needles
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 15 train
- 4 test
* **Size**: 160.6 Mb
## Set #9: Snow on Grass and Dried Leaves

* **Description**:
- field in a park with short green grass
- large dried brown leaves and fallen snow on top
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 8 train
- 3 test
* **Size**: 99.8 Mb
## Set #10: Brown Leaves on Wet Ground

* **Description**:
- fallen brown leaves on wet ground
- occasional tree root and twiglets
- diffuse lighting on a rainy day
* **Number of Photos**:
- 17 train
- 4 test
* **Size**: 186.2 Mb
## Set #11: Wet Sand Path with Debris

* **Description**:
- hard sandy path in the rain
- decomposing leaves and other organic debris
- diffuse lighting on a rainy day
* **Number of Photos**:
- 17 train
- 4 test
* **Size**: 186.2 Mb
## Set #12: Wood Chips & Sawdust Sprinkled on Forest Path

* **Description**:
- wood chips, sawdust, twigs and roots on forest path
- intermittent sunlight with shadows of trees
* **Number of Photos**:
- 8 train
- 2 test
* **Size**: 110.4 Mb
## Set #13: Young Grass Growing in the Dog Park

* **Description**:
- young grass growing in a dog park after overnight rain
- occasional stones, sticks and twigs, pine needles
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 17 train
- 4 test
* **Size**: 193.4 Mb
## Set #14: Wavy Wet Beach Sand

* **Description**:
- wavy wet sand on the beach after the tide retreated
- some dirt and large pieces of algae debris
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 11 train
- 3 test
* **Size**: 86.5 Mb
## Set #15: Dry Dirt Road and Debris from Trees

* **Description**:
- dirt road of dry compacted sand with debris on top
- old pine needles and dry brown leaves
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 8 train
- 2 test
* **Size**: 86.9 Mb
## Set #16: Sandy Beach Path with Grass Clumps

* **Description**:
- path with sand and clumps of grass heading towards the beach
- occasional blueish stones, leafy weeds, and yellow flowers
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 10 train
- 3 test
* **Size**: 118.8 Mb
## Set #17: Pine Needles and Brown Leaves on Park Floor

* **Description**:
- park floor with predominantly pine needles
- brown leaves from nearby trees, green grass underneath
- diffuse lighting on a cloudy day
* **Number of Photos**:
- 8 train
- 2 test
* **Size**: 99.9 Mb
|
cjlovering/natural-questions-short | 2022-12-04T21:15:26.000Z | [
"license:apache-2.0",
"region:us"
] | cjlovering | null | null | null | 1 | 23 | ---
license: apache-2.0
---
|
dvilasuero/banking_app | 2022-12-29T13:25:35.000Z | [
"region:us"
] | dvilasuero | null | null | null | 0 | 23 | Entry not found |
sustcsenlp/bn_emotion_speech_corpus | 2023-01-11T09:00:32.000Z | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:bn",
"license:cc-by-4.0",
"region:us"
] | sustcsenlp | SUST Bangla Emotional Speech Corpus Dataset | @dataset{sadia_sultana_2021_4526477,
author = {Sadia Sultana},
title = {SUST Bangla Emotional Speech Corpus (SUBESCO)},
month = feb,
year = 2021,
note = {{This database was created as a part of PhD thesis
project of the author Sadia Sultana. It was
designed and developed by the author in the
Department of Computer Science and Engineering of
Shahjalal University of Science and Technology.
Financial grant was supported by the university.
If you use the dataset please cite SUBESCO and the
corresponding academic journal publication in Plos
One.}},
publisher = {Zenodo},
version = {version - 1.1},
doi = {10.5281/zenodo.4526477},
url = {https://doi.org/10.5281/zenodo.4526477}
} | null | 4 | 23 | ---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- bn
pretty_name: SUST BANGLA EMOTIONAL SPEECH CORPUS
size_categories:
- 1K<n<10K
---
# SUST BANGLA EMOTIONAL SPEECH CORPUS
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [SUBESCO PAPER](https://doi.org/10.1371/journal.pone.0250173)
- **Leaderboard:**
- **Point of Contact:** [Sadia Sultana](sadia-cse@sust.edu)
### Dataset Summary
SUBESCO is an audio-only emotional speech corpus of 7000 sentence-level utterances of the Bangla language. 20 professional actors (10 males and 10 females) participated in the recordings of 10 sentences for 7 target emotions. The emotions are Anger, Disgust, Fear, Happiness, Neutral, Sadness and Surprise. Total duration of the corpus is 7 hours 40 min 40 sec. Total size of the dataset is 2.03 GB. The dataset was evaluated by 50 raters (25 males, 25 females). Human perception test achieved a raw accuracy of 71%. All the details relating to creation, evaluation and analysis of SUBESCO have been described in the corresponding journal paper which has been published in Plos One.
https://doi.org/10.1371/journal.pone.0250173
### Downloading the data
```
from datasets import load_dataset
train = load_dataset("sustcsenlp/bn_emotion_speech_corpus",split="train")
```
### Naming Convention
Each audio file in the dataset has a unique name made up of eight parts connected by underscores, in this order: gender, speaker's serial number, speaker's name, unit of recording, unit number, emotion name, take number, and the file format.
For example, the filename F_02_MONIKA_S_1_NEUTRAL_5.wav refers to:
| Symbol | Meaning |
| ----------- | ----------- |
| F | Speaker Gender |
| 02 | Speaker Number |
| MONIKA | Speaker Name |
| S_1 | Sentence Number |
| NEUTRAL | Emotion |
| 5 | Take Number |
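A filename can therefore be split back into its parts programmatically. The sketch below is not part of the official release; it simply applies the naming convention described above:
```python
# A minimal sketch (not part of the official release) that parses the
# SUBESCO naming convention into its eight parts.
def parse_subesco_filename(filename: str) -> dict:
    stem, _, ext = filename.rpartition(".")
    gender, number, name, unit, unit_no, emotion, take = stem.split("_")
    return {
        "gender": gender,                 # F or M
        "speaker_number": number,         # e.g. 02
        "speaker_name": name,             # e.g. MONIKA
        "sentence": f"{unit}_{unit_no}",  # e.g. S_1
        "emotion": emotion,               # e.g. NEUTRAL
        "take": int(take),                # e.g. 5
        "format": ext,                    # wav
    }

print(parse_subesco_filename("F_02_MONIKA_S_1_NEUTRAL_5.wav"))
```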
### Languages
This dataset contains Bangla Audio Data.
## Dataset Creation
This database was created as a part of PhD thesis project of the author Sadia Sultana. It was designed and developed by the author in the Department of Computer Science and Engineering of Shahjalal University of Science and Technology. Financial grant was supported by the university. If you use the dataset please cite SUBESCO and the corresponding academic journal publication in Plos One.
### Citation Information
```
@dataset{sadia_sultana_2021_4526477,
author = {Sadia Sultana},
title = {SUST Bangla Emotional Speech Corpus (SUBESCO)},
month = feb,
year = 2021,
note = {{This database was created as a part of PhD thesis
project of the author Sadia Sultana. It was
designed and developed by the author in the
Department of Computer Science and Engineering of
Shahjalal University of Science and Technology.
Financial grant was supported by the university.
If you use the dataset please cite SUBESCO and the
corresponding academic journal publication in Plos
One.}},
publisher = {Zenodo},
version = {version - 1.1},
doi = {10.5281/zenodo.4526477},
url = {https://doi.org/10.5281/zenodo.4526477}
}
```
### Contributors
| Name | University |
| ----------- | ----------- |
| Sadia Sultana | Shahjalal University of Science and Technology |
| Dr. M. Zafar Iqbal | Shahjalal University of Science and Technology |
| Dr. M. Shahidur Rahman | Shahjalal University of Science and Technology |
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed] |
cartesinus/iva_mt_wslot | 2023-07-21T15:40:44.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:pl",
"language:de",
"language:es",
"language:sv",
"language:fr",
"language:pt",
"license:cc-by-4.0",
"machine translation",
"nlu",
"natural-language-understanding",
"virtual assistant",
"region:us"
] | cartesinus | \ | null | null | 0 | 23 | ---
dataset_info:
features:
- name: id
dtype: string
- name: locale
dtype: string
- name: origin
dtype: string
- name: partition
dtype: string
- name: translation_utt
dtype:
translation:
languages:
- en
- pl
- name: translation_xml
dtype:
translation:
languages:
- en
- pl
- name: src_bio
dtype: string
- name: tgt_bio
dtype: string
splits:
- name: train
num_bytes: 6187206
num_examples: 20362
- name: validation
num_bytes: 1115480
num_examples: 3681
- name: test
num_bytes: 1587613
num_examples: 5394
download_size: 3851892
dataset_size: 8890299
task_categories:
- translation
language:
- en
- pl
- de
- es
- sv
- fr
- pt
tags:
- machine translation
- nlu
- natural-language-understanding
- virtual assistant
pretty_name: Machine translation for NLU with slot transfer
size_categories:
- 10K<n<100K
license: cc-by-4.0
---
# Machine translation dataset for NLU (Virtual Assistant) with slot transfer between languages
## Dataset Summary
Disclaimer: This is for research purposes only. Please have a look at the license section below. Some of the datasets used to construct IVA_MT have an unknown license.
IVA_MT is a machine translation dataset that can be used to train, adapt and evaluate MT models used in a Virtual Assistant NLU context (e.g., to translate the training corpus of an NLU system).
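A minimal loading sketch with the Hugging Face `datasets` library is shown below; note that the `en-pl` configuration name is an assumption based on the language pairs listed under Dataset Composition:
```python
from datasets import load_dataset

# A minimal usage sketch; the "en-pl" config name is an assumption based on
# the language pairs listed in this card.
ds = load_dataset("cartesinus/iva_mt_wslot", "en-pl", split="train")
print(ds[0]["translation_utt"])  # e.g. {"en": "...", "pl": "..."}
```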
## Dataset Composition
### en-pl
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 11514 | 2033 | 2974 |
| [Leyzer 0.2.0](https://github.com/cartesinus/leyzer/tree/0.2.0) | 3974 | 701 | 1380 |
| [OpenSubtitles from OPUS](https://opus.nlpl.eu/OpenSubtitles-v1.php) | 2329 | 411 | 500 |
| [KDE from OPUS](https://opus.nlpl.eu/KDE4.php) | 1154 | 241 | 241 |
| [CCMatrix from Opus](https://opus.nlpl.eu/CCMatrix.php) | 1096 | 232 | 237 |
| [Ubuntu from OPUS](https://opus.nlpl.eu/Ubuntu.php) | 281 | 60 | 59 |
| [Gnome from OPUS](https://opus.nlpl.eu/GNOME.php) | 14 | 3 | 3 |
| *total* | 20362 | 3681 | 5394 |
### en-de
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7536 | 1346 | 1955 |
### en-es
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8415 | 1526 | 2202 |
### en-sv
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7540 | 1360 | 1921 |
### en-fr
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 6800 | 1203 | 1757 |
### en-pt
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 7368 | 1296 | 1885 |
### en-hi
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 6702 | 1175 | 1747 |
### en-tr
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8269 | 1474 | 2170 |
### en-ja
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8066 | 1434 | 2085 |
### en-zh
| Corpus | Train | Dev | Test |
|----------------------------------------------------------------------|--------|-------|-------|
| [Massive 1.1](https://huggingface.co/datasets/AmazonScience/massive) | 8433 | 1513 | 2179 |
## Tools
Scripts used to generate this dataset can be found on [github](https://github.com/cartesinus/iva_mt).
## Citation
If you use this dataset, please cite:
```
@article{Sowanski2023SlotLI,
title={Slot Lost in Translation? Not Anymore: A Machine Translation Model for Virtual Assistants with Type-Independent Slot Transfer},
author={Marcin Sowanski and Artur Janicki},
journal={2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP)},
year={2023},
pages={1-5}
}
```
## License
This is a composition of 7 datasets, and the license is as defined in original release:
- MASSIVE: [CC-BY 4.0](https://huggingface.co/datasets/AmazonScience/massive/blob/main/LICENSE)
- Leyzer: [CC BY-NC 4.0](https://github.com/cartesinus/leyzer/blob/master/LICENSE)
- OpenSubtitles: unknown
- KDE: [GNU Public License](https://l10n.kde.org/about.php)
- CCMatrix: no license given, therefore assuming it is LASER project license [BSD](https://github.com/facebookresearch/LASER/blob/main/LICENSE)
- Ubuntu: [GNU Public License](https://help.launchpad.net/Legal)
- Gnome: unknown
|
AyoubChLin/20NewsGroup-AgNews-CnnNews | 2023-04-08T11:33:23.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | AyoubChLin | null | null | null | 0 | 23 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': auto
'1': business
'2': entertainment
'3': health
'4': news
'5': politics
'6': sci/tech
'7': sport
'8': world
splits:
- name: train
num_bytes: 227672680
num_examples: 162076
download_size: 134277697
dataset_size: 227672680
task_categories:
- text-classification
language:
- en
size_categories:
- n<1K
---
|
Olec/cyber-threat-intelligence_v2 | 2023-04-15T11:00:18.000Z | [
"region:us"
] | Olec | null | null | null | 4 | 23 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: entities
list:
- name: end_offset
dtype: int64
- name: id
dtype: int64
- name: label
dtype: string
- name: start_offset
dtype: int64
- name: relations
list:
- name: from_id
dtype: int64
- name: id
dtype: int64
- name: to_id
dtype: int64
- name: type
dtype: string
splits:
- name: test
num_bytes: 29518
num_examples: 72
- name: train
num_bytes: 147723
num_examples: 332
- name: validation
num_bytes: 36580
num_examples: 76
download_size: 119557
dataset_size: 213821
---
# Dataset Card for "cyber-threat-intelligence_v2"
An updated version of mrmoor/cyber-threat-intelligence.
RE and NER dataset for Cyber Threat Intelligence (CTI).
A T5 model trained on NYT and this dataset: Olec/cyber_rebel.
This dataset only contains sentences with relations.
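As an illustration, the sketch below reads the annotated entity spans of one training example; the field names are taken from the schema above:
```python
from datasets import load_dataset

# A minimal sketch: print each annotated entity of the first training example
# using the offset fields from the schema above.
ds = load_dataset("Olec/cyber-threat-intelligence_v2", split="train")
example = ds[0]
for entity in example["entities"]:
    print(entity["label"], "->", example["text"][entity["start_offset"]:entity["end_offset"]])
```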
Full dataset is available at mrmoor/cyber-threat-intelligence. |
mstz/iris | 2023-04-28T13:35:36.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1k",
"language:en",
"license:cc",
"iris",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_iris_53,
author = {Fisher,R. A. & Fisher,R.A.},
title = {{Iris}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C56C76}}
} | null | 1 | 23 | ---
language:
- en
tags:
- iris
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Iris
size_categories:
- n<1k
task_categories:
- tabular-classification
configs:
- iris
- setosa
- versicolor
- virginica
license: cc
---
# Iris
The [Iris dataset](https://archive-beta.ics.uci.edu/dataset/53/iris) from the [UCI repository](https://archive-beta.ics.uci.edu).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------------|
| iris | Multiclass classification | Classify iris type. |
| setosa | Binary classification | Is this a iris-setosa? |
| versicolor | Binary classification | Is this a iris-versicolor? |
| virginica | Binary classification | Is this a iris-virginica? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/iris", "iris")["train"]
``` |
h2oai/openassistant_oasst1_h2ogpt | 2023-04-24T18:07:44.000Z | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | h2oai | null | null | null | 3 | 23 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `openassistant_oasst1_h2ogpt` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `48307`
- Number of columns: `3`
- Column names: `['input', 'prompt_type', 'source']`
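A minimal loading sketch based on the column names above (the `train` split name is an assumption):
```python
from datasets import load_dataset

# A minimal usage sketch; the "train" split name is an assumption.
ds = load_dataset("h2oai/openassistant_oasst1_h2ogpt", split="train")
row = ds[0]
print(row["input"], row["prompt_type"], row["source"], sep="\n")
```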
## Source
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset created by script in h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/83857fcf7d3b712aad5db32207e6db0ab0f780f9/create_data.py#L1252)
|
iamketan25/roleplay-instructions-dataset | 2023-04-24T22:32:40.000Z | [
"region:us"
] | iamketan25 | null | null | null | 10 | 23 | Entry not found |
Harsit/xnli2.0_assamese | 2023-04-26T19:01:07.000Z | [
"region:us"
] | Harsit | null | null | null | 0 | 23 | Entry not found |
liuhaotian/LLaVA-Pretrain | 2023-07-06T08:47:38.000Z | [
"language:en",
"license:other",
"region:us"
] | liuhaotian | null | null | null | 9 | 23 | ---
license: other
language:
- en
pretty_name: LLaVA Pretrain
---
# LLaVA Visual Instruct Pretrain Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct Pretrain LCS-558K is a subset of the LAION/CC/SBU dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic caption](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct CC3M Pretrain 595K was created in May 2023.
**Dataset structure:**
- `blip_laion_cc_sbu_558k.json` contains the multimodal synthesized conversation from the image-caption pairs, by adding randomly selected instructions like: "Describe this image". It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- `blip_laion_cc_sbu_558k_meta.json` contains the meta data of the image file name, image URL, synthetic BLIP caption.
- `images.zip` contains all raw images of the filtered subset from LAION/CC/SBU. Important notice: upon request from the community, as ~15% of the images in the original LAION/CC/SBU dataset are no longer accessible, we upload `images.zip` for better reproduction of our work in the research community. It should not be used for any other purpose. The use of these images must comply with the LAION/CC/SBU license. This may be taken down when requested by the original LAION/CC/SBU dataset owner or owners of the referenced images.
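For a quick look at the data, the sketch below assumes the two JSON files above have been downloaded locally and that each contains a list of records:
```python
import json

# A minimal inspection sketch, assuming the files have been downloaded
# locally and each is a JSON list of records.
with open("blip_laion_cc_sbu_558k.json") as f:
    conversations = json.load(f)
with open("blip_laion_cc_sbu_558k_meta.json") as f:
    metadata = json.load(f)

print(conversations[0])  # synthesized instruction/caption conversation
print(metadata[0])       # image file name, image URL, synthetic BLIP caption
```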
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Must comply with license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE), [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic caption).
CC-3M
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
roborovski/diffusiondb-masked-no-descriptors | 2023-05-04T01:58:57.000Z | [
"region:us"
] | roborovski | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: masked
dtype: string
splits:
- name: train
num_bytes: 457934422
num_examples: 1819808
download_size: 170883933
dataset_size: 457934422
---
# Dataset Card for "diffusiondb-masked-no-descriptors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
switmer/MBTI-Sentiment | 2023-05-14T16:27:30.000Z | [
"region:us"
] | switmer | null | null | null | 0 | 23 | Entry not found |
deedax/UTK-Face-Revised | 2023-05-16T02:05:28.000Z | [
"region:us"
] | deedax | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: image
dtype: image
- name: age
dtype: int64
- name: gender
dtype: string
- name: race
dtype: string
- name: age_group
dtype: string
splits:
- name: train
num_bytes: 352669015.125
num_examples: 7623
- name: valid
num_bytes: 39348419.0
num_examples: 846
download_size: 391281119
dataset_size: 392017434.125
---
# Dataset Card for "UTK-Face-Revised"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jlh/home-credit-example-raw | 2023-05-26T02:29:12.000Z | [
"region:us"
] | jlh | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: SK_ID_CURR
dtype: int64
- name: TARGET
dtype: int64
- name: NAME_CONTRACT_TYPE
dtype: string
- name: CODE_GENDER
dtype: string
- name: FLAG_OWN_CAR
dtype: string
- name: FLAG_OWN_REALTY
dtype: string
- name: CNT_CHILDREN
dtype: int64
- name: AMT_INCOME_TOTAL
dtype: float64
- name: AMT_CREDIT
dtype: float64
- name: AMT_ANNUITY
dtype: float64
- name: AMT_GOODS_PRICE
dtype: float64
- name: NAME_TYPE_SUITE
dtype: string
- name: NAME_INCOME_TYPE
dtype: string
- name: NAME_EDUCATION_TYPE
dtype: string
- name: NAME_FAMILY_STATUS
dtype: string
- name: NAME_HOUSING_TYPE
dtype: string
- name: REGION_POPULATION_RELATIVE
dtype: float64
- name: DAYS_BIRTH
dtype: int64
- name: DAYS_EMPLOYED
dtype: int64
- name: DAYS_REGISTRATION
dtype: float64
- name: DAYS_ID_PUBLISH
dtype: int64
- name: OWN_CAR_AGE
dtype: float64
- name: FLAG_MOBIL
dtype: int64
- name: FLAG_EMP_PHONE
dtype: int64
- name: FLAG_WORK_PHONE
dtype: int64
- name: FLAG_CONT_MOBILE
dtype: int64
- name: FLAG_PHONE
dtype: int64
- name: FLAG_EMAIL
dtype: int64
- name: OCCUPATION_TYPE
dtype: string
- name: CNT_FAM_MEMBERS
dtype: float64
- name: REGION_RATING_CLIENT
dtype: int64
- name: REGION_RATING_CLIENT_W_CITY
dtype: int64
- name: WEEKDAY_APPR_PROCESS_START
dtype: string
- name: HOUR_APPR_PROCESS_START
dtype: int64
- name: REG_REGION_NOT_LIVE_REGION
dtype: int64
- name: REG_REGION_NOT_WORK_REGION
dtype: int64
- name: LIVE_REGION_NOT_WORK_REGION
dtype: int64
- name: REG_CITY_NOT_LIVE_CITY
dtype: int64
- name: REG_CITY_NOT_WORK_CITY
dtype: int64
- name: LIVE_CITY_NOT_WORK_CITY
dtype: int64
- name: ORGANIZATION_TYPE
dtype: string
- name: EXT_SOURCE_1
dtype: float64
- name: EXT_SOURCE_2
dtype: float64
- name: EXT_SOURCE_3
dtype: float64
- name: APARTMENTS_AVG
dtype: float64
- name: BASEMENTAREA_AVG
dtype: float64
- name: YEARS_BEGINEXPLUATATION_AVG
dtype: float64
- name: YEARS_BUILD_AVG
dtype: float64
- name: COMMONAREA_AVG
dtype: float64
- name: ELEVATORS_AVG
dtype: float64
- name: ENTRANCES_AVG
dtype: float64
- name: FLOORSMAX_AVG
dtype: float64
- name: FLOORSMIN_AVG
dtype: float64
- name: LANDAREA_AVG
dtype: float64
- name: LIVINGAPARTMENTS_AVG
dtype: float64
- name: LIVINGAREA_AVG
dtype: float64
- name: NONLIVINGAPARTMENTS_AVG
dtype: float64
- name: NONLIVINGAREA_AVG
dtype: float64
- name: APARTMENTS_MODE
dtype: float64
- name: BASEMENTAREA_MODE
dtype: float64
- name: YEARS_BEGINEXPLUATATION_MODE
dtype: float64
- name: YEARS_BUILD_MODE
dtype: float64
- name: COMMONAREA_MODE
dtype: float64
- name: ELEVATORS_MODE
dtype: float64
- name: ENTRANCES_MODE
dtype: float64
- name: FLOORSMAX_MODE
dtype: float64
- name: FLOORSMIN_MODE
dtype: float64
- name: LANDAREA_MODE
dtype: float64
- name: LIVINGAPARTMENTS_MODE
dtype: float64
- name: LIVINGAREA_MODE
dtype: float64
- name: NONLIVINGAPARTMENTS_MODE
dtype: float64
- name: NONLIVINGAREA_MODE
dtype: float64
- name: APARTMENTS_MEDI
dtype: float64
- name: BASEMENTAREA_MEDI
dtype: float64
- name: YEARS_BEGINEXPLUATATION_MEDI
dtype: float64
- name: YEARS_BUILD_MEDI
dtype: float64
- name: COMMONAREA_MEDI
dtype: float64
- name: ELEVATORS_MEDI
dtype: float64
- name: ENTRANCES_MEDI
dtype: float64
- name: FLOORSMAX_MEDI
dtype: float64
- name: FLOORSMIN_MEDI
dtype: float64
- name: LANDAREA_MEDI
dtype: float64
- name: LIVINGAPARTMENTS_MEDI
dtype: float64
- name: LIVINGAREA_MEDI
dtype: float64
- name: NONLIVINGAPARTMENTS_MEDI
dtype: float64
- name: NONLIVINGAREA_MEDI
dtype: float64
- name: FONDKAPREMONT_MODE
dtype: string
- name: HOUSETYPE_MODE
dtype: string
- name: TOTALAREA_MODE
dtype: float64
- name: WALLSMATERIAL_MODE
dtype: string
- name: EMERGENCYSTATE_MODE
dtype: string
- name: OBS_30_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DEF_30_CNT_SOCIAL_CIRCLE
dtype: float64
- name: OBS_60_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DEF_60_CNT_SOCIAL_CIRCLE
dtype: float64
- name: DAYS_LAST_PHONE_CHANGE
dtype: float64
- name: FLAG_DOCUMENT_2
dtype: int64
- name: FLAG_DOCUMENT_3
dtype: int64
- name: FLAG_DOCUMENT_4
dtype: int64
- name: FLAG_DOCUMENT_5
dtype: int64
- name: FLAG_DOCUMENT_6
dtype: int64
- name: FLAG_DOCUMENT_7
dtype: int64
- name: FLAG_DOCUMENT_8
dtype: int64
- name: FLAG_DOCUMENT_9
dtype: int64
- name: FLAG_DOCUMENT_10
dtype: int64
- name: FLAG_DOCUMENT_11
dtype: int64
- name: FLAG_DOCUMENT_12
dtype: int64
- name: FLAG_DOCUMENT_13
dtype: int64
- name: FLAG_DOCUMENT_14
dtype: int64
- name: FLAG_DOCUMENT_15
dtype: int64
- name: FLAG_DOCUMENT_16
dtype: int64
- name: FLAG_DOCUMENT_17
dtype: int64
- name: FLAG_DOCUMENT_18
dtype: int64
- name: FLAG_DOCUMENT_19
dtype: int64
- name: FLAG_DOCUMENT_20
dtype: int64
- name: FLAG_DOCUMENT_21
dtype: int64
- name: AMT_REQ_CREDIT_BUREAU_HOUR
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_DAY
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_WEEK
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_MON
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_QRT
dtype: float64
- name: AMT_REQ_CREDIT_BUREAU_YEAR
dtype: float64
splits:
- name: raw
num_bytes: 10681044
num_examples: 10000
download_size: 1985577
dataset_size: 10681044
---
# Dataset Card for "home-credit-example-raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gorilla-llm/APIBench | 2023-05-29T06:31:49.000Z | [
"language:en",
"license:apache-2.0",
"api",
"arxiv:2305.15334",
"region:us"
] | gorilla-llm | null | null | null | 31 | 23 | ---
license: apache-2.0
language:
- en
tags:
- api
---
# Gorilla: Large Language Model Connected with Massive APIs
By Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez ([Project Website](https://shishirpatil.github.io/gorilla/))
[](https://arxiv.org/abs/2305.15334) [](https://discord.gg/3apqwwME) [](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing)
`Gorilla` enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to be trained on! Join us, as we try to expand the largest API store and teach LLMs how to write them! Hop on our Discord, or open a PR, or email us if you would like to have your API incorporated as well.
### Dataset Date
05/28/2023
### Organization
Gorilla LLM (UC Berkeley)
---
license: apache-2.0
--- |
tchebonenko/MedicalTranscriptions | 2023-05-29T19:39:18.000Z | [
"region:us"
] | tchebonenko | null | null | null | 4 | 23 | # Medical Transcriptions
Medical transcription data scraped from mtsamples.com
### Content
This dataset contains sample medical transcriptions for various medical specialties.
<br>
More information can be found [here](https://www.kaggle.com/datasets/tboyle10/medicaltranscriptions?resource=download)
Due to data availability, only transcripts for the following medical specialties were selected for model training:
- Surgery
- Cardiovascular / Pulmonary
- Orthopedic
- Radiology
- General Medicine
- Gastroenterology
- Neurology
- Obstetrics / Gynecology
- Urology
---
**task_categories:**
- text-classification
- feature-extraction
**language:** en <br>
**tags:** medical <br>
**size_categories:** 1K<n<10K |
lampent/IRFL | 2023-06-02T15:02:05.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"figurative-language",
"multimodal-figurative-language",
" commonsense-reasoning",
"visual-reasoning",
"arxiv:2303.15445",
"region:us"
] | lampent | null | null | null | 1 | 23 | ---
license: cc-by-4.0
language:
- en
tags:
- figurative-language
- multimodal-figurative-language
- ' commonsense-reasoning'
- visual-reasoning
size_categories:
- 1K<n<10K
---
# Dataset Card for IRFL
- [Dataset Description](#dataset-description)
- [Leaderboards](#leaderboards)
- [Colab notebook code for IRFL evaluation](#colab-notebook-code-for-irfl-evaluation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
The IRFL dataset consists of idioms, similes, and metaphors with matching figurative and literal images, as well as two novel tasks of multimodal figurative understanding and preference.
We collected figurative and literal images for textual idioms, metaphors, and similes using an automatic pipeline we created (idioms) and manually (metaphors + similes). We annotated the relations between these images and the figurative phrase they originated from. Using these images we created two novel tasks of figurative understanding and preference.
The figurative understanding task evaluates Vision and Language Pre-Trained Models’ (VL-PTMs) ability to understand the relation between an image and a figurative phrase. The task is to choose the image that best visualizes the figurative phrase out of X candidates. The preference task examines VL-PTMs' preference for figurative images. In this task, the model needs to classify phrase images of different categories correctly based on their ranking by the model matching score.
We evaluated state-of-the-art VL models and found that the best models achieved 22%, 30%, and 66% accuracy vs. humans 97%, 99.7%, and 100% on our understanding task for idioms, metaphors, and similes respectively. The best model achieved an F1 score of 61 on the preference task.
- **Homepage:**
https://irfl-dataset.github.io/
- **Repository:**
https://github.com/irfl-dataset/IRFL
- **Paper:**
https://arxiv.org/abs/2303.15445
- **Leaderboard:**
https://irfl-dataset.github.io/leaderboard
- **Point of Contact:**
irfl.dataset@gmail.com; ron.yosef@mail.huji.ac.il
### Leaderboards
https://irfl-dataset.github.io/leaderboard
### Colab notebook code for IRFL evaluation
https://colab.research.google.com/drive/1zbW7R8Cn9sXICV3x_FGKjKIKu8GCrCme?usp=sharing
### Languages
English.
## Dataset Structure
### Data Fields
★ - refers to idiom-only fields
Understanding task
- query (★): the idiom definition the answer image originated from.
- distractors: the distractor images
- answer: the correct image
- figurative_type: idiom | metaphor | simile
- images_metadata: the metadata of the distractor and answer images.
- type: the correct image type (Figurative or Figurative Literal).
- definition (★): list of all the definitions of the idiom
- phrase: the figurative phrase.
Preference task
- type: the rival categories FvsPO (Figurative images vs. Partial Objects) or FLvsPO (Figurative Literal images vs. Partial Objects)
- figurative_type: idiom | metaphor | simile
- first_category: the first category images (Figurative images if FvsPO, Figurative Literal images if FLvsPO)
- second_category: the second category images (Partial Objects)
- definition (★): list of all the definitions of the idiom
- phrase: the figurative phrase.
The idiom, metaphor, and simile datasets contain all the figurative phrases, annotated images, and corresponding metadata. <br/>
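A minimal loading sketch is shown below; the `idioms` configuration name is an assumption based on the description above, and the linked Colab notebook contains the maintained evaluation code:
```python
from datasets import load_dataset

# A minimal loading sketch; the "idioms" configuration name is an assumption
# based on the card's description -- see the linked Colab notebook for the
# maintained evaluation code.
ds = load_dataset("lampent/IRFL", "idioms")
print(ds)
```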
## Dataset Collection
We collected figurative and literal images for textual idioms, metaphors, and similes using an automatic pipeline we created (idioms) and manually (metaphors + similes). We annotated the relations between these images and the figurative phrase they originated from.
#### Annotation process
We paid Amazon Mechanical Turk Workers to annotate the relations between each image and phrase (Figurative vs. Literal).
## Considerations for Using the Data
5 annotators annotated all of the data releases.
### Licensing Information
CC-By 4.0
### Citation Information
@misc{yosef2023irfl,
title={IRFL: Image Recognition of Figurative Language},
author={Ron Yosef and Yonatan Bitton and Dafna Shahaf},
year={2023},
eprint={2303.15445},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
emad12/stock_tweets_sentiment | 2023-06-04T09:48:20.000Z | [
"region:us"
] | emad12 | null | null | null | 3 | 23 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: post_date
dtype: string
- name: tweet
dtype: string
- name: sentiment
dtype: int64
- name: ticker_symbol
dtype: string
- name: tweet_cleaned
dtype: string
- name: __index_level_0__
dtype: int64
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 321710487
num_examples: 96000
- name: test
num_bytes: 80421371
num_examples: 24000
download_size: 32053237
dataset_size: 402131858
---
# Dataset Card for "stock_tweets_sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shibing624/nli-zh-all | 2023-06-22T06:39:46.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:shibing624",
"language_creators:shibing624",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:https://github... | shibing624 | The SNLI corpus (version 1.0) is a merged chinese sentence similarity dataset, supporting the task of natural language
inference (NLI), also known as recognizing textual entailment (RTE). | https://github.com/shibing624/text2vec | null | 16 | 23 | ---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- https://github.com/shibing624/text2vec
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
paperswithcode_id: nli
pretty_name: Chinese Natural Language Inference
---
# Dataset Card for nli-zh-all
## Dataset Description
- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Dataset:** [zh NLI](https://huggingface.co/datasets/shibing624/nli-zh-all)
- **Size of downloaded dataset files:** 4.7 GB
- **Total amount of disk used:** 4.7 GB
### Dataset Summary
A merged collection of Chinese natural language inference (NLI) data (nli-zh-all):
8.2 million high-quality samples integrated from text inference, similarity, summarization, question answering, and instruction-tuning tasks, converted into a sentence-pair matching format.
### Supported Tasks and Leaderboards
Supported tasks: Chinese text matching, text similarity computation, and related tasks.
Results for Chinese matching tasks rarely appear in top-conference papers, so I list a result from a model I trained myself:
**Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec)
### Languages
The dataset consists entirely of Simplified Chinese text.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{"text1":"借款后多长时间给打电话","text2":"借款后多久打电话啊","label":1}
{"text1":"没看到微粒贷","text2":"我借那么久也没有提升啊","label":0}
```
- `label` takes 2 values: 1 means similar, 0 means dissimilar.
### Data Fields
The data fields are the same among all splits.
- `text1`: a `string` feature.
- `text2`: a `string` feature.
- `label`: a classification label, with possible values including entailment(1), contradiction(0).
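A minimal usage sketch for the fields above:
```python
from datasets import load_dataset

# A minimal usage sketch for the fields described above.
ds = load_dataset("shibing624/nli-zh-all", split="train")
print(ds[0])  # {"text1": ..., "text2": ..., "label": 0 or 1}
```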
### Data Splits
after remove None and len(text) < 1 data:
```shell
$ wc -l nli-zh-all/*
48818 nli-zh-all/alpaca_gpt4-train.jsonl
5000 nli-zh-all/amazon_reviews-train.jsonl
519255 nli-zh-all/belle-train.jsonl
16000 nli-zh-all/cblue_chip_sts-train.jsonl
549326 nli-zh-all/chatmed_consult-train.jsonl
10142 nli-zh-all/cmrc2018-train.jsonl
395927 nli-zh-all/csl-train.jsonl
50000 nli-zh-all/dureader_robust-train.jsonl
709761 nli-zh-all/firefly-train.jsonl
9568 nli-zh-all/mlqa-train.jsonl
455875 nli-zh-all/nli_zh-train.jsonl
50486 nli-zh-all/ocnli-train.jsonl
2678694 nli-zh-all/simclue-train.jsonl
419402 nli-zh-all/snli_zh-train.jsonl
3024 nli-zh-all/webqa-train.jsonl
1213780 nli-zh-all/wiki_atomic_edits-train.jsonl
93404 nli-zh-all/xlsum-train.jsonl
1006218 nli-zh-all/zhihu_kol-train.jsonl
8234680 total
```
### Data Length

count text length script: https://github.com/shibing624/text2vec/blob/master/examples/data/count_text_length.py
## Dataset Creation
### Curation Rationale
Inspired by [m3e-base](https://huggingface.co/moka-ai/m3e-base#M3E%E6%95%B0%E6%8D%AE%E9%9B%86), this dataset merges high-quality Chinese NLI (natural language inference) datasets;
the result is uploaded here to Hugging Face datasets for easy use.
### Source Data
#### Initial Data Collection and Normalization
If you want to see how the dataset was built, the script that generates the nli-zh-all dataset can be found at [https://github.com/shibing624/text2vec/blob/master/examples/data/build_zh_nli_dataset.py](https://github.com/shibing624/text2vec/blob/master/examples/data/build_zh_nli_dataset.py); all data are uploaded to Hugging Face datasets.
| Dataset Name | Domain | Size | Task Type | Prompt | Quality | Data Provider | Notes | Open Source / Research Use | Commercial Use | Script | Done | URL | Homogeneous |
|:---------------------| :---- |:-----------|:---------------- |:------ |:----|:-----------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------- |:------|:---- |:---- |:---------------------------------------------------------------------------------------------|:------|
| cmrc2018 | 百科 | 14,363 | 问答 | 问答 | 优 | Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu | https://github.com/ymcui/cmrc2018/blob/master/README_CN.md 专家标注的基于维基百科的中文阅读理解数据集,将问题和上下文视为正例 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/cmrc2018 | 否 |
| belle_0.5m | 百科 | 500,000 | 指令微调 | 无 | 优 | LianjiaTech/BELLE | belle 的指令微调数据集,使用 self instruct 方法基于 gpt3.5 生成 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/BelleGroup/ | 否 |
| firefily | 百科 | 1,649,399 | 指令微调 | 无 | 优 | YeungNLP | Firefly(流萤) 是一个开源的中文对话式大语言模型,使用指令微调(Instruction Tuning)在中文数据集上进行调优。使用了词表裁剪、ZeRO等技术,有效降低显存消耗和提高训练效率。 在训练中,我们使用了更小的模型参数量,以及更少的计算资源。 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M | 否 |
| alpaca_gpt4 | 百科 | 48,818 | 指令微调 | 无 | 优 | Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao | 本数据集是参考Alpaca方法基于GPT4得到的self-instruct数据,约5万条。 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/alpaca-zh | 否 |
| zhihu_kol | 百科 | 1,006,218 | 问答 | 问答 | 优 | wangrui6 | 知乎问答 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wangrui6/Zhihu-KOL | 否 |
| amazon_reviews_multi | 电商 | 210,000 | 问答 文本分类 | 摘要 | 优 | 亚马逊 | 亚马逊产品评论数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/amazon_reviews_multi/viewer/zh/train?row=8 | 否 |
| mlqa | 百科 | 85,853 | 问答 | 问答 | 良 | patrickvonplaten | 一个用于评估跨语言问答性能的基准数据集 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/mlqa/viewer/mlqa-translate-train.zh/train?p=2 | 否 |
| xlsum | 新闻 | 93,404 | 摘要 | 摘要 | 良 | BUET CSE NLP Group | BBC的专业注释文章摘要对 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/chinese_simplified/train?row=259 | 否 |
| ocnli | 口语 | 17,726 | 自然语言推理 | 推理 | 良 | Thomas Wolf | 自然语言推理数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/clue/viewer/ocnli | 是 |
| BQ | 金融 | 60,000 | 文本分类 | 相似 | 优 | Intelligent Computing Research Center, Harbin Institute of Technology(Shenzhen) | http://icrc.hitsz.edu.cn/info/1037/1162.htm BQ 语料库包含来自网上银行自定义服务日志的 120,000 个问题对。它分为三部分:100,000 对用于训练,10,000 对用于验证,10,000 对用于测试。 数据提供者: 哈尔滨工业大学(深圳)智能计算研究中心 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/BQ | 是 |
| lcqmc | 口语 | 149,226 | 文本分类 | 相似 | 优 | Ming Xu | 哈工大文本匹配数据集,LCQMC 是哈尔滨工业大学在自然语言处理国际顶会 COLING2018 构建的问题语义匹配数据集,其目标是判断两个问题的语义是否相同 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/LCQMC/train | 是 |
| paws-x | 百科 | 23,576 | 文本分类 | 相似 | 优 | Bhavitvya Malik | PAWS Wiki中的示例 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/paws-x/viewer/zh/train | 是 |
| wiki_atomic_edit | 百科 | 1,213,780 | 平行语义 | 相似 | 优 | abhishek thakur | 基于中文维基百科的编辑记录收集的数据集 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wiki_atomic_edits | 是 |
| chatmed_consult | 医药 | 549,326 | 问答 | 问答 | 优 | Wei Zhu | 真实世界的医学相关的问题,使用 gpt3.5 进行回答 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset | 否 |
| webqa | 百科 | 42,216 | 问答 | 问答 | 优 | suolyer | 百度于2016年开源的数据集,数据来自于百度知道;格式为一个问题多篇意思基本一致的文章,分为人为标注以及浏览器检索;数据整体质量中,因为混合了很多检索而来的文章 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/suolyer/webqa/viewer/suolyer--webqa/train?p=3 | 否 |
| dureader_robust | 百科 | 65,937 | 机器阅读理解 问答 | 问答 | 优 | 百度 | DuReader robust旨在利用真实应用中的数据样本来衡量阅读理解模型的鲁棒性,评测模型的过敏感性、过稳定性以及泛化能力,是首个中文阅读理解鲁棒性数据集。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/PaddlePaddle/dureader_robust/viewer/plain_text/train?row=96 | 否 |
| csl | 学术 | 395,927 | 语料 | 摘要 | 优 | Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Weijie Liu, Weiquan Mao and Hui Zhang | 提供首个中文科学文献数据集(CSL),包含 396,209 篇中文核心期刊论文元信息 (标题、摘要、关键词、学科、门类)。CSL 数据集可以作为预训练语料,也可以构建许多NLP任务,例如文本摘要(标题预测)、 关键词生成和文本分类等。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/neuclir/csl | 否 |
| snli-zh | 口语 | 419,402 | 文本分类 | 推理 | 优 | liuhuanyong | 中文SNLI数据集,翻译自英文SNLI | 是 | 否 | 是 | 是 | https://github.com/liuhuanyong/ChineseTextualInference/ | 是 |
| SimCLUE | 百科 | 2,678,694 | 平行语义 | 相似 | 优 | 数据集合,请在 simCLUE 中查看 | 整合了中文领域绝大多数可用的开源的语义相似度和自然语言推理的数据集,并重新做了数据拆分和整理。 | 是 | 否 | 否 | 是 | https://github.com/CLUEbenchmark/SimCLUE | 是 |
#### Who are the source language producers?
The copyright of each dataset belongs to its original authors; please respect the original datasets' copyrights when using them.
SNLI:
@inproceedings{snli:emnlp2015,
Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher, and Manning, Christopher D.},
Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
Publisher = {Association for Computational Linguistics},
Title = {A large annotated corpus for learning natural language inference},
Year = {2015}
}
#### Who are the annotators?
The original authors.
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context.
Systems that are successful at such a task may be more successful in modeling semantic representations.
### Licensing Information
For research use only.
### Contributions
[shibing624](https://github.com/shibing624) add this dataset.
|
Abdelkareem/arabic-article-summarization | 2023-06-18T13:51:05.000Z | [
"license:apache-2.0",
"region:us"
] | Abdelkareem | null | null | null | 0 | 23 | ---
license: apache-2.0
---
|
FreedomIntelligence/alpaca-gpt4-japanese | 2023-08-06T08:10:29.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | null | 2 | 23 | ---
license: apache-2.0
---
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
renumics/f1_demo_dataset | 2023-07-19T10:05:28.000Z | [
"region:us"
] | renumics | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: Time
dtype: duration[ns]
- name: Driver
dtype: string
- name: DriverNumber
dtype: string
- name: LapTime
dtype: float64
- name: LapNumber
dtype: float64
- name: Stint
dtype: float64
- name: PitOutTime
dtype: duration[ns]
- name: PitInTime
dtype: duration[ns]
- name: Sector1Time
dtype: float64
- name: Sector2Time
dtype: float64
- name: Sector3Time
dtype: float64
- name: Sector1SessionTime
dtype: duration[ns]
- name: Sector2SessionTime
dtype: duration[ns]
- name: Sector3SessionTime
dtype: duration[ns]
- name: SpeedI1
dtype: float64
- name: SpeedI2
dtype: float64
- name: SpeedFL
dtype: float64
- name: SpeedST
dtype: float64
- name: IsPersonalBest
dtype: bool
- name: Compound
dtype: string
- name: TyreLife
dtype: float64
- name: FreshTyre
dtype: bool
- name: Team
dtype: string
- name: LapStartTime
dtype: duration[ns]
- name: LapStartDate
dtype: timestamp[ns]
- name: TrackStatus
dtype: string
- name: Position
dtype: float64
- name: Deleted
dtype: bool
- name: DeletedReason
dtype: string
- name: FastF1Generated
dtype: bool
- name: IsAccurate
dtype: bool
- name: speed
sequence:
sequence: float64
- name: throttle
sequence:
sequence: float64
- name: drs
sequence:
sequence: float64
- name: nGear
sequence:
sequence: float64
- name: brake
sequence:
sequence: float64
- name: x
sequence:
sequence: float64
- name: y
sequence:
sequence: float64
- name: z
sequence:
sequence: float64
- name: distance_driver
sequence:
sequence: float64
- name: speed_emb
sequence: float64
- name: brake_emb
sequence: float64
- name: throttle_emb
sequence: float64
- name: x_emb
dtype: float64
- name: y_emb
dtype: float64
- name: z_emb
dtype: float64
- name: gear_vis
dtype: string
- name: speed_vis
dtype: string
- name: portrait
dtype: string
- name: brake_emb_reduced
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 22426400
num_examples: 201
download_size: 15371945
dataset_size: 22426400
---
# Dataset Card for "f1_demo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Andyrasika/image_captioning | 2023-07-12T05:08:26.000Z | [
"region:us"
] | Andyrasika | null | null | null | 1 | 23 | Entry not found |
lavita/medical-qa-shared-task-v1-all | 2023-07-20T00:31:23.000Z | [
"region:us"
] | lavita | null | null | null | 1 | 23 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: ending0
dtype: string
- name: ending1
dtype: string
- name: ending2
dtype: string
- name: ending3
dtype: string
- name: ending4
dtype: string
- name: label
dtype: int64
- name: sent1
dtype: string
- name: sent2
dtype: string
- name: startphrase
dtype: string
splits:
- name: train
num_bytes: 16691926
num_examples: 10178
- name: dev
num_bytes: 2086503
num_examples: 1272
download_size: 10556685
dataset_size: 18778429
---
# Dataset Card for "medical-qa-shared-task-v1-all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pykeio/oshichats-v1-2308 | 2023-09-06T23:07:19.000Z | [
"task_categories:text-classification",
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:token-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-nc-sa-4.0",
"livestream",
... | pykeio | null | null | null | 2 | 23 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- conversational
- text-generation
- token-classification
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
tags:
- livestream
- stream
- chat
- messages
- vtuber
- vtubers
pretty_name: OSHIChats v1
size_categories:
- 1M<n<10M
---
## OSHIChats v1 (August 2023)
OSHIChats v1 is a dataset of 8.06 million high-quality filtered English chat messages collected from various [VTuber](https://en.wikipedia.org/wiki/VTuber) live streams.
Compared to our previous dataset, [pykeio/vtuber-chats-2023-filtered-en-8.7M](https://huggingface.co/datasets/pykeio/vtuber-chats-2023-filtered-en-8.7M), we make the following improvements:
- Include stream topic information
- Far more accurate nickname detection using NLP
- Previously we did not match names like "dad" (nickname for Mori Calliope) or "mom" (nickname for Nina Kosaka) because they were too general. Now, we analyze the context and other information about the stream to determine whether to match such nicknames.
- Detect and normalize fan names like takodachi or pentomo
## Usage
Once you gain access to the dataset, you'll also need to log in to Hugging Face CLI with `huggingface-cli login`.
```py
from datasets import load_dataset
chats_dataset = load_dataset('pykeio/oshichats-v1-2308', split='train', revision='refs/convert/parquet')
chats_dataset[0]
# {'liver': 'FgXWZOUZA2oYHNr6qDmsTQ', 'stream': {'id': 'JHBv4BA_Y84', 'topic': 'Twisted_Wonderland'}, 'is_super': False, 'message': "i think i've grown to dislike them ", 'author': 'chxrry_head', 'time': [1660106235135797, 2126652]}
```
## Samples
```json
{
"liver": "kieJGn3pgJikVW8gmMXE2w",
"stream": {
"id": "dMUhbAcI5gk",
"topic": "minecraft"
},
"is_super": false,
"message": "yay <|liver:bW9t|> is streaming while I'm awake!",
"author": "Redribbon Vicky",
"time": [1651976493761550, 44936]
}
{
"liver": "yl1z3jo3XHR1riLFKG5UAg",
"stream": {
"id": "TgEX7HFqTYc",
"topic": "Donkey_Kong"
},
"is_super": false,
"message": "Stop running <|liver:QW1l|><|:ameHeh:|><|:ameHeh:|><|:ameHeh:|>",
"author": "Anon",
"time": [1616291612238864, 889273]
}
```
## Data fields
- `liver`: ID of the YouTube channel hosting the stream which the chat message came from.
- `stream`: Information about the stream.
- `id`: Video ID of the YouTube stream.
- `topic`: Topic of the stream (or `null` if a topic could not be determined). This can be things like `talk`, `Minecraft`, `Singing`, `GTA`, `Asmr`, etc.
- `is_super`: Whether or not the message is a Superchat (donation).
- `message`: Contents of the message. For consistency and ease of use on downstream tasks, we replace certain words with easily matchable special tokens:
* `<|liver:{b64}|>`: The substring refers to the host of the stream.
* `<|liver-fans:{b64}|>`: The substring refers to a nickname given to the fanbase of the host of the stream, e.g. aloupeeps or takodachis.
* `<|known-collaborator:{channelID}:{b64}|>`: The substring refers to a fellow VTuber that is present in the stream.
* `<|maybe-collaborator:{channelID}:{b64}|>`: The substring refers to a fellow VTuber that may or may not be part of the stream.
* `<|collaborator-fans:{channelID}:{b64}|>`: The substring refers to the fanbase of a collaborator present in the stream.
* `<|:{emote}:|>`: Represents a channel emote.
* Note that `channelID` is a YouTube channel ID, and `b64` is the original substring encoded as base64.
- `author`: The username of the author.
- `time`: A tuple containing the Unix timestamp of when the message was sent, and the relative time since the start of the stream.
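Because the replaced substrings are base64-encoded inside the special tokens described above, the original message text can be restored. The sketch below is not part of any official tooling; it assumes well-formed tokens and skips channel emotes (`<|:emote:|>`):
```python
import base64
import re

# A minimal sketch (not part of any official tooling) that restores the
# original substrings hidden behind the base64-encoded special tokens.
SPECIAL_TOKEN = re.compile(
    r"<\|((?:liver|liver-fans|known-collaborator|maybe-collaborator|"
    r"collaborator-fans)[^|>]*)\|>"
)

def restore_message(message: str) -> str:
    def decode(match: re.Match) -> str:
        payload = match.group(1).split(":")[-1]  # last segment is the base64 part
        return base64.b64decode(payload).decode("utf-8")
    return SPECIAL_TOKEN.sub(decode, message)

print(restore_message("yay <|liver:bW9t|> is streaming while I'm awake!"))
# -> yay mom is streaming while I'm awake!
```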
## License
Licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/); you must give attribution, you may not use the dataset for commercial purposes, and you must distribute any transformations or copies of the dataset under the same license. [Contact us](mailto:contact@pyke.io) for alternative/commercial licensing. |
adityarra07/ATC_2 | 2023-08-06T05:38:14.000Z | [
"region:us"
] | adityarra07 | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: test
num_bytes: 113797125.0
num_examples: 871
download_size: 113447323
dataset_size: 113797125.0
---
# Dataset Card for "ATC_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
glaiveai/glaive-function-calling | 2023-09-27T18:04:36.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | glaiveai | null | null | null | 25 | 23 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
This dataset consists of 52k samples generated through [Glaive](https://glaive.ai) for the task of function calling, in the following format-
```
SYSTEM: You are an helpful assistant who has access to the following functions to help the user, you can use the functions if needed-
{
JSON function definition
}
USER: user message
ASSISTANT: assistant message
Function call invocations are formatted as-
ASSISTANT: <functioncall> {json function call}
Response to the function call is formatted as-
FUNCTION RESPONSE: {json function response}
```
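For downstream processing, a conversation string in this format can be split into role-tagged turns. The sketch below uses a stand-in string rather than an actual dataset row:
```python
import re

# A minimal parsing sketch for the format above; `sample_text` is a stand-in
# string, not an actual dataset row.
sample_text = """SYSTEM: You are an helpful assistant with access to functions.
USER: What is the weather like in Paris?
ASSISTANT: <functioncall> {"name": "get_weather", "arguments": {"city": "Paris"}}
FUNCTION RESPONSE: {"temperature": "21C"}
ASSISTANT: It is currently 21C in Paris."""

TURN_RE = re.compile(r"^(SYSTEM|USER|ASSISTANT|FUNCTION RESPONSE): ", re.MULTILINE)

def split_turns(text):
    # Split the conversation into (role, content) pairs.
    matches = list(TURN_RE.finditer(text))
    turns = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        turns.append((m.group(1), text[m.end():end].strip()))
    return turns

for role, content in split_turns(sample_text):
    print(f"{role}: {content}")
```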
There are also samples which do not have any function invocations, multiple invocations and samples with no functions presented and invoked to keep the data balanced. |
amankhandelia/test_namo_dataset | 2023-08-09T12:24:12.000Z | [
"region:us"
] | amankhandelia | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 51912340.0
num_examples: 754
download_size: 51373764
dataset_size: 51912340.0
---
# Dataset Card for "test_namo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PL-MTEB/psc-pairclassification | 2023-08-11T13:08:44.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | PL-MTEB | null | null | null | 0 | 23 | ---
license: cc-by-sa-3.0
---
|
BuroIdentidadDigital/recibos_izzi | 2023-10-02T21:57:59.000Z | [
"license:c-uda",
"region:us"
] | BuroIdentidadDigital | null | null | null | 1 | 23 | ---
license: c-uda
---
|
pin-lpt/lora_sd_xl_test_230814 | 2023-08-21T14:22:53.000Z | [
"region:us"
] | pin-lpt | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 126043455.0
num_examples: 914
download_size: 125936815
dataset_size: 126043455.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lora_sd_xl_test_230814"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
larryvrh/ShareGPT-Zh_Only | 2023-08-22T08:25:50.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:zh",
"region:us"
] | larryvrh | null | null | null | 2 | 23 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: src
dtype: string
splits:
- name: train
num_bytes: 69835231
num_examples: 8631
download_size: 32862465
dataset_size: 69835231
task_categories:
- text-generation
- conversational
language:
- zh
size_categories:
- 1K<n<10K
---
# Dataset Card for "sharegpt"
Combined and filtered from [shibing624/sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) and [zetavg/ShareGPT-Processed](https://huggingface.co/datasets/zetavg/ShareGPT-Processed). |
lowem1/cc_news_images | 2023-08-30T03:49:05.000Z | [
"region:us"
] | lowem1 | null | null | null | 0 | 23 | ---
configs:
- config_name: default
data_files:
- split: sample
path: data/sample-*
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: sample
num_bytes: 111345446.0
num_examples: 439
- name: train
num_bytes: 781148720.208
num_examples: 3072
- name: test
num_bytes: 319260197.166
num_examples: 1317
download_size: 1172645418
dataset_size: 1211754363.374
---
# Dataset Card for "cc_news_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nikchar/20k_claims_train_final | 2023-09-01T19:52:30.000Z | [
"region:us"
] | nikchar | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: claim
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 30738751.0
num_examples: 19998
download_size: 17098290
dataset_size: 30738751.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "20k_claims_train_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
glacierscopessegmentation/scopes | 2023-09-07T00:46:32.000Z | [
"region:us"
] | glacierscopessegmentation | null | null | null | 0 | 23 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: img_path
dtype: string
- name: mask_path
dtype: string
splits:
- name: test
num_bytes: 133809772.46884431
num_examples: 1848
- name: train
num_bytes: 2541585890.6731553
num_examples: 35101
download_size: 2648655351
dataset_size: 2675395663.1419997
---
# Dataset Card for "scopes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
legacy107/bioasq10b-factoid | 2023-09-06T13:45:03.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"medical",
"region:us"
] | legacy107 | null | null | null | 1 | 23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: long_answer
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 3321906
num_examples: 1252
- name: test
num_bytes: 318200
num_examples: 166
download_size: 1758966
dataset_size: 3640106
task_categories:
- question-answering
language:
- en
tags:
- medical
pretty_name: BioASQ10b (factoid only)
size_categories:
- 1K<n<10K
---
# Dataset Card for "bioasq10b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
erfanzar/UltraChat-Mixin | 2023-09-07T11:28:29.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:translation",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:100M<n<1B",
"language:en",
"language:zh",
"code",
"region:us"
] | erfanzar | null | null | null | 6 | 23 | ---
language:
- en
- zh
size_categories:
- 100M<n<1B
task_categories:
- summarization
- text-generation
- translation
- question-answering
- conversational
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: dialog
sequence: string
- name: user
sequence: string
- name: assistant
sequence: string
- name: system
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 18717804334
num_examples: 1478011
download_size: 9422710168
dataset_size: 18717804334
tags:
- code
---
# Dataset Card for "UltraChat-Mixin"
# UltraChat-Mixin Dataset
## Overview
UltraChat-Mixin is a dataset created by the author as a mix of three datasets: 'stingning/ultrachat', 'jondurbin/airoboros-2.1', and 'erfanzar/GPT4-8K'. This dataset is designed for training conversational AI models.
## Dataset Configuration
The dataset is configured as follows:
```yaml
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: dialog
sequence: string
- name: user
sequence: string
- name: assistant
sequence: string
- name: system
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 18719148590
num_examples: 1478011
download_size: 9422934646
dataset_size: 18719148590
```
## Features
The UltraChat-Mixin dataset consists of the following features:
- **dialog**: A sequence of strings representing the conversation dialog.
- **user**: A sequence of strings representing the user's messages.
- **assistant**: A sequence of strings representing the assistant's responses.
- **system**: A string representing the system's message.
- **id**: An integer representing the unique identifier for each example.
## Splits
The dataset contains a single split:
- **train**: This split is used for training conversational AI models. It consists of 1,478,011 examples and has a size of approximately 18,719,148,590 bytes.
## Download Size
The download size of the UltraChat-Mixin dataset is approximately 9,422,934,646 bytes.
## Dataset Size
The total size of the UltraChat-Mixin dataset is approximately 18,719,148,590 bytes.
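As an illustrative sketch (not part of the original card), the train split can be streamed with the `datasets` library, with field names following the schema above:
```python
from datasets import load_dataset

# Stream to avoid downloading ~9.4 GB of shards up front.
ds = load_dataset("erfanzar/UltraChat-Mixin", split="train", streaming=True)

example = next(iter(ds))
print(example["system"])        # system message (string)
print(example["user"][0])       # first user turn (sequence of strings)
print(example["assistant"][0])  # first assistant reply
```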
Please note that the dataset configuration and statistics above are based on information supplied by the dataset author, Erfan. |
minoruskore/wlkjokj3454sd45sc45 | 2023-09-09T21:55:35.000Z | [
"license:other",
"region:us"
] | minoruskore | null | null | null | 0 | 23 | ---
license: other
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: user_id
dtype: int64
- name: name
dtype: string
- name: anime_id
dtype: int64
- name: anime
dtype: string
- name: rating
dtype: int64
splits:
- name: train
num_bytes: 1386784355
num_examples: 19460153
- name: test
num_bytes: 354541207
num_examples: 4865038
- name: train100k
num_bytes: 5716739
num_examples: 80000
- name: test100k
num_bytes: 1453191
num_examples: 20000
- name: train500k
num_bytes: 28547903
num_examples: 400000
- name: test500k
num_bytes: 7235060
num_examples: 100000
- name: train1kk
num_bytes: 57023319
num_examples: 800000
- name: test1kk
num_bytes: 14562005
num_examples: 200000
download_size: 832651093
dataset_size: 1855863779
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: train100k
path: data/train100k-*
- split: test100k
path: data/test100k-*
- split: train500k
path: data/train500k-*
- split: test500k
path: data/test500k-*
- split: train1kk
path: data/train1kk-*
- split: test1kk
path: data/test1kk-*
---
|
ashwincv0112/SAS_Python_Conversion | 2023-09-08T08:23:29.000Z | [
"region:us"
] | ashwincv0112 | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: SAS Code
dtype: string
- name: Converted Python Code
dtype: string
splits:
- name: train
num_bytes: 6362
num_examples: 30
download_size: 5247
dataset_size: 6362
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SAS_Python_Conversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vikp/hydra_inst_labeled | 2023-09-15T14:04:56.000Z | [
"region:us"
] | vikp | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: unique_conversation_id
dtype: string
- name: rendered
dtype: string
- name: dataset_id
dtype: string
- name: inst_prob
dtype: float64
splits:
- name: train
num_bytes: 4796996141
num_examples: 2527636
download_size: 0
dataset_size: 4796996141
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hydra_inst_labeled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
edbeeching/gia-dataset-parquet-debug | 2023-09-10T19:33:32.000Z | [
"region:us"
] | edbeeching | null | null | null | 0 | 23 | ---
dataset_info:
- config_name: atari-alien
features:
- name: image_observations
sequence: image
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
splits:
- name: test
num_bytes: 26566416.0
num_examples: 2
- name: train
num_bytes: 22539851.0
num_examples: 2
download_size: 49578302
dataset_size: 49106267.0
- config_name: atari-breakout
features:
- name: image_observations
sequence: image
- name: rewards
sequence: float32
- name: discrete_actions
sequence: int64
splits:
- name: test
num_bytes: 17689596.0
num_examples: 2
- name: train
num_bytes: 9524039.0
num_examples: 2
download_size: 25423698
dataset_size: 27213635.0
- config_name: mujoco-ant
features:
- name: continuous_observations
sequence:
sequence: float32
length: 27
- name: continuous_actions
sequence:
sequence: float32
length: 8
- name: rewards
sequence: float32
splits:
- name: test
num_bytes: 288024
num_examples: 2
- name: train
num_bytes: 288024
num_examples: 2
download_size: 858378
dataset_size: 576048
configs:
- config_name: atari-alien
data_files:
- split: test
path: atari-alien/test-*
- split: train
path: atari-alien/train-*
- config_name: atari-breakout
data_files:
- split: test
path: atari-breakout/test-*
- split: train
path: atari-breakout/train-*
- config_name: mujoco-ant
data_files:
- split: test
path: mujoco-ant/test-*
- split: train
path: mujoco-ant/train-*
---
# Dataset Card for "gia-dataset-parquet-debug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BEE-spoke-data/bees-v0 | 2023-09-13T19:59:27.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"bees",
"pollen",
"honey",
"bzz",
"region:us"
] | BEE-spoke-data | null | null | null | 0 | 23 | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- fill-mask
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15077487
num_examples: 48561
download_size: 8856859
dataset_size: 15077487
tags:
- bees
- pollen
- honey
- bzz
---
# Dataset Card for "bees-v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) 🐝 |
HydraLM/corpus_1_classifier_data | 2023-09-17T23:08:38.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1103014425
num_examples: 1472917
download_size: 669772750
dataset_size: 1103014425
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "corpus_1_classifier_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ura-hcmut/synthetic_reasoning_natural | 2023-09-19T02:35:59.000Z | [
"task_categories:text2text-generation",
"language:vi",
"license:cc-by-nc-sa-4.0",
"region:us"
] | ura-hcmut | null | null | null | 0 | 23 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text2text-generation
language:
- vi
configs:
- config_name: easy_gcp
data_files:
- split: train
path: synthetic_reasoning_gcp_natural_training.csv
- split: test
path: synthetic_reasoning_gcp_natural.csv
- config_name: easy_azr
data_files:
- split: train
path: synthetic_reasoning_azr_natural_training.csv
- split: test
path: synthetic_reasoning_azr_natural.csv
---
# Synthetic reasoning dataset
Original version:
- https://huggingface.co/datasets/lighteval/synthetic_reasoning_natural
Translation source code: https://github.com/martinakaduc/ura-llama/tree/main/dataset_scripts/custom_datasets |
mychen76/wildreceipts_ocr_train | 2023-09-21T10:10:56.000Z | [
"region:us"
] | mychen76 | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 132661697.28
num_examples: 1265
download_size: 118220818
dataset_size: 132661697.28
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wildreceipts_ocr_train"
Dataset Summary
-----------------------------
This is a collection of receipt images with enhanced text information, sourced from WildReceipt together with additional curated receipt images.
It contains the photo and OCR information for each image, including words, bounding boxes, labels, and key information extraction data in JSON and XML formats.
Features and Data Structure
-----------------------------
visual data
- Receipt images present complex layouts, which are well demonstrated in each image.
text data
- ocr_json - extracted receipt key information in JSON format
- ocr_boxes - up-to-date OCR scan results serving as ground truth, in raw format
- ocr_words - OCR-detected and recognized words from the receipt image
- ocr_labels - original mapping of label classes to text positions (may deviate from the actual OCR scan result)
- ocr_xml - key information in XML format
- ocr_kie - key information extracted from the receipt image
Languages
The language of the data is primarily English.
Data Instances
A data instance in this dataset represents an entry from the receipt collection which has been augmented.
Data Samples
-----------------------------
Image:
file_name: receipt_0.jpeg
Sample: ocr_words
-----------------------------
['CHO EUN', 'KOREAN RESTAURANT', '2621 ORANGETHORPE AVE,FULLERTON.', '714879-3574', 'THANKYOU!!', 'DATE12/30/2016 FRI', 'TIME19:19', 'BIBIM.OCTOPU T1', '$13.99', 'S-FOODP.CAKT1', '$14.99', 'PORK DUMPLIN T1', '$8.99', 'LA BEEF RIB T1', '$17.99', '4.00xITEMS', 'SUBTOTAL', '$55.96', 'TAX1', '$4.48', 'TOTAL', '$60.44', '$60AA']
Sample: ocr_json
-----------------------------
{"store_name": "CHOEUN KOREANRESTAURANT", "store_addr": "2621ORANGETHORPEAVE,FULLERTON.", "telephone": "(714)879-3574", "date": "12/30/2016FRI", "time": "19:19", "subtotal": "$55.96", "tax": "$4.48", "total": "$60.44", "ignore": " ", "tips": "", "line_items": [{"item_key": "", "item_name": "BIBIM.OCTOPUT1", "item_value": "$13.99", "item_quantity": "1"}, {"item_key": "", "item_name": "S-FOODP.CAKT1", "item_value": "$14.99", "item_quantity": "1"}, {"item_key": "", "item_name": "PORKDUMPLINT1", "item_value": "$8.99", "item_quantity": "1"}, {"item_key": "", "item_name": "LABEEFRIBT1", "item_value": "\uffe517.99", "item_quantity": "1"}, {"item_key": "4.00xITEMS", "item_name": "", "item_value": "", "item_quantity": ""}]}
Sample: ocr_xml
-----------------------------
<s_receipt><s_total>$60.44</s_total><s_tips></s_tips><s_time>19:19</s_time><s_telephone>(714)879-3574</s_telephone><s_tax>$4.48</s_tax><s_subtotal>$55.96</s_subtotal><s_store_name>CHOEUN KOREANRESTAURANT</s_store_name><s_store_addr>2621ORANGETHORPEAVE,FULLERTON.</s_store_addr><s_line_items><s_item_value>$13.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>BIBIM.OCTOPUT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$14.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>S-FOODP.CAKT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>$8.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>PORKDUMPLINT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value>¥17.99</s_item_value><s_item_quantity>1</s_item_quantity><s_item_name>LABEEFRIBT1</s_item_name><s_item_key></s_item_key><sep/><s_item_value></s_item_value><s_item_quantity></s_item_quantity><s_item_name></s_item_name><s_item_key>4.00xITEMS</s_item_key></s_line_items><s_ignore> </s_ignore><s_date>12/30/2016FRI</s_date></s_receipt>
Sample: ocr_kie
-----------------------------
[{'label': 'Store_name_value', 'transcription': 'CHOEUN'}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT'}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.'}, {'label': 'Tel_value', 'transcription': '(714)879-3574'}, {'label': 'Others', 'transcription': 'THANKYOU!!'}, {'label': 'Date_key', 'transcription': 'DATE'}, {'label': 'Date_value', 'transcription': '12/30/2016FRI'}, {'label': 'Time_value', 'transcription': '19:19'}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1'}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1'}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1'}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1'}, {'label': 'Prod_price_value', 'transcription': '$13.99'}, {'label': 'Prod_price_value', 'transcription': '$14.99'}, {'label': 'Prod_price_value', 'transcription': '$8.99'}, {'label': 'Prod_price_value', 'transcription': '¥17.99'}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS'}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL'}, {'label': 'Tax_key', 'transcription': 'TAX1'}, {'label': 'Total_key', 'transcription': 'TOTAL'}, {'label': 'Subtotal_value', 'transcription': '$55.96'}, {'label': 'Tax_value', 'transcription': '$4.48'}, {'label': 'Total_value', 'transcription': '$60.44'}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Ignore', 'transcription': ''}, {'label': 'Time_key', 'transcription': 'TIME'}]
Sample: ocr_labels
-----------------------------
[{'label': 'Store_name_value', 'transcription': 'CHOEUN', 'points': [[114.0, 19.0], [230.0, 19.0], [230.0, 1.0], [114.0, 1.0]]}, {'label': 'Store_name_value', 'transcription': 'KOREANRESTAURANT', 'points': [[97.0, 35.0], [236.0, 35.0], [236.0, 19.0], [97.0, 19.0]]}, {'label': 'Store_addr_value', 'transcription': '2621ORANGETHORPEAVE,FULLERTON.', 'points': [[29.0, 56.0], [295.0, 56.0], [295.0, 34.0], [29.0, 34.0]]}, {'label': 'Tel_value', 'transcription': '(714)879-3574', 'points': [[48.0, 73.0], [280.0, 73.0], [280.0, 54.0], [48.0, 54.0]]}, {'label': 'Others', 'transcription': 'THANKYOU!!', 'points': [[79.0, 92.0], [259.0, 92.0], [259.0, 74.0], [79.0, 74.0]]}, {'label': 'Date_key', 'transcription': 'DATE', 'points': [[22.0, 130.0], [61.0, 130.0], [61.0, 112.0], [22.0, 112.0]]}, {'label': 'Date_value', 'transcription': '12/30/2016FRI', 'points': [[70.0, 131.0], [192.0, 131.0], [192.0, 112.0], [70.0, 112.0]]}, {'label': 'Time_value', 'transcription': '19:19', 'points': [[263.0, 128.0], [307.0, 128.0], [307.0, 111.0], [263.0, 111.0]]}, {'label': 'Prod_item_value', 'transcription': 'BIBIM.OCTOPUT1', 'points': [[19.0, 168.0], [157.0, 168.0], [157.0, 149.0], [19.0, 149.0]]}, {'label': 'Prod_item_value', 'transcription': 'S-FOODP.CAKT1', 'points': [[17.0, 190.0], [158.0, 190.0], [158.0, 171.0], [17.0, 171.0]]}, {'label': 'Prod_item_value', 'transcription': 'PORKDUMPLINT1', 'points': [[14.0, 214.0], [158.0, 214.0], [158.0, 192.0], [14.0, 192.0]]}, {'label': 'Prod_item_value', 'transcription': 'LABEEFRIBT1', 'points': [[14.0, 236.0], [151.0, 236.0], [151.0, 215.0], [14.0, 215.0]]}, {'transcription': '$13.99', 'points': [[254.0, 168.0], [312.0, 168.0], [312.0, 149.0], [254.0, 149.0]]}, {'transcription': '$14.99', 'points': [[257.0, 189.0], [314.0, 189.0], [314.0, 170.0], [257.0, 170.0]]}, {'transcription': '$8.99', 'points': [[268.0, 212.0], [316.0, 212.0], [316.0, 191.0], [268.0, 191.0]]}, {'transcription': '¥17.99', 'points': [[261.0, 234.0], [318.0, 234.0], [318.0, 213.0], [261.0, 213.0]]}, {'label': 'Prod_item_key', 'transcription': '4.00xITEMS', 'points': [[118.0, 260.0], [217.0, 260.0], [217.0, 239.0], [118.0, 239.0]]}, {'label': 'Subtotal_key', 'transcription': 'SUBTOTAL', 'points': [[8.0, 285.0], [91.0, 285.0], [91.0, 264.0], [8.0, 264.0]]}, {'label': 'Tax_key', 'transcription': 'TAX1', 'points': [[8.0, 312.0], [49.0, 312.0], [49.0, 291.0], [8.0, 291.0]]}, {'label': 'Total_key', 'transcription': 'TOTAL', 'points': [[8.0, 336.0], [61.0, 336.0], [61.0, 316.0], [8.0, 316.0]]}, {'label': 'Subtotal_value', 'transcription': '$55.96', 'points': [[263.0, 283.0], [325.0, 283.0], [325.0, 260.0], [263.0, 260.0]]}, {'label': 'Tax_value', 'transcription': '$4.48', 'points': [[274.0, 308.0], [326.0, 308.0], [326.0, 286.0], [274.0, 286.0]]}, {'label': 'Total_value', 'transcription': '$60.44', 'points': [[267.0, 334.0], [328.0, 334.0], [328.0, 310.0], [267.0, 310.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[269.0, 347.0], [328.0, 347.0], [328.0, 336.0], [269.0, 336.0]]}, {'label': 'Ignore', 'transcription': '', 'points': [[11.0, 347.0], [50.0, 347.0], [50.0, 342.0], [11.0, 342.0]]}, {'label': 'Time_key', 'transcription': 'TIME', 'points': [[215.0, 128.0], [253.0, 128.0], [253.0, 112.0], [215.0, 112.0]]}]
Sample: ocr_boxes
-----------------------------
[[[[113.0, 0.0], [228.0, 3.0], [227.0, 20.0], [113.0, 17.0]], ('CHO EUN', 0.9466678500175476)], [[[96.0, 17.0], [236.0, 21.0], [236.0, 38.0], [96.0, 33.0]], ('KOREAN RESTAURANT', 0.9685913324356079)], [[[28.0, 32.0], [293.0, 37.0], [292.0, 56.0], [28.0, 51.0]], ('2621 ORANGETHORPE AVE,FULLERTON.', 0.951709508895874)], [[[48.0, 53.0], [279.0, 56.0], [279.0, 73.0], [47.0, 70.0]], ('714879-3574', 0.9919183850288391)], [[[81.0, 75.0], [256.0, 75.0], [256.0, 89.0], [81.0, 89.0]], ('THANKYOU!!', 0.9518492817878723)], [[[24.0, 113.0], [191.0, 113.0], [191.0, 127.0], [24.0, 127.0]], ('DATE12/30/2016 FRI', 0.9638745784759521)], [[[214.0, 111.0], [305.0, 109.0], [306.0, 125.0], [215.0, 128.0]], ('TIME19:19', 0.9523274898529053)], [[[18.0, 150.0], [156.0, 149.0], [156.0, 167.0], [18.0, 168.0]], ('BIBIM.OCTOPU T1', 0.9491282105445862)], [[[253.0, 147.0], [312.0, 144.0], [313.0, 166.0], [254.0, 168.0]], ('$13.99', 0.9204174876213074)], [[[16.0, 172.0], [157.0, 170.0], [157.0, 187.0], [16.0, 189.0]], ('S-FOODP.CAKT1', 0.9633263945579529)], [[[255.0, 168.0], [313.0, 168.0], [313.0, 189.0], [255.0, 189.0]], ('$14.99', 0.9975371956825256)], [[[15.0, 194.0], [157.0, 192.0], [157.0, 210.0], [15.0, 212.0]], ('PORK DUMPLIN T1', 0.9503927826881409)], [[[265.0, 190.0], [317.0, 188.0], [318.0, 209.0], [266.0, 212.0]], ('$8.99', 0.9171518087387085)], [[[12.0, 217.0], [149.0, 213.0], [149.0, 233.0], [12.0, 236.0]], ('LA BEEF RIB T1', 0.925663948059082)], [[[258.0, 213.0], [319.0, 210.0], [320.0, 232.0], [259.0, 235.0]], ('$17.99', 0.9976120591163635)], [[[119.0, 237.0], [217.0, 237.0], [217.0, 258.0], [119.0, 258.0]], ('4.00xITEMS', 0.9557921290397644)], [[[9.0, 264.0], [90.0, 262.0], [90.0, 284.0], [9.0, 286.0]], ('SUBTOTAL', 0.9968011379241943)], [[[263.0, 261.0], [324.0, 259.0], [325.0, 281.0], [264.0, 283.0]], ('$55.96', 0.9971590042114258)], [[[8.0, 289.0], [50.0, 289.0], [50.0, 311.0], [8.0, 311.0]], ('TAX1', 0.9973537921905518)], [[[273.0, 286.0], [326.0, 283.0], [328.0, 306.0], [274.0, 309.0]], ('$4.48', 0.991606593132019)], [[[9.0, 315.0], [61.0, 315.0], [61.0, 337.0], [9.0, 337.0]], ('TOTAL', 0.9985822439193726)], [[[266.0, 312.0], [328.0, 309.0], [328.0, 331.0], [267.0, 333.0]], ('$60.44', 0.9942547678947449)], [[[269.0, 334.0], [326.0, 334.0], [326.0, 347.0], [269.0, 347.0]], ('$60AA', 0.7674070596694946)]]
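As a consumption sketch, the rows can be loaded with the `datasets` library; note the assumption (not documented above) that the `text` column carries the JSON-formatted key information shown in the `ocr_json` sample:
```python
import json
from datasets import load_dataset

ds = load_dataset("mychen76/wildreceipts_ocr_train", split="train")

row = ds[0]
receipt_image = row["image"]    # PIL image of the receipt
info = json.loads(row["text"])  # assumption: text holds the ocr_json payload
print(info["store_name"], info["total"])
```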
Curation Rationale
-----------------------------
The curated dataset was created to provide a source of OCR-augmented text data for the author's own AI research use. The datapoints are intended primarily to enhance the core receipt image collection, which relies on key information extraction from receipt images.
Data Source and Preparation
-----------------------------
1) This dataset builds on the great work of WildReceipt, a large receipt dataset collected from document images with unseen templates in the wild. It contains 25 key information categories and about 69,000 text boxes in total. Official dataset: https://download.openmmlab.com/mmocr/data/wildreceipt.tar
2) OCR text data is generated by scanning each image with OCR techniques.
3) Additional post-processing converts the OCR results into XML, JSON, and word-list formats.
License:
Please check out the license of each subset in our curated dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nicolas-BZRD/Parallel_Global_Voices_English_French | 2023-09-21T15:40:05.000Z | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:fr",
"license:cc-by-3.0",
"parallel",
"parallel data",
"region:us"
] | Nicolas-BZRD | null | null | null | 0 | 23 | ---
license: cc-by-3.0
dataset_info:
features:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 89720129
num_examples: 342060
download_size: 57746668
dataset_size: 89720129
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- translation
language:
- en
- fr
tags:
- parallel
- parallel data
size_categories:
- 100K<n<1M
---
# Parallel Global Voices (English-French)
Parallel Global Voices EN-FR is a parallel corpus generated from the Global Voices multilingual group of websites (http://globalvoices.org/), where volunteers publish and translate news stories in more than 40 languages. The original content from the Global Voices websites is made available by the authors and publishers under a Creative Commons Attribution license. The content was crawled in July-August 2015 by researchers at the NLP group of the Institute for Language and Speech Processing. Documents that are translations of each other were paired on the basis of their link information. After document pairing, segment alignments were automatically extracted. The results of the automatic alignment at document and segment level are distributed under a Creative Commons Attribution license.
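A minimal loading sketch with the `datasets` library, where the `en`/`fr` field names follow the schema above:
```python
from datasets import load_dataset

ds = load_dataset("Nicolas-BZRD/Parallel_Global_Voices_English_French", split="train")

pair = ds[0]
print(pair["en"])  # English segment
print(pair["fr"])  # aligned French segment
```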
### Attribution details
Parallel Global Voices (English - French) was created for the European Language Resources Coordination Action (ELRC) (http://lr-coordination.eu/) by researchers at the NLP group of the Institute for Language and Speech Processing (http://www.ilsp.gr/) with primary data copyrighted by Parallel Global Voices (https://globalvoices.org/) and is licensed under "CC-BY 3.0" (https://creativecommons.org/licenses/by/3.0/). |
dim/forum_uristov_rf_prompts | 2023-09-21T23:06:22.000Z | [
"region:us"
] | dim | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 3043144
num_examples: 1849
download_size: 1343977
dataset_size: 3043144
---
# Dataset Card for "forum_uristov_rf_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
securecodegen/SecurePy150k | 2023-09-22T07:58:48.000Z | [
"license:mit",
"region:us"
] | securecodegen | null | null | null | 0 | 23 | ---
license: mit
---
|
JonasWeinert/in-intdev-jd | 2023-09-22T22:59:33.000Z | [
"task_categories:zero-shot-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | JonasWeinert | null | null | null | 0 | 23 | ---
task_categories:
- zero-shot-classification
language:
- en
pretty_name: skills in international development job descriptions
size_categories:
- 1K<n<10K
--- |
ASIRI25/cdrgen | 2023-09-30T01:52:52.000Z | [
"region:us"
] | ASIRI25 | null | null | null | 0 | 23 | Entry not found |
chrisgru/openassistant-guanaco | 2023-09-25T11:49:07.000Z | [
"region:us"
] | chrisgru | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 31656614
num_examples: 9846
download_size: 18390557
dataset_size: 31656614
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "openassistant-guanaco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MLNTeam-Unical/NFT-70M_text | 2023-09-28T15:33:32.000Z | [
"task_categories:time-series-forecasting",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:image-c... | MLNTeam-Unical | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: id
dtype: string
- name: emb
sequence: float32
splits:
- name: train
num_bytes: 98031916170
num_examples: 31749685
download_size: 9751089154
dataset_size: 98031916170
size_categories:
- 10M<n<100M
license: cc-by-nc-4.0
task_categories:
- time-series-forecasting
- text-classification
- feature-extraction
- text-generation
- zero-shot-classification
- text2text-generation
- sentence-similarity
- image-classification
- image-to-text
- text-to-image
- text-retrieval
language:
- en
tags:
- Non-fungible Tokens
- Crypto
- Web3
- Art
- Multimodal Learning
pretty_name: NFT-70M_text
---
# Dataset Card for "NFT-70M_text"
## Dataset summary
The *NFT-70M_text* dataset is a companion for our released [**NFT-70M_transactions**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_transactions) dataset,
which is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from [OpenSea](https://opensea.io).
As we also reported in the "Data anonymization" section of the dataset card of *NFT-70M_transactions*,
the textual contents associated with the NFT data were replaced by identifiers that map to numerical vectors: an encrypted
representation (i.e., embeddings) of the text contents obtained via the [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) neural network model.
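For reference, a minimal sketch of producing and comparing such embeddings with the same public checkpoint (this mirrors the cited model, but is not guaranteed to reproduce the authors' exact encoding pipeline):
```python
from sentence_transformers import SentenceTransformer, util

# Public checkpoint the card cites for encoding NFT text fields.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Hypothetical NFT description strings, for illustration only.
emb_a = model.encode("pixel art ape with laser eyes", normalize_embeddings=True)
emb_b = model.encode("ape wearing sunglasses", normalize_embeddings=True)
print(util.cos_sim(emb_a, emb_b))  # cosine similarity of the two embeddings
```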
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any findings drawn from the data provided within this repository are intended to support decision-making regarding actions on NFTs, not to replace human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: https://doi.org/10.1145/3539618.3591821
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: https://doi.org/10.48550/arXiv.2303.17031
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: https://doi.org/10.1145/3543507.3583520
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge the [OpenSea API](https://docs.opensea.io/reference/api-overview).
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.* |
jhuang14/Labeled_Data | 2023-09-28T08:32:36.000Z | [
"region:us"
] | jhuang14 | null | null | null | 0 | 23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': bustruck
'2': other
'3': rail
splits:
- name: train
num_bytes: 1652124.1515151516
num_examples: 92
- name: test
num_bytes: 718314.8484848485
num_examples: 40
download_size: 2372957
dataset_size: 2370439.0
---
# Dataset Card for "Labeled_Data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ashiyakatuka11/corpus1_dataset | 2023-10-03T12:01:15.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | null | 0 | 23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Session_ID
dtype: int64
- name: 'Speaker '
dtype: string
- name: UserID
dtype: string
- name: prev_Utterance
dtype: string
- name: Utterance
dtype: string
- name: prevUtt_TAG
dtype: string
- name: TAG
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 826401
num_examples: 4964
- name: test
num_bytes: 207557
num_examples: 1241
download_size: 426039
dataset_size: 1033958
---
# Dataset Card for "corpus1_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ashiyakatuka11/corpus2_dataset | 2023-10-03T12:01:21.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | null | 0 | 23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Corpus Utterance #'
dtype: int64
- name: 'Session Utterance #'
dtype: string
- name: Time
dtype: string
- name: User
dtype: string
- name: Utterance
dtype: string
- name: TAG
dtype: string
- name: Session ID
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 327599
num_examples: 2720
- name: test
num_bytes: 81553
num_examples: 681
download_size: 165842
dataset_size: 409152
---
# Dataset Card for "corpus2_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
woo2/mc2sql | 2023-09-28T13:49:18.000Z | [
"region:us"
] | woo2 | null | null | null | 0 | 23 | Entry not found |
AnikaBasu/CyberbullyingDataset | 2023-09-29T17:59:07.000Z | [
"region:us"
] | AnikaBasu | null | null | null | 1 | 23 | Entry not found |
VuongQuoc/60k_dataset_multichoice_384 | 2023-09-30T05:17:36.000Z | [
"region:us"
] | VuongQuoc | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: token_type_ids
sequence:
sequence: int8
- name: attention_mask
sequence:
sequence: int8
- name: label
dtype: int64
splits:
- name: train
num_bytes: 695952828
num_examples: 60000
- name: test
num_bytes: 2320000
num_examples: 200
download_size: 71338055
dataset_size: 698272828
---
# Dataset Card for "60k_dataset_multichoice_384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shossain/govreport-qa-512 | 2023-10-02T05:09:04.000Z | [
"region:us"
] | shossain | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 33340
num_examples: 5
download_size: 15680
dataset_size: 33340
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Spiderman01/Domestic_violence_info_support_fromposts | 2023-10-02T10:22:42.000Z | [
"region:us"
] | Spiderman01 | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: train
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 945794
num_examples: 273
download_size: 527319
dataset_size: 945794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Domestic_violence_info_support_fromposts"
This is a dataset of posts from domestic violence victims; each entry includes the content of the post and its categories of informational support.
There are 14 kinds of informational support needs in total:\
(1) Shelters/ DV center/ Agency\
(2) Legal\
(3) Childbearing\
(4) Police\
(5) Wound assessment/record\
(6) DV report procedure/Documentation\
(7) Safety planning\
(8) Finance\
(9) Housing\
(10) Healthcare information (counselling, psychiatrist, doctor etc.)\
(11) DV survivors’ network/ (Online) support groups\
(12) DV knowledge\
(13) Communication\
(14) Miscellaneous (Other)
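For downstream classification work, the categories above map naturally onto integer labels; the mapping below is a hypothetical sketch by the editor, not part of the original annotation scheme:
```python
# Hypothetical integer label map for the 14 information-support categories.
INFO_SUPPORT_LABELS = {
    1: "Shelters/DV center/Agency",
    2: "Legal",
    3: "Childbearing",
    4: "Police",
    5: "Wound assessment/record",
    6: "DV report procedure/Documentation",
    7: "Safety planning",
    8: "Finance",
    9: "Housing",
    10: "Healthcare information (counselling, psychiatrist, doctor etc.)",
    11: "DV survivors' network/(Online) support groups",
    12: "DV knowledge",
    13: "Communication",
    14: "Miscellaneous (Other)",
}
```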
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dloring1/Mini-10K-Recipes | 2023-10-02T21:40:08.000Z | [
"region:us"
] | Dloring1 | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: input
dtype: string
splits:
- name: train
num_bytes: 7307080.393135772
num_examples: 10000
download_size: 3870373
dataset_size: 7307080.393135772
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mini-10K-Recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hanifabdlh/quac-lamini-instruction-indo-10k-20k | 2023-10-05T04:36:15.000Z | [
"region:us"
] | hanifabdlh | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: context
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: instruction_source
dtype: string
splits:
- name: train
num_bytes: 4148177
num_examples: 10000
download_size: 2392334
dataset_size: 4148177
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "quac-lamini-instruction-indo-10k-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BirdL/DONOTUSEDATA-SideA | 2023-10-07T21:59:31.000Z | [
"not-for-all-audiences",
"region:us"
] | BirdL | null | null | null | 0 | 23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sexual
dtype: float64
- name: hate
dtype: float64
- name: violence
dtype: float64
- name: self-harm
dtype: float64
- name: sexual/minors
dtype: float64
- name: hate/threatening
dtype: float64
- name: violence/graphic
dtype: float64
splits:
- name: train
num_bytes: 8256999
num_examples: 30002
download_size: 6382984
dataset_size: 8256999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- not-for-all-audiences
---
# Dataset Card for "DONOTUSEDATA"
Studying the effects of harmful data on LLMs. Side A.
Filtered Subset of [kjj0/4chanpol-openaimod](https://huggingface.co/datasets/kjj0/4chanpol-openaimod)
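A sketch of thresholding the moderation scores above to select or exclude rows (the 0.5 cutoff is arbitrary, for illustration only):
```python
from datasets import load_dataset

ds = load_dataset("BirdL/DONOTUSEDATA-SideA", split="train")

categories = ["sexual", "hate", "violence", "self-harm",
              "sexual/minors", "hate/threatening", "violence/graphic"]
# Keep only rows where every moderation score stays below the cutoff.
benign = ds.filter(lambda row: all(row[c] < 0.5 for c in categories))
print(len(benign), "of", len(ds), "rows pass the threshold")
```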
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |