id (string, 2-115 chars) | author (string, 2-42 chars, nullable) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-8.87M) | likes (int64, 0-3.84k) | paperswithcode_id (string, 2-45 chars, nullable) | tags (list) | lastModified (timestamp[us, tz=UTC]) | createdAt (string, 24 chars) | key (string, 1 class) | created (timestamp[us]) | card (string, 1-1.01M chars) | embedding (list) | library_name (string, 21 classes) | pipeline_tag (string, 27 classes) | mask_token (null) | card_data (null) | widget_data (null) | model_index (null) | config (null) | transformers_info (null) | spaces (null) | safetensors (null) | transformersInfo (null) | modelId (string, 5-111 chars, nullable) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Otter-AI/MMVet | Otter-AI | 2023-10-06T15:44:13Z | 58 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-06T15:44:13Z | 2023-10-06T14:54:36.000Z | 2023-10-06T14:54:36 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VishalCh/sql-parsed | VishalCh | 2023-10-09T13:11:05Z | 58 | 1 | null | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:table-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"SQL",
"code",
"NLP",
"text-to-sql",
"context-sql",
"spider",
"wikisql",
"sqlglot",
"region:us"
] | 2023-10-09T13:11:05Z | 2023-10-09T13:07:21.000Z | 2023-10-09T13:07:21 | ---
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
- table-question-answering
language:
- en
tags:
- SQL
- code
- NLP
- text-to-sql
- context-sql
- spider
- wikisql
- sqlglot
pretty_name: sql-create-context
size_categories:
- 10K<n<100K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
phatjk/viquad | phatjk | 2023-10-14T07:30:19Z | 58 | 0 | null | [
"region:us"
] | 2023-10-14T07:30:19Z | 2023-10-14T07:30:17.000Z | 2023-10-14T07:30:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: ans_start
dtype: int64
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 24944836
num_examples: 19240
- name: test
num_bytes: 5091238
num_examples: 4065
download_size: 4943526
dataset_size: 30036074
---
# Dataset Card for "viquad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5953199863433838,
-0.4597916305065155,
0.1155574694275856,
0.23821499943733215,
-0.34080055356025696,
-0.0022382079623639584,
0.33218997716903687,
-0.013704433105885983,
0.888518750667572,
0.525883138179779,
-0.6547464728355408,
-0.8475683927536011,
-0.43659159541130066,
-0.473832130432... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zkdeng/combinedSpidersBalanced | zkdeng | 2023-10-19T01:35:11Z | 58 | 0 | null | [
"region:us"
] | 2023-10-19T01:35:11Z | 2023-10-19T00:46:30.000Z | 2023-10-19T00:46:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Acantholycosa_lignaria
'1': Aculepeira_ceropegia
'2': Agalenatea_redii
'3': Agelena_labyrinthica
'4': Aglaoctenus_castaneus
'5': Aglaoctenus_lagotis
'6': Allocosa_funerea
'7': Allotrochosina_schauinslandi
'8': Alopecosa_albofasciata
'9': Alopecosa_barbipes
'10': Alopecosa_cuneata
'11': Alopecosa_inquilina
'12': Alopecosa_kochi
'13': Alopecosa_pulverulenta
'14': Anahita_punctulata
'15': Anasaitis_canosa
'16': Ancylometes_bogotensis
'17': Ancylometes_concolor
'18': Ancylometes_rufus
'19': Anoteropsis_hilaris
'20': Anoteropsis_litoralis
'21': Anyphaena_accentuata
'22': Aphonopelma_hentzi
'23': Araneus_diadematus
'24': Araneus_marmoreus
'25': Araneus_quadratus
'26': Araneus_trifolium
'27': Araniella_displicata
'28': Arctosa_cinerea
'29': Arctosa_leopardus
'30': Arctosa_littoralis
'31': Arctosa_perita
'32': Arctosa_personata
'33': Argiope_argentata
'34': Argiope_aurantia
'35': Argiope_bruennichi
'36': Argiope_keyserlingi
'37': Argiope_lobata
'38': Argiope_trifasciata
'39': Asthenoctenus_borellii
'40': Attulus_fasciger
'41': Aulonia_albimana
'42': Austracantha_minax
'43': Badumna_longinqua
'44': Carrhotus_xanthogramma
'45': Centroctenus_brevipes
'46': Cheiracanthium_erraticum
'47': Cheiracanthium_gracile
'48': Cheiracanthium_inclusum
'49': Cheiracanthium_mildei
'50': Cheiracanthium_punctorium
'51': Colonus_hesperus
'52': Colonus_sylvanus
'53': Ctenus_amphora
'54': Ctenus_hibernalis
'55': Ctenus_medius
'56': Ctenus_ornatus
'57': Cupiennius_coccineus
'58': Cupiennius_getazi
'59': Cupiennius_salei
'60': Cyclosa_conica
'61': Cyrtophora_citricola
'62': Diapontia_uruguayensis
'63': Dolomedes_albineus
'64': Dolomedes_minor
'65': Dolomedes_scriptus
'66': Dolomedes_tenebrosus
'67': Dolomedes_triton
'68': Dysdera_crocata
'69': Ebrechtella_tricuspidata
'70': Enoplognatha_ovata
'71': Eratigena_agrestis
'72': Eratigena_duellica
'73': Eriophora_ravilla
'74': Eris_militaris
'75': Evarcha_arcuata
'76': Gasteracantha_cancriformis
'77': Geolycosa_vultuosa
'78': Gladicosa_gulosa
'79': Gladicosa_pulchra
'80': Habronattus_pyrrithrix
'81': Hasarius_adansoni
'82': Helpis_minitabunda
'83': Hentzia_mitrata
'84': Hentzia_palmarum
'85': Herpyllus_ecclesiasticus
'86': Heteropoda_venatoria
'87': Hippasa_holmerae
'88': Hogna_antelucana
'89': Hogna_baltimoriana
'90': Hogna_bivittata
'91': Hogna_carolinensis
'92': Hogna_crispipes
'93': Hogna_frondicola
'94': Hogna_gumia
'95': Hogna_radiata
'96': Holocnemus_pluchei
'97': Kukulcania_hibernalis
'98': Lampona_cylindrata
'99': Larinioides_cornutus
'100': Larinioides_sclopetarius
'101': Latrodectus_bishopi
'102': Latrodectus_curacaviensis
'103': Latrodectus_geometricus
'104': Latrodectus_hasselti
'105': Latrodectus_hesperus
'106': Latrodectus_katipo
'107': Latrodectus_mactans
'108': Latrodectus_mirabilis
'109': Latrodectus_renivulvatus
'110': Latrodectus_tredecimguttatus
'111': Latrodectus_variolus
'112': Leucauge_argyra
'113': Leucauge_argyrobapta
'114': Leucauge_dromedaria
'115': Leucauge_venusta
'116': Loxosceles_amazonica
'117': Loxosceles_deserta
'118': Loxosceles_laeta
'119': Loxosceles_reclusa
'120': Loxosceles_rufescens
'121': Loxosceles_tenochtitlan
'122': Loxosceles_yucatana
'123': Lycosa_erythrognatha
'124': Lycosa_hispanica
'125': Lycosa_pampeana
'126': Lycosa_praegrandis
'127': Lycosa_singoriensis
'128': Lycosa_tarantula
'129': Lyssomanes_viridis
'130': Maevia_inclemens
'131': Mangora_acalypha
'132': Maratus_griseus
'133': Marpissa_muscosa
'134': Mecynogea_lemniscata
'135': Menemerus_bivittatus
'136': Menemerus_semilimbatus
'137': Micrathena_gracilis
'138': Micrathena_sagittata
'139': Micrommata_virescens
'140': Missulena_bradleyi
'141': Missulena_occatoria
'142': Misumena_vatia
'143': Misumenoides_formosipes
'144': Misumessus_oblongus
'145': Naphrys_pulex
'146': Neoscona_arabesca
'147': Neoscona_crucifera
'148': Neoscona_oaxacensis
'149': Nephila_pilipes
'150': Neriene_radiata
'151': Nesticodes_rufipes
'152': Nuctenea_umbratica
'153': Oxyopes_salticus
'154': Oxyopes_scalaris
'155': Paraphidippus_aurantius
'156': Parasteatoda_tepidariorum
'157': Paratrochosina_amica
'158': Pardosa_amentata
'159': Pardosa_lapidicina
'160': Pardosa_mercurialis
'161': Pardosa_moesta
'162': Pardosa_wagleri
'163': Peucetia_viridans
'164': Phidippus_audax
'165': Phidippus_clarus
'166': Phidippus_johnsoni
'167': Phidippus_putnami
'168': Philaeus_chrysops
'169': Philodromus_dispar
'170': Pholcus_phalangioides
'171': Phoneutria_boliviensis
'172': Phoneutria_depilata
'173': Phoneutria_fera
'174': Phoneutria_nigriventer
'175': Phoneutria_pertyi
'176': Phoneutria_reidyi
'177': Pirata_piraticus
'178': Pisaura_mirabilis
'179': Pisaurina_mira
'180': Platycryptus_californicus
'181': Platycryptus_undatus
'182': Plebs_eburnus
'183': Plexippus_paykulli
'184': Portacosa_cinerea
'185': Rabidosa_hentzi
'186': Rabidosa_punctulata
'187': Rabidosa_rabida
'188': Salticus_scenicus
'189': Sassacus_vitis
'190': Schizocosa_avida
'191': Schizocosa_malitiosa
'192': Schizocosa_mccooki
'193': Scytodes_thoracica
'194': Sicarius_thomisoides
'195': Socca_pustulosa
'196': Sosippus_californicus
'197': Steatoda_grossa
'198': Steatoda_nobilis
'199': Steatoda_triangulosa
'200': Synema_globosum
'201': Thomisus_onustus
'202': Tigrosa_annexa
'203': Tigrosa_aspersa
'204': Tigrosa_georgicola
'205': Tigrosa_helluo
'206': Trichonephila_clavata
'207': Trichonephila_clavipes
'208': Trichonephila_edulis
'209': Trichonephila_plumipes
'210': Trochosa_ruricola
'211': Trochosa_sepulchralis
'212': Trochosa_terricola
'213': Tropicosa_moesta
'214': Venator_immansuetus
'215': Venator_spenceri
'216': Venatrix_furcillata
'217': Verrucosa_arenata
'218': Wadicosa_fidelis
'219': Xerolycosa_miniata
'220': Xerolycosa_nemoralis
'221': Zoropsis_spinimana
'222': Zygiella_x-notata
splits:
- name: train
num_bytes: 6290926802.784
num_examples: 269172
download_size: 5634209308
dataset_size: 6290926802.784
---
# Dataset Card for "combinedSpidersBalanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6495504379272461,
-0.2807183861732483,
-0.07170408219099045,
0.39155423641204834,
-0.4454924464225769,
0.1869812309741974,
0.1550508737564087,
-0.2208620011806488,
1.0811725854873657,
0.7067230939865112,
-0.9106855392456055,
-0.6692607402801514,
-0.6716372966766357,
-0.5073955059051514,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Isamu136/penetration_testing_scraped_dataset | Isamu136 | 2023-10-30T07:35:25Z | 58 | 1 | null | [
"region:us"
] | 2023-10-30T07:35:25Z | 2023-10-29T17:44:34.000Z | 2023-10-29T17:44:34 | ---
dataset_info:
features:
- name: text
dtype: string
- name: embedding
sequence: float32
- name: tokens
sequence: int64
- name: database
dtype: string
- name: file
dtype: string
- name: chunk
dtype: int64
splits:
- name: train
num_bytes: 1005293572
num_examples: 107542
download_size: 663206603
dataset_size: 1005293572
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "penetration_testing_scraped_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5156393051147461,
-0.6197367310523987,
0.11665841937065125,
0.31207552552223206,
-0.03218816593289375,
0.07461009174585342,
0.5326234698295593,
0.029686391353607178,
0.6598290205001831,
0.5220224261283875,
-0.5869645476341248,
-0.9479371905326843,
-0.4177311062812805,
-0.314866453409194... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deep-learning-analytics/arxiv_small_nougat | deep-learning-analytics | 2023-10-30T18:02:58Z | 58 | 0 | null | [
"region:us"
] | 2023-10-30T18:02:58Z | 2023-10-30T17:51:26.000Z | 2023-10-30T17:51:26 | ---
dataset:
name: arxiv_small_nougat
description: A dataset containing 108 recent papers from arXiv related to LLM (Large Language Models) and Transformers, parsed and processed using Meta's Nougat model to preserve tables and math equations.
license: [MIT]
task_categories: [Natural Language Processing, Machine Learning]
languages: [English]
size: 108 papers
download_size: [21.9MB]
---
## Dataset Description
The "arxiv_small_nougat" dataset is a collection of 108 recent papers sourced from arXiv, focusing on topics related to Large Language Models (LLM) and Transformers. These papers have been meticulously processed and parsed using Meta's Nougat model, which is specifically designed to retain the integrity of complex elements such as tables and mathematical equations.
## Data Format
The dataset contains the parsed content of the selected papers, with special attention given to the preservation of formatting, tables, and mathematical expressions. Each paper is provided as plain text.
## Usage
Researchers, academics, and natural language processing practitioners can leverage this dataset for various tasks related to LLM and Transformers, including:
- Language modeling
- Text summarization
- Information retrieval
- Table and equation extraction
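For any of these tasks, a minimal loading sketch is shown below; it assumes the dataset is read through the `datasets` library with a default `train` split, and the column layout is inspected rather than assumed, since it is not documented above.
```python
from datasets import load_dataset

# Load the Nougat-parsed papers; the split name is an assumption.
papers = load_dataset("deep-learning-analytics/arxiv_small_nougat", split="train")

print(len(papers))        # expected to be on the order of 108 papers
print(papers[0].keys())   # inspect the actual column names before relying on them
```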
## Acknowledgments
We acknowledge the arXiv platform for providing open access to a wealth of research papers in the field of machine learning and natural language processing.
## License
MIT
---
| [
-0.3871130347251892,
-0.7736002802848816,
0.40916532278060913,
0.17220322787761688,
-0.20367372035980225,
-0.25077563524246216,
-0.059655070304870605,
-0.05889071151614189,
0.03701486438512802,
0.9217371344566345,
-0.5459408164024353,
-0.5452658534049988,
-0.43877872824668884,
0.2735658884... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tourist800/orkg-llama2 | tourist800 | 2023-11-17T10:55:11Z | 58 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-17T10:55:11Z | 2023-11-08T14:44:25.000Z | 2023-11-08T14:44:25 | ---
license: mit
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigheiniuJ/JimmyLuAugRestChat | bigheiniuJ | 2023-11-09T16:12:58Z | 58 | 0 | null | [
"region:us"
] | 2023-11-09T16:12:58Z | 2023-11-09T16:12:57.000Z | 2023-11-09T16:12:57 | ---
dataset_info:
features:
- name: output
dtype: string
- name: input
dtype: string
- name: seed
dtype: string
- name: split
dtype: string
- name: task
dtype: string
- name: options
sequence: string
- name: id
dtype: int64
- name: aug_type
dtype: string
- name: aug_time
dtype: int64
splits:
- name: train
num_bytes: 71287951
num_examples: 143531
download_size: 11818939
dataset_size: 71287951
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "JimmyLuAugRestChat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5147846341133118,
-0.42461955547332764,
0.10873156785964966,
0.17445343732833862,
-0.1671869456768036,
0.2579178214073181,
0.2087937444448471,
-0.3085699677467346,
1.204540729522705,
0.4300644099712372,
-0.9361323118209839,
-0.6063437461853027,
-0.39580148458480835,
-0.42946118116378784... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lhallee/HumanPPI_reg | lhallee | 2023-11-14T20:36:00Z | 58 | 0 | null | [
"region:us"
] | 2023-11-14T20:36:00Z | 2023-11-14T20:35:55.000Z | 2023-11-14T20:35:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seqs
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 51590813
num_examples: 26319
- name: valid
num_bytes: 475534
num_examples: 234
- name: test
num_bytes: 343668
num_examples: 180
download_size: 28561787
dataset_size: 52410015
---
# Dataset Card for "HumanPPI_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5544173121452332,
-0.05042664334177971,
-0.05332258716225624,
0.3210531771183014,
-0.1189948171377182,
0.019466890022158623,
0.1516941785812378,
-0.25206780433654785,
0.7768818140029907,
0.35991397500038147,
-0.8042298555374146,
-0.6217067837715149,
-0.2929861545562744,
-0.0223066993057... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
monology/VMware-open-instruct-higgsfield | monology | 2023-11-20T19:39:05Z | 58 | 0 | null | [
"region:us"
] | 2023-11-20T19:39:05Z | 2023-11-15T23:28:41.000Z | 2023-11-15T23:28:41 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 91035395
num_examples: 142622
download_size: 42482187
dataset_size: 91035395
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "VMware-open-instruct-higgsfield"
Just a port of the [VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct) dataset to [Higgsfield's prompt-completion format](https://github.com/higgsfield-ai/higgsfield/blob/main/tutorials/prompt_completion.ipynb). | [
-0.6796625852584839,
-0.3258512318134308,
0.5663166642189026,
-0.18504203855991364,
0.03015245497226715,
-0.10522168129682541,
0.12271381914615631,
0.006382261868566275,
0.3785102963447571,
0.6875301003456116,
-1.0326297283172607,
-0.7110127210617065,
-0.048524919897317886,
-0.074381455779... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Elfsong/Wino_Bias | Elfsong | 2023-11-19T07:36:01Z | 58 | 0 | null | [
"region:us"
] | 2023-11-19T07:36:01Z | 2023-11-16T17:37:05.000Z | 2023-11-16T17:37:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: reference
dtype: string
- name: gender
dtype: string
- name: polarity
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 335127
num_examples: 1584
- name: test
num_bytes: 346559
num_examples: 1584
download_size: 217833
dataset_size: 681686
---
# Dataset Card for "Wino_Bias"
Winograd-schema dataset for detecting gender bias (WinoBias)
More info can be found [here](https://uclanlp.github.io/corefBias/overview)
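A minimal loading sketch, assuming the feature names from the YAML schema above (the exact label strings for `gender` and `polarity` are not documented here, so they are inspected rather than hard-coded):
```python
from datasets import load_dataset

wino = load_dataset("Elfsong/Wino_Bias")
train = wino["train"]

# Inspect the attribute values before filtering by them.
print(sorted({example["gender"] for example in train}))
print(sorted({example["polarity"] for example in train}))
```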
| [
-0.45797523856163025,
-0.18359272181987762,
0.13462777435779572,
0.009624145925045013,
0.11523575335741043,
0.16345497965812683,
0.15830551087856293,
-0.2664521336555481,
0.14631371200084686,
0.6090264320373535,
-0.8893539905548096,
-0.5991278290748596,
-0.4969451129436493,
-0.324303507804... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kaniam/invoice | kaniam | 2023-11-28T09:29:33Z | 58 | 0 | null | [
"region:us"
] | 2023-11-28T09:29:33Z | 2023-11-20T07:26:10.000Z | 2023-11-20T07:26:10 | Invoice Data | [
0.23559381067752838,
-0.026568839326500893,
0.2322060465812683,
0.20299144089221954,
-0.2014259248971939,
-0.14836472272872925,
0.5650803446769714,
-0.4993217885494232,
0.20321626961231232,
1.0697394609451294,
-0.1809057742357254,
-0.3411869704723358,
-0.3142402172088623,
-0.34945282340049... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
katanaml/cord | katanaml | 2022-03-06T15:02:45Z | 57 | 1 | null | [
"region:us"
] | 2022-03-06T15:02:45Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
The CORD dataset is cloned from the [clovaai](https://github.com/clovaai/cord) GitHub repo.
- Box coordinates are normalized against image width/height
- Labels with very few occurrences are replaced with O:
```
replacing_labels = ['menu.etc', 'menu.itemsubtotal',
'menu.sub_etc', 'menu.sub_unitprice',
'menu.vatyn', 'void_menu.nm',
'void_menu.price', 'sub_total.othersvc_price']
```
For more info, check [Sparrow](https://github.com/katanaml/sparrow).
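As a sketch of the normalization step described above, one common convention (used by LayoutLM-style models) rescales each `(x0, y0, x1, y1)` box by the page size into a 0-1000 integer range; the helper below is illustrative only, and the exact scale used in this dataset is an assumption:
```python
def normalize_box(box, width, height):
    """Scale an (x0, y0, x1, y1) pixel box into the 0-1000 range used by LayoutLM-style models."""
    x0, y0, x1, y1 = box
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

# Example: a box on a 1000x2000 pixel receipt image
print(normalize_box((100, 200, 150, 220), width=1000, height=2000))  # [100, 100, 150, 110]
```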
## Citation
### CORD: A Consolidated Receipt Dataset for Post-OCR Parsing
```
@article{park2019cord,
title={CORD: A Consolidated Receipt Dataset for Post-OCR Parsing},
author={Park, Seunghyun and Shin, Seung and Lee, Bado and Lee, Junyeop and Surh, Jaeheung and Seo, Minjoon and Lee, Hwalsuk},
booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
year={2019}
}
```
### Post-OCR parsing: building simple and robust parser via BIO tagging
```
@article{hwang2019post,
title={Post-OCR parsing: building simple and robust parser via BIO tagging},
author={Hwang, Wonseok and Kim, Seonghyeon and Yim, Jinyeong and Seo, Minjoon and Park, Seunghyun and Park, Sungrae and Lee, Junyeop and Lee, Bado and Lee, Hwalsuk},
booktitle={Document Intelligence Workshop at Neural Information Processing Systems},
year={2019}
}
``` | [
-0.3395134210586548,
-0.33613070845603943,
0.3041880428791046,
-0.3810166120529175,
-0.5167205929756165,
-0.004268249962478876,
-0.05042753368616104,
-0.5062779784202576,
0.3379923403263092,
0.3505150079727173,
-0.20901213586330414,
-0.7882446050643921,
-0.2966754138469696,
0.2221048623323... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nielsr/FUNSD_layoutlmv2 | nielsr | 2022-10-25T09:51:20Z | 57 | 4 | funsd | [
"language:en",
"arxiv:1905.13538",
"region:us"
] | 2022-10-25T09:51:20Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language:
- en
paperswithcode_id: funsd
---
# Dataset Card for "FUNSD"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
The [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset, with one difference compared to the original dataset: each document image is resized to 224x224.
The FUNSD dataset is a collection of annotated forms.
This dataset loading script is taken from the [official LayoutLMv2 implementation](https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/funsd.py), and updated to not include any Detectron2 dependencies.
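A minimal loading sketch follows; because the repository ships its own loading script, recent `datasets` versions may require `trust_remote_code=True`, and the field names are inspected rather than assumed:
```python
from datasets import load_dataset

# Load the resized (224x224) FUNSD variant; trust_remote_code is an assumption
# that applies to recent datasets versions when a loading script is present.
funsd = load_dataset("nielsr/FUNSD_layoutlmv2", trust_remote_code=True)

example = funsd["train"][0]
print(example.keys())  # inspect the available fields rather than assuming them
```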
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### conll2000
- **Size of downloaded dataset files:** 3.32 MB
- **Size of the generated dataset:** 6.25 MB
- **Total amount of disk used:** 9.57 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"chunk_tags": [11, 13, 11, 12, 21, 22, 22, 22, 22, 11, 12, 12, 17, 11, 12, 13, 11, 0, 1, 13, 11, 11, 0, 21, 22, 22, 11, 12, 12, 13, 11, 12, 12, 11, 12, 12, 0],
"id": "0",
"pos_tags": [19, 14, 11, 19, 39, 27, 37, 32, 34, 11, 15, 19, 14, 19, 22, 14, 20, 5, 15, 14, 19, 19, 5, 34, 32, 34, 11, 15, 19, 14, 20, 9, 20, 24, 15, 22, 6],
"tokens": "[\"Confidence\", \"in\", \"the\", \"pound\", \"is\", \"widely\", \"expected\", \"to\", \"take\", \"another\", \"sharp\", \"dive\", \"if\", \"trade\", \"figur..."
}
```
### Data Fields
The data fields are the same among all splits.
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/abs-1905-13538,
author = {Guillaume Jaume and
Hazim Kemal Ekenel and
Jean{-}Philippe Thiran},
title = {{FUNSD:} {A} Dataset for Form Understanding in Noisy Scanned Documents},
journal = {CoRR},
volume = {abs/1905.13538},
year = {2019},
url = {http://arxiv.org/abs/1905.13538},
archivePrefix = {arXiv},
eprint = {1905.13538},
timestamp = {Mon, 03 Jun 2019 13:42:33 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1905-13538.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@vblagoje](https://github.com/vblagoje), [@jplu](https://github.com/jplu) for adding this dataset. | [
-0.49900010228157043,
-0.47346609830856323,
0.21269747614860535,
0.18384680151939392,
-0.25745055079460144,
-0.09904734790325165,
-0.2739960551261902,
-0.4028224050998688,
0.6366791725158691,
0.4554176926612854,
-0.8608560562133789,
-0.8622919917106628,
-0.44880762696266174,
-0.08511342108... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nreimers/trec-covid | nreimers | 2022-03-23T12:55:44Z | 57 | 0 | null | [
"region:us"
] | 2022-03-23T12:55:44Z | 2022-03-22T22:14:03.000Z | 2022-03-22T22:14:03 | This is the corpus file from the [BEIR benchmark](https://github.com/beir-cellar/beir) for the [TREC-COVID 19 dataset](https://ir.nist.gov/trec-covid/).
| [
-0.3697713315486908,
-0.7541778087615967,
-0.17206047475337982,
0.04526711627840996,
0.027022765949368477,
0.42890307307243347,
0.12330055236816406,
-0.2388947457075119,
0.2164037674665451,
0.6954271793365479,
-0.3840341567993164,
-0.6128011345863342,
-0.26017460227012634,
0.06689208745956... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qg_subjqa | lmqg | 2022-12-02T18:56:32Z | 57 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:subjqa",
"language:en",
"license:cc-by-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-02T18:56:32Z | 2022-05-11T11:16:13.000Z | 2022-05-11T11:16:13 | ---
license: cc-by-4.0
pretty_name: SubjQA for question generation
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: subjqa
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_subjqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
It is a modified version of [SubjQA](https://github.com/megagonlabs/SubjQA) prepared for the question generation (QG) task.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset can be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"question": "How is book?",
"paragraph": "I am giving "Gone Girl" 3 stars, but only begrudgingly. In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars, especially a book written by an author I already respect. And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read.Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought.The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes.But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared.Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
"answer": "any book that takes me 3 months and 20 different tries to read is not worth 3 stars",
"sentence": "In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars , especially a book written by an author I already respect.",
"paragraph_sentence": "I am giving "Gone Girl" 3 stars, but only begrudgingly. <hl> In my mind, any book that takes me 3 months and 20 different tries to read is not worth 3 stars , especially a book written by an author I already respect. <hl> And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read. Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought. The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes. But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared. Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
"paragraph_answer": "I am giving "Gone Girl" 3 stars, but only begrudgingly. In my mind, <hl> any book that takes me 3 months and 20 different tries to read is not worth 3 stars <hl>, especially a book written by an author I already respect. And I am not kidding, for me the first half of "Gone Girl" was a PURE TORTURE to read.Amy Dunn disappears on the day of her 5th wedding anniversary. All gradually uncovered evidence suggests that her husband, Nick, is somehow involved. Did he kill her? Was she kidnapped? What happened to Amy? One thing is clear, Nick and Amy's marriage wasn't as perfect as everybody thought.The first part of the novel is all about the investigation into Amy's disappearance, slow unraveling of Nick's dirty secrets, reminiscing about the troubled history of Nick and Amy's marriage as told in Amy's hidden diary. I strained and strained my brain trying to understand why this chunk of Gone Girl had no appeal to me whatsoever. The only answer I have is this: I am really not into reading about rich white people's problems. You want to whine to me about your dwindling trust fund? Losing your cushy New York job? Moving south and "only" renting a mansion there? Being unhappy because you have too much free time on your hands and you are used to only work as a hobby? You want to make fun of your lowly, un-posh neighbors and their casseroles? Well, I am not interested. I'd rather read about someone not necessarily likable, but at least worthy of my empathy, not waste my time on self-centered, spoiled, pathetic people who don't know what real problems are. Granted, characters in Flynn's previous novels ("Sharp Objects" and "Dark Places") are pretty pathetic and and at times revolting too, but I always felt some strange empathy towards them, not annoyance and boredom, like I felt reading about Amy and Nick's marriage voes.But then second part, with its wicked twist, changed everything. The story became much more exciting, dangerous and deranged. The main characters revealed sides to them that were quite shocking and VERY entertaining. I thought the Gillian Flynn I knew before finally unleashed her talent for writing utterly unlikable and crafty women. THEN I got invested in the story, THEN I cared.Was it too little too late though? I think it was. Something needed to be done to make "Gone Girl" a better read. Make it shorter? Cut out first part completely? I don't know. But because of my uneven experience with this novel I won't be able to recommend "Gone Girl" as readily as I did Flynn's earlier novels, even though I think this horror marriage story (it's not a true mystery, IMO) has some brilliantly written psycho goodness in it and an absolutely messed up ending that many loathed but I LOVED. I wish it didn't take so much time and patience to get to all of that...",
"sentence_answer": "In my mind, <hl> any book that takes me 3 months and 20 different tries to read is not worth 3 stars <hl> , especially a book written by an author I already respect.",
"paragraph_id": "1b7cc3db9ec681edd253a41a2785b5a9",
"question_subj_level": 1,
"answer_subj_level": 1,
"domain": "books"
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, the same as the paragraph but with the answer highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, the same as the paragraph but with the sentence containing the answer highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, the same as the sentence but with the answer highlighted by a special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is assumed to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, and
the `paragraph_sentence` feature is for sentence-aware question generation.
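As a brief illustration of the answer-aware setting, the sketch below pairs `paragraph_answer` (model input) with `question` (generation target); the split name matches the table below, while the downstream model itself is left out:
```python
from datasets import load_dataset

data = load_dataset("lmqg/qg_subjqa")

# Answer-aware question generation: the paragraph with the <hl>-highlighted answer
# is the input, and the question is the target to generate.
sample = data["train"][0]
source_text = sample["paragraph_answer"]
target_text = sample["question"]
print(source_text[:200], "->", target_text)
```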
### Data Splits
| name |train|validation|test |
|-------------|----:|---------:|----:|
|default (all)|4437 | 659 |1489 |
| books |636 | 91 |190 |
| electronics |696 | 98 |237 |
| movies |723 | 100 |153 |
| grocery |686 | 100 |378 |
| restaurants |822 | 128 |135 |
| tripadvisor |874 | 142 |396 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.43886348605155945,
-0.7612143754959106,
0.8233140110969543,
-0.22928668558597565,
-0.503141462802887,
-0.3253493309020996,
0.16841746866703033,
-0.19631189107894897,
0.23627233505249023,
0.4321618974208832,
-0.6978715062141418,
-0.28062164783477783,
-0.3895891010761261,
0.15053161978721... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ywchoi/pubmed_abstract_7 | ywchoi | 2022-09-13T01:12:17Z | 57 | 0 | null | [
"region:us"
] | 2022-09-13T01:12:17Z | 2022-09-13T01:10:37.000Z | 2022-09-13T01:10:37 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nick-carroll1/lyrics_dataset | nick-carroll1 | 2022-10-23T17:56:11Z | 57 | 0 | null | [
"region:us"
] | 2022-10-23T17:56:11Z | 2022-10-22T19:59:04.000Z | 2022-10-22T19:59:04 | ---
dataset_info:
features:
- name: Artist
dtype: string
- name: Song
dtype: string
- name: Lyrics
dtype: string
splits:
- name: train
num_bytes: 371464
num_examples: 237
download_size: 166829
dataset_size: 371464
---
# Dataset Card for "lyrics_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.538036048412323,
-0.07396311312913895,
0.14324937760829926,
0.3774208128452301,
-0.06813681870698929,
0.14562064409255981,
-0.046564847230911255,
-0.18230736255645752,
0.9003204107284546,
0.6054000854492188,
-1.0866379737854004,
-0.9446256756782532,
-0.6803431510925293,
-0.1647995561361... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mertcobanov/nature128_1k | mertcobanov | 2023-01-06T10:37:33Z | 57 | 1 | null | [
"region:us"
] | 2023-01-06T10:37:33Z | 2023-01-06T10:35:28.000Z | 2023-01-06T10:35:28 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': 07968_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hardenbergia_violacea
'1': 07969_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hedysarum_alpinum
'2': 07970_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hedysarum_boreale
'3': 07971_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hoffmannseggia_glauca
'4': 07972_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hoffmannseggia_microphylla
'5': 07973_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hosackia_gracilis
'6': 07974_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hylodesmum_glutinosum
'7': 07975_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Hylodesmum_nudiflorum
'8': 07976_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Indigofera_miniata
'9': 07977_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Kennedia_prostrata
'10': 07978_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Laburnum_anagyroides
'11': 07979_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_hirsutus
'12': 07980_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_japonicus
'13': 07986_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_tuberosus
'14': 07987_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_vernus
'15': 07988_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lathyrus_vestitus
'16': 07989_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_capitata
'17': 07990_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_cuneata
'18': 07991_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lespedeza_virginica
'19': 07992_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lessertia_frutescens
'20': 08013_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_texensis
'21': 08014_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Lupinus_truncatus
'22': 08015_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_atropurpureum
'23': 08016_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_gibbosifolium
'24': 08017_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Macroptilium_lathyroides
'25': 08018_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_arabica
'26': 08019_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_falcata
'27': 08020_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_lupulina
'28': 08021_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_minima
'29': 08022_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_polymorpha
'30': 08023_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Medicago_sativa
'31': 08024_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_albus
'32': 08025_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_indicus
'33': 08026_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Melilotus_officinalis
'34': 08049_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_laevigata
'35': 08050_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_pubescens
'36': 08051_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Prosopis_velutina
'37': 08052_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_emoryi
'38': 08053_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_schottii
'39': 08054_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Psorothamnus_spinosus
'40': 08055_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Pueraria_montana
'41': 08056_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Robinia_neomexicana
'42': 08057_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Robinia_pseudoacacia
'43': 08058_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Rupertia_physodes
'44': 08059_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Securigera_varia
'45': 08060_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senegalia_greggii
'46': 08061_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_alata
'47': 08062_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_armata
'48': 08063_Plantae_Tracheophyta_Magnoliopsida_Fabales_Fabaceae_Senna_covesii
'49': 09930_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Hypolepis_ambigua
'50': 09931_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Paesia_scaberula
'51': 09932_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_aquilinum
'52': 09933_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_esculentum
'53': 09934_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dennstaedtiaceae_Pteridium_pinetorum
'54': 09935_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Diplaziopsidaceae_Homalosorus_pycnocarpos
'55': 09936_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Cyrtomium_falcatum
'56': 09937_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_arguta
'57': 09938_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_carthusiana
'58': 09939_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_cristata
'59': 09940_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_expansa
'60': 09941_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_filix-mas
'61': 09942_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_fragrans
'62': 09943_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_intermedia
'63': 09944_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Dryopteris_marginalis
'64': 09945_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_acrostichoides
'65': 09946_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_lonchitis
'66': 09947_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_munitum
'67': 09948_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_neozelandicum
'68': 09949_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Polystichum_vestitum
'69': 09950_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Dryopteridaceae_Rumohra_adiantiformis
'70': 09951_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Nephrolepidaceae_Nephrolepis_cordifolia
'71': 09952_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Onocleaceae_Matteuccia_struthiopteris
'72': 09953_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Onocleaceae_Onoclea_sensibilis
'73': 09954_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Microsorum_pustulatum
'74': 09955_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Microsorum_scandens
'75': 09956_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Notogrammitis_heterophylla
'76': 09957_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Phlebodium_aureum
'77': 09958_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Pleopeltis_michauxiana
'78': 09959_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_californicum
'79': 09960_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_glycyrrhiza
'80': 09961_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_scouleri
'81': 09962_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_virginianum
'82': 09963_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Polypodium_vulgare
'83': 09964_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Polypodiaceae_Pyrrosia_eleagnifolia
'84': 09965_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Acrostichum_danaeifolium
'85': 09966_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_aleuticum
'86': 09967_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_capillus-veneris
'87': 09968_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_cunninghamii
'88': 09969_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_hispidulum
'89': 09970_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_jordanii
'90': 09971_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Adiantum_pedatum
'91': 09972_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Aspidotis_densa
'92': 09973_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Astrolepis_sinuata
'93': 09974_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Cryptogramma_acrostichoides
'94': 09975_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_alabamensis
'95': 09976_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_aurea
'96': 09977_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Myriopteris_parryi
'97': 09978_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_andromedifolia
'98': 09979_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_atropurpurea
'99': 09980_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_glabella
'100': 09981_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_mucronata
'101': 09982_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pellaea_rotundifolia
'102': 09983_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pentagramma_triangularis
'103': 09984_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_cretica
'104': 09985_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_macilenta
'105': 09986_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_tremula
'106': 09987_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Pteridaceae_Pteris_vittata
'107': 09988_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Parathelypteris_noveboracensis
'108': 09989_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Phegopteris_connectilis
'109': 09990_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Phegopteris_hexagonoptera
'110': 09991_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Pneumatopteris_pennigera
'111': 09992_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Thelypteridaceae_Thelypteris_palustris
'112': 09993_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Woodsiaceae_Woodsia_ilvensis
'113': 09994_Plantae_Tracheophyta_Polypodiopsida_Polypodiales_Woodsiaceae_Woodsia_obtusa
'114': 09995_Plantae_Tracheophyta_Polypodiopsida_Psilotales_Psilotaceae_Psilotum_nudum
'115': 09996_Plantae_Tracheophyta_Polypodiopsida_Psilotales_Psilotaceae_Tmesipteris_elongata
'116': 09997_Plantae_Tracheophyta_Polypodiopsida_Salviniales_Salviniaceae_Azolla_filiculoides
'117': 09998_Plantae_Tracheophyta_Polypodiopsida_Salviniales_Salviniaceae_Salvinia_minima
'118': 09999_Plantae_Tracheophyta_Polypodiopsida_Schizaeales_Lygodiaceae_Lygodium_japonicum
splits:
- name: train
num_bytes: 130554746.56
num_examples: 1190
download_size: 132054218
dataset_size: 130554746.56
---
# Dataset Card for "nature128_1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6308282017707825,
-0.21182850003242493,
0.07166992872953415,
0.3951491117477417,
-0.4444766044616699,
-0.06898929178714752,
0.41907280683517456,
-0.36089372634887695,
1.052390694618225,
0.5327479243278503,
-1.0059306621551514,
-0.6828731894493103,
-0.5877653360366821,
0.0303305666893720... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
urialon/gov_report_test | urialon | 2023-02-28T15:42:26Z | 57 | 0 | null | [
"region:us"
] | 2023-02-28T15:42:26Z | 2023-02-28T15:42:18.000Z | 2023-02-28T15:42:18 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/daily_dialog_vi | vietgpt | 2023-06-21T14:11:16Z | 57 | 1 | null | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:vi",
"SFT",
"region:us"
] | 2023-06-21T14:11:16Z | 2023-03-29T14:57:48.000Z | 2023-03-29T14:57:48 | ---
dataset_info:
features:
- name: dialog
sequence: string
splits:
- name: train
num_bytes: 7803227
num_examples: 11118
- name: validation
num_bytes: 718575
num_examples: 1000
- name: test
num_bytes: 698896
num_examples: 1000
download_size: 4841457
dataset_size: 9220698
task_categories:
- conversational
language:
- vi
tags:
- SFT
size_categories:
- 10K<n<100K
---
# DailyDialog
- Source: https://huggingface.co/datasets/daily_dialog
- Num examples:
- 11,118 (train)
- 1,000 (validation)
- 1,000 (test)
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/daily_dialog_vi")
``` | [
-0.17046129703521729,
-0.72080397605896,
0.20112575590610504,
0.5660653710365295,
-0.26199784874916077,
-0.44792473316192627,
0.1543726623058319,
-0.13033193349838257,
-0.015238216146826744,
0.6170012354850769,
-0.8905160427093506,
-0.5500780940055847,
-0.3789544105529785,
0.58582544326782... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/vertebral_column | mstz | 2023-04-16T18:03:50Z | 57 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"vertebral_column",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | 2023-04-16T18:03:50Z | 2023-04-07T10:20:48.000Z | 2023-04-07T10:20:48 | ---
language:
- en
tags:
- vertebral_column
- tabular_classification
- binary_classification
- UCI
pretty_name: Vertebral Column
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- vertebral
license: cc
---
# Vertebral Column
The [Vertebral Column dataset](https://archive.ics.uci.edu/ml/datasets/vertebral+column) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| abnormal | Binary classification | Is the spine abnormal?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/vertebral_column")["train"]
``` | [
0.013676251284778118,
-0.31839197874069214,
0.013985984958708286,
0.56679368019104,
-0.46768510341644287,
-0.08042607456445694,
0.45159056782722473,
-0.41869425773620605,
0.5581279993057251,
0.478407621383667,
-0.5276198983192444,
-1.0129457712173462,
-0.5615602731704712,
0.195104822516441... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
slvnwhrl/tenkgnad-clustering-p2p | slvnwhrl | 2023-04-24T12:06:18Z | 57 | 0 | null | [
"size_categories:10K<n<100K",
"language:de",
"license:cc-by-nc-sa-4.0",
"embeddings",
"clustering",
"benchmark",
"arxiv:2210.07316",
"region:us"
] | 2023-04-24T12:06:18Z | 2023-04-21T14:07:23.000Z | 2023-04-21T14:07:23 | ---
license: cc-by-nc-sa-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.
The dataset contains news article titles and is based on the dataset of the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It contains 10'275 unique samples, 10 splits with 1'436 to 9'962 samples and 9 unique classes. Splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering) ([Paper](https://arxiv.org/abs/2210.07316)).
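A minimal loading sketch; the split and column layout (e.g. `sentences`/`labels` per clustering set) is assumed to follow the usual MTEB clustering format, so inspect the loaded object rather than relying on it:
```python
from datasets import load_dataset

# Sketch only: inspect the available splits and columns rather than assuming them.
ds = load_dataset("slvnwhrl/tenkgnad-clustering-p2p")
print(ds)
```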
Have a look at [German Text Embedding Clustering Benchmark](https://github.com/ClimSocAna/tecb-de) for more infos, datasets and evaluation results. | [
-0.46912530064582825,
-0.7911715507507324,
0.5464157462120056,
0.5322356820106506,
-0.598031222820282,
0.10105662792921066,
-0.27273279428482056,
-0.09570642560720444,
0.3780791759490967,
0.1725190132856369,
-0.2948519289493561,
-1.1135460138320923,
-0.8250306844711304,
0.10831508785486221... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
emozilla/quality | emozilla | 2023-07-14T00:56:02Z | 57 | 5 | null | [
"language:en",
"region:us"
] | 2023-07-14T00:56:02Z | 2023-04-30T03:31:45.000Z | 2023-04-30T03:31:45 | ---
language: en
dataset_info:
features:
- name: article
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: int64
- name: hard
dtype: bool
splits:
- name: train
num_bytes: 62597212
num_examples: 2523
- name: validation
num_bytes: 51198650
num_examples: 2086
download_size: 14352147
dataset_size: 113795862
---
# Dataset Card for "quality"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5571836829185486,
-0.21913501620292664,
0.23130285739898682,
0.116291843354702,
-0.29644644260406494,
0.09377593547105789,
0.34356313943862915,
-0.2935515344142914,
0.8470718860626221,
0.4999927878379822,
-0.666205108165741,
-0.7885845303535461,
-0.5759018063545227,
-0.29721537232398987... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MBZUAI/LaMini-Hallucination | MBZUAI | 2023-05-23T18:39:09Z | 57 | 3 | null | [
"region:us"
] | 2023-05-23T18:39:09Z | 2023-05-23T18:39:01.000Z | 2023-05-23T18:39:01 | ---
dataset_info:
features:
- name: question
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 2785
num_examples: 40
download_size: 3220
dataset_size: 2785
---
# Dataset Card for "LaMini-Hallucination"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.756886899471283,
-0.35310623049736023,
0.31064876914024353,
0.3541368246078491,
-0.23792050778865814,
-0.23498070240020752,
0.20057961344718933,
-0.2275349646806717,
1.0530089139938354,
0.7512980699539185,
-0.8948982954025269,
-0.6826726198196411,
-0.5314153432846069,
-0.314687848091125... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clarin-knext/scifact-pl-qrels | clarin-knext | 2023-06-07T08:25:00Z | 57 | 0 | null | [
"task_categories:sentence-similarity",
"language:pl",
"license:cc-by-sa-4.0",
"arxiv:2305.19840",
"region:us"
] | 2023-06-07T08:25:00Z | 2023-06-06T17:09:44.000Z | 2023-06-06T17:09:44 | ---
license: cc-by-sa-4.0
task_categories:
- sentence-similarity
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
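A minimal loading sketch; the column names (e.g. `query-id`, `corpus-id`, `score`) are assumed to follow the usual BEIR qrels layout:
```python
from datasets import load_dataset

# Sketch only: relevance judgements for the Polish SciFact subset of BEIR-PL.
qrels = load_dataset("clarin-knext/scifact-pl-qrels")
print(qrels)
```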
Contact: konrad.wojtasik@pwr.edu.pl | [
-0.2209915816783905,
-0.9029768109321594,
0.5094643235206604,
0.2354193478822708,
-0.3185211718082428,
-0.1491904854774475,
-0.16673950850963593,
-0.4962919354438782,
-0.018960798159241676,
0.4112257659435272,
-0.5503100752830505,
-0.691356897354126,
-0.4166182279586792,
-0.048304602503776... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mtkinit/Example-Dataset-Super-2 | mtkinit | 2023-06-20T16:34:39Z | 57 | 0 | null | [
"keyword1",
"keyword2",
"region:us"
] | 2023-06-20T16:34:39Z | 2023-06-20T16:34:38.000Z | 2023-06-20T16:34:38 | ---
pretty_name: Example-Dataset-Super-2
tags:
- keyword1
- keyword2
---
# Example-Dataset-Super-2
Created from AIOD platform | [
-0.3995533585548401,
-0.11030910164117813,
-0.055132150650024414,
0.07396574318408966,
-0.41838374733924866,
0.12043283879756927,
0.7110751867294312,
-0.04448343813419342,
0.454803466796875,
0.6976367235183716,
-0.8516581058502197,
-0.18931517004966736,
-0.3648931384086609,
-0.252330332994... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xenova/quickdraw-small | Xenova | 2023-06-25T17:08:02Z | 57 | 1 | null | [
"region:us"
] | 2023-06-25T17:08:02Z | 2023-06-23T22:04:14.000Z | 2023-06-23T22:04:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
splits:
- name: train
num_bytes: 1759529671.8579164
num_examples: 4500000
- name: valid
num_bytes: 97751648.4365509
num_examples: 250000
- name: test
num_bytes: 97751648.4365509
num_examples: 250000
download_size: 1902583033
dataset_size: 1955032968.731018
---
# Dataset Card for "quickdraw-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5871610045433044,
-0.2413017898797989,
0.33872687816619873,
0.14579302072525024,
-0.46449199318885803,
-0.06812821328639984,
0.29753372073173523,
-0.03809665888547897,
1.1186292171478271,
0.3531438112258911,
-0.8266199231147766,
-0.6893243193626404,
-0.5830479264259338,
-0.2166321128606... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Clinton/texttosqlv2_25000_v2 | Clinton | 2023-07-28T12:40:03Z | 57 | 3 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-28T12:40:03Z | 2023-07-28T12:38:41.000Z | 2023-07-28T12:38:41 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pykeio/oshichats-v1-2308 | pykeio | 2023-09-06T23:07:19Z | 57 | 3 | null | [
"task_categories:text-classification",
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:token-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-nc-sa-4.0",
"livestream",
... | 2023-09-06T23:07:19Z | 2023-08-03T14:24:05.000Z | 2023-08-03T14:24:05 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- conversational
- text-generation
- token-classification
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
tags:
- livestream
- stream
- chat
- messages
- vtuber
- vtubers
pretty_name: OSHIChats v1
size_categories:
- 1M<n<10M
---
## OSHIChats v1 (August 2023)
OSHIChats v1 is a dataset of 8.06 million high-quality filtered English chat messages collected from various [VTuber](https://en.wikipedia.org/wiki/VTuber) live streams.
Compared to our previous dataset, [pykeio/vtuber-chats-2023-filtered-en-8.7M](https://huggingface.co/datasets/pykeio/vtuber-chats-2023-filtered-en-8.7M), we make the following improvements:
- Include stream topic information
- Far more accurate nickname detection using NLP
- Previously we did not match names like "dad" (nickname for Mori Calliope) or "mom" (nickname for Nina Kosaka) because they were too general. Now, we analyze the context and other information about the stream to determine whether to match such nicknames.
- Detect and normalize fan names like takodachi or pentomo
## Usage
Once you gain access to the dataset, you'll also need to log in to Hugging Face CLI with `huggingface-cli login`.
```py
from datasets import load_dataset
chats_dataset = load_dataset('pykeio/oshichats-v1-2308', split='train', revision='refs/convert/parquet')
chats_dataset[0]
# {'liver': 'FgXWZOUZA2oYHNr6qDmsTQ', 'stream': {'id': 'JHBv4BA_Y84', 'topic': 'Twisted_Wonderland'}, 'is_super': False, 'message': "i think i've grown to dislike them ", 'author': 'chxrry_head', 'time': [1660106235135797, 2126652]}
```
## Samples
```json
{
"liver": "kieJGn3pgJikVW8gmMXE2w",
"stream": {
"id": "dMUhbAcI5gk",
"topic": "minecraft"
},
"is_super": false,
"message": "yay <|liver:bW9t|> is streaming while I'm awake!",
"author": "Redribbon Vicky",
"time": [1651976493761550, 44936]
}
{
"liver": "yl1z3jo3XHR1riLFKG5UAg",
"stream": {
"id": "TgEX7HFqTYc",
"topic": "Donkey_Kong"
},
"is_super": false,
"message": "Stop running <|liver:QW1l|><|:ameHeh:|><|:ameHeh:|><|:ameHeh:|>",
"author": "Anon",
"time": [1616291612238864, 889273]
}
```
## Data fields
- `liver`: ID of the YouTube channel hosting the stream which the chat message came from.
- `stream`: Information about the stream.
- `id`: Video ID of the YouTube stream.
- `topic`: Topic of the stream (or `null` if a topic could not be determined). This can be things like `talk`, `Minecraft`, `Singing`, `GTA`, `Asmr`, etc.
- `is_super`: Whether or not the message is a Superchat (donation).
- `message`: Contents of the message. For consistency and ease of use on downstream tasks, we replace certain words with easily matchable special tokens:
* `<|liver:{b64}|>`: The substring refers to the host of the stream.
* `<|liver-fans:{b64}|>`: The substring refers to a nickname given to the fanbase of the host of the stream, e.g. aloupeeps or takodachis.
* `<|known-collaborator:{channelID}:{b64}|>`: The substring refers to a fellow VTuber that is present in the stream.
* `<|maybe-collaborator:{channelID}:{b64}|>`: The substring refers to a fellow VTuber that may or may not be part of the stream.
* `<|collaborator-fans:{channelID}:{b64}|>`: The substring refers to the fanbase of a collaborator present in the stream.
* `<|:{emote}:|>`: Represents a channel emote.
* Note that `channelID` is a YouTube channel ID, and `b64` is the original substring encoded as base64.
- `author`: The username of the author.
- `time`: A tuple containing the Unix timestamp of when the message was sent, and the relative time since the start of the stream.
## License
Licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/); you must give attribution, you may not use the dataset for commercial purposes, and you must distribute any transformations or copies of the dataset under the same license. [Contact us](mailto:contact@pyke.io) for alternative/commercial licensing. | [
-0.6482555866241455,
-0.9198050498962402,
0.04192158207297325,
0.15886087715625763,
-0.5158397555351257,
0.1971065253019333,
-0.33522170782089233,
-0.30128228664398193,
0.9706834554672241,
0.4248756468296051,
-1.1083474159240723,
-0.5261826515197754,
-0.7994795441627502,
0.0210026241838932... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shariqfarooq/cs323_densepred_seg256 | shariqfarooq | 2023-09-16T12:07:20Z | 57 | 0 | null | [
"region:us"
] | 2023-09-16T12:07:20Z | 2023-09-16T12:02:51.000Z | 2023-09-16T12:02:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_bytes: 187512341.0
num_examples: 1464
- name: val
num_bytes: 187805177.75
num_examples: 1449
download_size: 375496804
dataset_size: 375317518.75
---
# Dataset Card for "cs323_densepred_seg256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.578298807144165,
-0.25455430150032043,
0.23123544454574585,
0.5763404965400696,
-0.10062883049249649,
-0.03234917297959328,
0.06996726989746094,
-0.05052497982978821,
0.5833190083503723,
0.5317180752754211,
-0.7467682957649231,
-0.8193771243095398,
-0.3729773163795471,
-0.39439508318901... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pablo-moreira/wikipedia-pt | pablo-moreira | 2023-10-06T13:52:49Z | 57 | 1 | null | [
"region:us"
] | 2023-10-06T13:52:49Z | 2023-10-06T02:29:43.000Z | 2023-10-06T02:29:43 | ---
dataset_info:
- config_name: '20231001'
features:
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2150584347
num_examples: 1857355
download_size: 0
dataset_size: 2150584347
- config_name: latest
features:
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2150584347
num_examples: 1857355
download_size: 0
dataset_size: 2150584347
configs:
- config_name: '20231001'
data_files:
- split: train
path: 20231001/train-*
- config_name: latest
data_files:
- split: train
path: latest/train-*
---
# Dataset Card for Wikipedia - Portuguese
## Dataset Description
- latest
- 20231001
## Usage
```python
from datasets import load_dataset
dataset = load_dataset('pablo-moreira/wikipedia-pt', 'latest')
#dataset = load_dataset('pablo-moreira/wikipedia-pt', '20231001')
```
## Extractor
Notebook with the code for extracting documents from the Wikipedia dump based on the code from the FastAI NLP introduction course.
[Notebook](extractor.ipynb)
## Links
- **[Wikipedia dumps](https://dumps.wikimedia.org/)**
- **[A Code-First Intro to Natural Language Processing](https://github.com/fastai/course-nlp)**
- **[Extractor Code](https://github.com/fastai/course-nlp/blob/master/nlputils.py)** | [
-0.5337409377098083,
-0.4829055666923523,
-0.1806136518716812,
0.23200342059135437,
-0.4349518418312073,
-0.18771037459373474,
-0.3382714092731476,
-0.24993835389614105,
0.3876103162765503,
0.3002210557460785,
-0.4403676986694336,
-0.5764807462692261,
-0.5561147928237915,
0.563922762870788... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metricv/metricsubs-chunktranslate | metricv | 2023-11-16T01:13:26Z | 57 | 0 | null | [
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"language:zh",
"license:mit",
"region:us"
] | 2023-11-16T01:13:26Z | 2023-10-14T22:36:45.000Z | 2023-10-14T22:36:45 | ---
license: mit
task_categories:
- text2text-generation
language:
- en
- zh
size_categories:
- n<1K
configs:
- config_name: default
default: true
data_files:
- split: train
path: "train.json"
- split: test
path: "test.json"
- config_name: chatgpt
data_files:
- split: train
path: "chatgpt-train.jsonl"
- split: test
path: "chatgpt-test.jsonl"
- config_name: chatgpt-recent
data_files:
- split: train
path: "chatgpt-recent-train.jsonl"
- split: test
path: "chatgpt-recent-test.jsonl"
--- | [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Niche-Squad/balloon | Niche-Squad | 2023-10-19T21:59:08Z | 57 | 0 | null | [
"license:agpl-3.0",
"region:us"
] | 2023-10-19T21:59:08Z | 2023-10-19T19:31:21.000Z | 2023-10-19T19:31:21 | ---
license: agpl-3.0
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jin05102518/KO_EN_QA_MERGE_SHUFFLE | jin05102518 | 2023-10-20T01:30:45Z | 57 | 0 | null | [
"region:us"
] | 2023-10-20T01:30:45Z | 2023-10-20T01:28:07.000Z | 2023-10-20T01:28:07 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NomaDamas/qasper | NomaDamas | 2023-10-21T04:02:27Z | 57 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-10-21T04:02:27Z | 2023-10-21T04:02:04.000Z | 2023-10-21T04:02:04 | ---
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: full_text
struct:
- name: paragraphs
sequence:
sequence: string
- name: section_name
sequence: string
- name: qas
struct:
- name: answers
list:
- name: annotation_id
sequence: string
- name: answer
list:
- name: evidence
sequence: string
- name: extractive_spans
sequence: string
- name: free_form_answer
dtype: string
- name: highlighted_evidence
sequence: string
- name: unanswerable
dtype: bool
- name: yes_no
dtype: bool
- name: worker_id
sequence: string
- name: nlp_background
sequence: string
- name: paper_read
sequence: string
- name: question
sequence: string
- name: question_id
sequence: string
- name: question_writer
sequence: string
- name: search_query
sequence: string
- name: topic_background
sequence: string
- name: figures_and_tables
struct:
- name: caption
sequence: string
- name: file
sequence: string
- name: question
sequence: string
- name: retrieval_gt
sequence:
sequence: string
- name: answer_gt
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 33747492
num_examples: 946
download_size: 16245561
dataset_size: 33747492
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
isek-ai/ak-fandom-20230821-raw | isek-ai | 2023-10-29T11:15:20Z | 57 | 0 | null | [
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-10-29T11:15:20Z | 2023-10-29T09:44:09.000Z | 2023-10-29T09:44:09 | ---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 10K<n<100K
pretty_name: Arknights Fandom Wiki (Raw) 20230821
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 41839104
num_examples: 10937
download_size: 20610229
dataset_size: 41839104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# ak-fandom-20230821-raw
A dataset generated from [the dump](https://arknights.fandom.com/wiki/Special:Statistics) of [Arknights Fandom wiki](https://arknights.fandom.com/wiki/Arknights_Wiki). | [
-0.7772579193115234,
-0.23484604060649872,
0.06152386590838432,
-0.002686829539015889,
-0.4713920056819916,
-0.11309415847063065,
0.4593973755836487,
-0.0033207680098712444,
1.1050502061843872,
0.9778169989585876,
-1.2141835689544678,
-0.48822715878486633,
-0.21631236374378204,
0.525026559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ajax-law/contrastive-matters-2 | ajax-law | 2023-11-06T19:35:19Z | 57 | 0 | null | [
"region:us"
] | 2023-11-06T19:35:19Z | 2023-11-06T19:35:17.000Z | 2023-11-06T19:35:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text_a
dtype: string
- name: text_b
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 3232779
num_examples: 6549
- name: test
num_bytes: 41112
num_examples: 90
download_size: 100367
dataset_size: 3273891
---
# Dataset Card for "contrastive-matters-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.44156238436698914,
-0.15092764794826508,
0.19309115409851074,
0.4132401645183563,
-0.27271032333374023,
-0.2862420082092285,
0.28837504982948303,
-0.26871782541275024,
0.8019673228263855,
0.38650763034820557,
-0.7204903364181519,
-0.46040022373199463,
-0.7629901766777039,
-0.54146695137... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxie/covtype | jxie | 2023-11-11T07:08:14Z | 57 | 0 | null | [
"region:us"
] | 2023-11-11T07:08:14Z | 2023-11-11T07:08:05.000Z | 2023-11-11T07:08:05 | ---
dataset_info:
features:
- name: inputs
sequence: float64
- name: label
dtype: float64
splits:
- name: train
num_bytes: 165100068
num_examples: 371847
- name: val
num_bytes: 41275128
num_examples: 92962
- name: test
num_bytes: 51594132
num_examples: 116203
download_size: 26860361
dataset_size: 257969328
---
# Dataset Card for "covtype"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.605038583278656,
-0.25971728563308716,
-0.001968177268281579,
0.24112655222415924,
-0.18568159639835358,
0.16293923556804657,
0.39721184968948364,
-0.08601929247379303,
0.9304534792900085,
0.5165804028511047,
-0.8903853893280029,
-0.8451322913169861,
-0.5119816660881042,
-0.324782311916... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shredder-31/NeuraLearnAcademy | shredder-31 | 2023-11-25T01:37:13Z | 57 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-25T01:37:13Z | 2023-11-14T16:40:40.000Z | 2023-11-14T16:40:40 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 14885576
num_examples: 5087
download_size: 3107395
dataset_size: 14885576
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
higgsfield/school-math-questions | higgsfield | 2023-11-14T22:29:19Z | 57 | 0 | null | [
"region:us"
] | 2023-11-14T22:29:19Z | 2023-11-14T22:29:15.000Z | 2023-11-14T22:29:15 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 4787332
num_examples: 8792
download_size: 2576099
dataset_size: 4787332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "school-math-questions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7360814213752747,
-0.6535045504570007,
0.23452958464622498,
0.11173807084560394,
-0.010187228210270405,
-0.011566152796149254,
0.31475576758384705,
0.1324574202299118,
0.7405228018760681,
0.30533650517463684,
-0.9378332495689392,
-0.5628969669342041,
-0.5468315482139587,
-0.358290314674... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PatronusAI/financebench | PatronusAI | 2023-11-16T13:48:29Z | 57 | 24 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-11-16T13:48:29Z | 2023-11-16T13:38:35.000Z | 2023-11-16T13:38:35 | ---
license: cc-by-nc-4.0
---
FinanceBench is a first-of-its-kind test suite for evaluating the performance of LLMs on open book financial question answering (QA). This is an open source sample of 150 annotated examples used in the evaluation and analysis of models assessed in the FinanceBench paper.
The dataset comprises questions about publicly traded companies, with corresponding answers and evidence strings. The questions in FinanceBench are ecologically valid and cover a diverse set of scenarios. They are intended to be clear-cut and straightforward to answer to serve as a minimum performance standard.
We test 16 state of the art model configurations (including GPT-4-Turbo, Llama2 and Claude2, with vector stores and long context prompts) on a sample of 150 cases from FinanceBench, and manually review their answers (n=2,400). The cases are available open-source.
We find that existing LLMs have clear limitations for financial QA. All models assessed exhibit weaknesses, such as hallucinations, that limit their suitability for use by enterprises.
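A minimal loading sketch for the open-source sample; the split and field names are assumptions, so inspect the loaded object rather than relying on them:
```python
from datasets import load_dataset

# Sketch only: the 150-example open-source sample described above.
ds = load_dataset("PatronusAI/financebench")
print(ds)
```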
To evaluate your models on the full dataset, or if you have questions about this work, you can email us at contact@patronus.ai | [
-0.3334283232688904,
-0.7131799459457397,
0.3379138112068176,
0.5021293759346008,
-0.29533904790878296,
0.18129171431064606,
0.11712147295475006,
-0.4801364541053772,
0.08685260266065598,
0.5453031659126282,
-0.8438963890075684,
-0.5789032578468323,
-0.03821792080998421,
0.0633236467838287... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shaheenahmedc/goal_captions | shaheenahmedc | 2023-11-23T15:35:41Z | 57 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-23T15:35:41Z | 2023-11-23T15:35:18.000Z | 2023-11-23T15:35:18 | ---
license: unknown
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/anime-faces | huggan | 2022-03-22T10:01:22Z | 56 | 7 | null | [
"license:cc0-1.0",
"region:us"
] | 2022-03-22T10:01:22Z | 2022-03-03T13:15:34.000Z | 2022-03-03T13:15:34 | ---
license: cc0-1.0
---
# Dataset Card for anime-faces
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Repository:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** https://github.com/Mckinsey666
### Dataset Summary
This is a dataset consisting of 21551 anime faces scraped from www.getchu.com, which are then cropped using the anime face detection algorithm in https://github.com/nagadomi/lbpcascade_animeface. All images are resized to 64 * 64 for the sake of convenience. Please also cite the two sources when using this dataset.
Some outliers are still present in the dataset:
- Bad cropping results
- Some non-human faces
Feel free to contribute to this dataset by adding images of similar quality or adding image labels.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
Has a data folder with png files inside.
### Data Splits
Only training set
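A minimal loading sketch; since the repository ships a folder of 64 * 64 PNG files, exposure as a single `train` split with an `image` column is an assumption:
```python
from datasets import load_dataset

ds = load_dataset("huggan/anime-faces", split="train")
ds[0]["image"]  # assumed to be a 64x64 PIL image
```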
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
---
annotations_creators:
- found
language_creators:
- found
languages:
- unknown
licenses:
- unknown
multilinguality:
- unknown
pretty_name: anime-faces
size_categories:
- unknown
source_datasets:
- original
task_categories:
- image-classification
task_ids: []
--- | [
-0.5596756935119629,
-0.6470117568969727,
0.13903354108333588,
0.2996732294559479,
-0.16097794473171234,
0.056002646684646606,
-0.04912761598825455,
-0.5032001733779907,
0.6674025654792786,
0.7321119904518127,
-1.1256189346313477,
-0.8391453623771667,
-0.6711089015007019,
0.135999932885169... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/fin | tner | 2022-08-15T17:50:31Z | 56 | 4 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | 2022-08-15T17:50:31Z | 2022-07-16T11:08:45.000Z | 2022-07-16T11:08:45 | ---
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: FIN
---
# Dataset Card for "tner/fin"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/U15-1010.pdf](https://aclanthology.org/U15-1010.pdf)
- **Dataset:** FIN
- **Domain:** Financial News
- **Number of Entity:** 4
### Dataset Summary
FIN NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project.
FIN dataset contains training (FIN5) and test (FIN3) only, so we randomly sample a half size of test instances from the training set to create validation set.
- Entity Types: `ORG`, `LOC`, `PER`, `MISC`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
"tags": [0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"tokens": ["1", ".", "1", ".", "4", "Borrower", "engages", "in", "criminal", "conduct", "or", "is", "involved", "in", "criminal", "activities", ";"]
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/fin/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"B-LOC": 2,
"B-ORG": 3,
"B-MISC": 4,
"I-PER": 5,
"I-LOC": 6,
"I-ORG": 7,
"I-MISC": 8
}
```
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|fin |1014 | 303| 150|
### Citation Information
```
@inproceedings{salinas-alvarado-etal-2015-domain,
title = "Domain Adaption of Named Entity Recognition to Support Credit Risk Assessment",
author = "Salinas Alvarado, Julio Cesar and
Verspoor, Karin and
Baldwin, Timothy",
booktitle = "Proceedings of the Australasian Language Technology Association Workshop 2015",
month = dec,
year = "2015",
address = "Parramatta, Australia",
url = "https://aclanthology.org/U15-1010",
pages = "84--90",
}
``` | [
-0.4543814957141876,
-0.5257153511047363,
0.11664781719446182,
-0.025506876409053802,
-0.4001055061817169,
-0.037544071674346924,
-0.2410646378993988,
-0.38771069049835205,
0.282002329826355,
0.4894752502441406,
-0.46537289023399353,
-0.8250408172607422,
-0.601307213306427,
0.3004216849803... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hoskinson-center/proof-pile | hoskinson-center | 2023-08-19T03:24:11Z | 56 | 33 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"math",
"mathematics",
"formal-mathematics",
"region:us"
] | 2023-08-19T03:24:11Z | 2022-08-08T20:57:56.000Z | 2022-08-08T20:57:56 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: [apache-2.0]
multilinguality:
- monolingual
pretty_name: proof-pile
size_categories: []
source_datasets: []
tags:
- math
- mathematics
- formal-mathematics
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Description
The `proof-pile` is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the `gpt-neox` tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely
- ArXiv.math (10GB)
- Open-source math textbooks (50MB)
- Formal mathematics libraries (500MB)
- Lean mathlib and other Lean repositories
- Isabelle AFP
- Coq mathematical components and other Coq repositories
- HOL Light
- set.mm
- Mizar Mathematical Library
- Math Overflow and Math Stack Exchange (2.5GB)
- Wiki-style sources (50MB)
- ProofWiki
- Wikipedia math articles
- MATH dataset (6MB)
The construction of the dataset is reproducible using the code and instructions in the [proof-pile Github
repo](https://github.com/zhangir-azerbayev/proof-pile).
# Supported Tasks
This dataset is intended to be used for pre-training and fine-tuning language models. We envision models trained on the `proof-pile` will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.
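A minimal loading sketch; streaming is used because the corpus is large, and the presence of a single `text` field (and whether a config name is required) is an assumption:
```python
from datasets import load_dataset

# Streaming avoids downloading the full ~13GB corpus up front.
ds = load_dataset("hoskinson-center/proof-pile", split="train", streaming=True)
example = next(iter(ds))
print(example["text"][:500])  # assumed field name
```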
# Languages
All informal mathematics in the `proof-pile` is written in English and LaTeX (arXiv articles in other languages are filtered out using [languagedetect](https://github.com/shuyo/language-detection/blob/wiki/ProjectHome.md)). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.
# Evaluation
The version of `set.mm` in this dataset has 10% of proofs replaced with the `?` character in order to preserve a validation and test set for Metamath provers pre-trained on the `proof-pile`. The precise split can be found here: [validation](https://github.com/zhangir-azerbayev/mm-extract/blob/main/valid_decls.json) and [test](https://github.com/zhangir-azerbayev/mm-extract/blob/main/test_decls.json).
The Lean mathlib commit used in this dataset is `6313863`. Theorems created in subsequent commits can be used for evaluating Lean theorem provers.
This dataset contains only the training set of the [MATH dataset](https://github.com/hendrycks/math). However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the [NaturalProofs dataset](https://github.com/wellecks/naturalproofs).
# Data Preprocessing
This section describes any significant filtering and transformations made to various subsets of the data.
**arXiv.math.**
The arXiv.math dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics
when choosing which files from arXiv.math source folders to include in the dataset:
- Keep only files with a `.tex` extension.
- Only include files that use either a `utf-8/16/32` or `latin-1` text encoding.
- Discard files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading.
- Delete files that contain the keyword `gnuplot`. Gnuplot-latex is an old command line utility that generates blocks
of entirely unintelligible source.
- Include only articles in English, as determined by the [langdetect library](https://pypi.org/project/langdetect/).
- Exclude files shorter than 280 characters (characters counted after substring removal described below).
In addition, we apply the following transformations to arXiv.math texts:
- Delete everything outside of `\begin{document}` and `\end{document}`.
- Delete everything including or after `\Refs`, `\begin{thebibliography}`, or `\begin{bibdiv}`
- Delete comments.
- Any more than three consecutive newlines are replaced by three consecutive newlines.
In [this notebook](https://github.com/zhangir-azerbayev/proof-pile/blob/main/analysis/arxiv_noisedetection.ipynb), we provide an analysis of the prevalence of noisy documents in the arXiv.math subset of the
proof-pile.
**Stack Exchange.**
We only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows
```
QUESTION [{num_upvotes} upvotes]: {text of question}
REPLY [{num_upvotes} votes]: {text of reply}
REPLY [{num_upvotes} votes]: {text of reply}
.
.
.
```
**set.mm.**
We converted `set.mm` into human-readable form by following the instructions in the [mm-extract repo](https://github.com/zhangir-azerbayev/mm-extract)
## Contributions
Authors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski.
We would like to thank Jeremy Avigad, Albert Jiang, and Wenda Li for their invaluable guidance, and the Hoskinson Center for Formal Mathematics for its support.
| [
-0.594694197177887,
-0.5621137619018555,
0.2142481654882431,
-0.06952991336584091,
-0.332516074180603,
-0.1790366768836975,
0.11125896126031876,
-0.3735320270061493,
0.042623620480298996,
0.47594180703163147,
-0.3033866882324219,
-0.447566419839859,
-0.5610049366950989,
0.13571055233478546... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/ms2_sparse_max | allenai | 2022-11-24T16:27:49Z | 56 | 0 | multi-document-summarization | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | 2022-11-24T16:27:49Z | 2022-08-26T21:40:42.000Z | 2022-08-26T21:40:42 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced with documents retrieved by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`
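For reference, a rough sketch of what an equivalent BM25 step with PyTerrier defaults could look like; the toy documents, index path, and query string below are illustrative assumptions, not the exact pipeline used to build this dataset:
```python
import os
import pyterrier as pt

if not pt.started():
    pt.init()

# Toy corpus: each document is the concatenation of a title and an abstract.
docs = [
    {"docno": "d1", "text": "Aspirin for primary prevention. Background and abstract text ..."},
    {"docno": "d2", "text": "Statins and cardiovascular outcomes. Abstract text ..."},
]

index_ref = pt.IterDictIndexer(os.path.abspath("./ms2_bm25_index")).index(docs)

# BM25 with default settings, keeping at most k=25 documents per query.
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25", num_results=25)
print(bm25.search("aspirin primary prevention"))
```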
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4333 | 0.2163 | 0.1746 | 0.2636 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.378 | 0.1827 | 0.1559 | 0.2188 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.3928 | 0.1898 | 0.1672 | 0.2208 | | [
-0.266454815864563,
-0.2078908234834671,
0.20238904654979706,
0.1432640105485916,
-0.17462697625160217,
-0.123261958360672,
-0.19785836338996887,
0.02924766018986702,
0.2681557834148407,
0.3624703586101532,
-0.5445801615715027,
-0.48119792342185974,
-0.8308965563774109,
0.14381597936153412... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
batterydata/pos_tagging | batterydata | 2022-09-05T16:05:33Z | 56 | 0 | null | [
"task_categories:token-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-09-05T16:05:33Z | 2022-09-05T15:44:21.000Z | 2022-09-05T15:44:21 | ---
language:
- en
license:
- apache-2.0
task_categories:
- token-classification
pretty_name: 'Part-of-speech(POS) Tagging Dataset for BatteryDataExtractor'
---
# POS Tagging Dataset
## Original Data Source
#### Conll2003
E. F. Tjong Kim Sang and F. De Meulder, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 2003, pp. 142–147.
#### The Peen Treebank
M. P. Marcus, B. Santorini and M. A. Marcinkiewicz, Comput.
Linguist., 1993, 19, 313–330.
## Citation
BatteryDataExtractor: battery-aware text-mining software embedded with BERT models | [
-0.00892242044210434,
-0.4850996732711792,
0.3398463726043701,
0.1777811199426651,
-0.13015715777873993,
0.07338544726371765,
-0.09029197692871094,
-0.3032713532447815,
0.048136454075574875,
0.2708722949028015,
-0.28127503395080566,
-0.7537397146224976,
-0.33874422311782837,
0.140298157930... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/codiesp | bigbio | 2022-12-22T15:44:28Z | 56 | 0 | null | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:44:28Z | 2022-11-13T22:08:01.000Z | 2022-11-13T22:08:01 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: CodiEsp
homepage: https://temu.bsc.es/codiesp/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for CodiEsp
## Dataset Description
- **Homepage:** https://temu.bsc.es/codiesp/
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS,NER,NED
Synthetic corpus of 1,000 manually selected clinical case studies in Spanish
that was designed for the Clinical Case Coding in Spanish Shared Task, as part
of the CLEF 2020 conference.
The goal of the task was to automatically assign ICD10 codes (CIE-10, in
Spanish) to clinical case documents, being evaluated against manually generated
ICD10 codifications. The CodiEsp corpus was selected manually by practicing
physicians and clinical documentalists and annotated by clinical coding
professionals meeting strict quality criteria. They reached an inter-annotator
agreement of 88.6% for diagnosis coding, 88.9% for procedure coding and 80.5%
for the textual reference annotation.
The final collection of 1,000 clinical cases that make up the corpus had a total
of 16,504 sentences and 396,988 words. All documents are in Spanish language and
CIE10 is the coding terminology (the Spanish version of ICD10-CM and ICD10-PCS).
The CodiEsp corpus has been randomly sampled into three subsets. The train set
contains 500 clinical cases, while the development and test sets have 250
clinical cases each. In addition to these, a collection of 176,294 abstracts
from Lilacs and Ibecs with the corresponding ICD10 codes (ICD10-CM and
ICD10-PCS) was provided by the task organizers. Every abstract has at least one
associated code, with an average of 2.5 ICD10 codes per abstract.
The CodiEsp track was divided into three sub-tracks (2 main and 1 exploratory):
- CodiEsp-D: The Diagnosis Coding sub-task, which requires automatic ICD10-CM
[CIE10-Diagnóstico] code assignment.
- CodiEsp-P: The Procedure Coding sub-task, which requires automatic ICD10-PCS
[CIE10-Procedimiento] code assignment.
- CodiEsp-X: The Explainable AI exploratory sub-task, which requires to submit
the reference to the predicted codes (both ICD10-CM and ICD10-PCS). The goal
of this novel task was not only to predict the correct codes but also to
present the reference in the text that supports the code predictions.
For further information, please visit https://temu.bsc.es/codiesp or send an
email to encargo-pln-life@bsc.es
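A minimal loading sketch; BigBio loaders usually expose several config names (source and harmonized views per sub-track), so they are listed here rather than assumed:
```python
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("bigbio/codiesp")
print(configs)  # per-sub-track source and harmonized ("bigbio") views
# Recent versions of `datasets` may additionally require trust_remote_code=True
# for script-based loaders such as this one.
ds = load_dataset("bigbio/codiesp", name=configs[0])
print(ds)
```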
## Citation Information
```
@article{miranda2020overview,
title={Overview of Automatic Clinical Coding: Annotations, Guidelines, and Solutions for non-English Clinical Cases at CodiEsp Track of CLEF eHealth 2020.},
author={Miranda-Escalada, Antonio and Gonzalez-Agirre, Aitor and Armengol-Estap{\'e}, Jordi and Krallinger, Martin},
journal={CLEF (Working Notes)},
volume={2020},
year={2020}
}
```
| [
-0.5183122754096985,
-0.11942755430936813,
0.5289594531059265,
0.5417941212654114,
-0.41514191031455994,
0.022643620148301125,
-0.24494250118732452,
-0.6583740711212158,
0.5370626449584961,
0.12736260890960693,
-0.5211287140846252,
-0.8269796967506409,
-0.7551606893539429,
0.49092695116996... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TurkuNLP/squad_v2_fi | TurkuNLP | 2023-10-10T19:55:56Z | 56 | 0 | null | [
"task_categories:question-answering",
"language:fi",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-10-10T19:55:56Z | 2023-01-30T09:03:27.000Z | 2023-01-30T09:03:27 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- fi
---
### Dataset Summary
This is a Finnish SQuAD question answering dataset. It is a DeepL-based machine translation of the English SQuAD2.0 dataset, which combines the 100,000 questions in
SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.
To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported
by the paragraph and abstain from answering.
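A minimal loading sketch; whether the repository loads directly with `load_dataset` and exposes the flat fields listed below (rather than the nested SQuAD-style JSON shown in the example) is an assumption:
```python
from datasets import load_dataset

ds = load_dataset("TurkuNLP/squad_v2_fi", split="train")
print(ds[0]["question"], ds[0]["answers"])
```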
### Data Fields
The data fields are the same among all splits.
#### Example Data
```
{
"title": "Victoria_(Australia)",
"paragraphs": [
{
"qas": [
{
"question": "Millainen talous Victoriassa on?",
"id": "570d2417fed7b91900d45c3d",
"answers": [
{
"text": "monipuolinen",
"answer_start": 26,
"texts": [
"monipuolinen"
],
"starts": [
26
]
},
{
"text": "hyvin monipuolinen",
"answer_start": 20,
"texts": [
"hyvin ",
"monipuolinen"
],
"starts": [
20,
26
]
},
{
"text": "hyvin monipuolinen",
"answer_start": 20,
"texts": [
"hyvin ",
"monipuolinen"
],
"starts": [
20,
26
]
}
],
"is_impossible": false
}
],
"context": "Victorian talous on hyvin monipuolinen: palvelualat, kuten rahoitus- ja kiinteistöpalvelut, terveydenhuolto, koulutus, tukkukauppa, vähittäiskauppa, majoitus- ja ravitsemistoiminta ja teollisuus muodostavat suurimman osan työllisyydestä. Victorian osavaltion bruttokansantuote on Australian toiseksi suurin, vaikka Victoria on asukaskohtaisen bruttokansantuotteen osalta neljäntenä, koska sen kaivostoiminta on vähäistä. Kulttuurin alalla Melbournessa on useita museoita, taidegallerioita ja teattereita, ja sitä kutsutaan myös \"Australian urheilupääkaupungiksi\". Melbournen krikettikenttä (Melbourne Cricket Ground) on Australian suurin stadion, ja siellä järjestettiin vuoden 1956 kesäolympialaiset ja vuoden 2006 Kansainyhteisön kisat. Kenttää pidetään myös australialaisen kriketin ja australialaisen jalkapallon \"henkisenä kotina\", ja se isännöi vuosittain Australian jalkapalloliigan (AFL) suurta loppuottelua, johon osallistuu yleensä yli 95 000 ihmistä. Victoriaan kuuluu kahdeksan julkista yliopistoa, joista vanhin, Melbournen yliopisto, on perustettu vuonna 1853."
}
]
}
```
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
- `texts`: a `string` feature.
- `starts`: a `int32` feature.
### Data Splits
| name | train | validation |
| -------- | -----: | ---------: |
| squad_v2 | 130319 | 11873 |
### Evaluation Results
Results from fine-tuning [TurkuNLP/bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) for extractive question answering.
| dataset | F1 |
| -------------------- | ----: |
| TurkuNLP/squad_v2_fi | 73.66 |
| ilmariky/SQuAD_v2_fi | 61.87 |
### Considerations for Using the Data
Due to DeepL terms and conditions, this dataset **must not be used for any machine translation work**, namely machine translation
system development and evaluation of any kind. In general, we wish you do not pair the original English data with the translations
except when working on research unrelated to machine translation, so as not to infringe on the terms and conditions.
### Licensing Information
Contents of this repository are distributed under the
[Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Copyright of the dataset contents belongs to the original copyright holders. | [
-0.5233911275863647,
-0.8134146332740784,
0.3837854266166687,
0.13047723472118378,
-0.24885541200637817,
-0.10557223856449127,
-0.1522458791732788,
-0.2246771901845932,
0.46618449687957764,
0.6326386332511902,
-1.0391203165054321,
-0.7314918637275696,
-0.518837571144104,
0.6808647513389587... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GEM/xmediasum | GEM | 2023-02-15T14:01:56Z | 56 | 4 | null | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:zh",
"language:de",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-02-15T14:01:56Z | 2023-02-15T14:01:13.000Z | 2023-02-15T14:01:13 | ---
annotations_creators:
- expert-generated
language:
- en
- zh
- de
language_creators:
- crowdsourced
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
pretty_name: xmediasum
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- summarization
task_ids: []
---
# Dataset Card for XMediaSum
### Dataset Summary
We present XMediaSum, a cross-lingual dialogue summarization dataset with 40K English (dialogues) -> Chinese (summaries) and 40K English (dialogues) -> German (summaries) samples. XMediaSum is created by manually translating the English summaries of MediaSum (an English monolingual dialogue summarization dataset) into both Chinese and German.
- Paper: [ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization](https://aclanthology.org/2022.emnlp-main.526/) (EMNLP 2022)
- GitHub: https://github.com/krystalan/ClidSum
### Supported Task
- Cross-Lingual Summarization
- Cross-Lingual Dialogue Summarization
### Languages
- source language: English
- target language: Chinese and German
## Dataset Structure
### Data Instances
One example is given below in JSON format:
```json
{
"dialogue": "MADELELEINE BRAND, host: OK, here's some good news on the jobs front for both men and women. A new survey out today from the employment firm Manpower finds that about a quarter of employers will add jobs this summer. That's for adults, but for teenagers this summer's job market is shaping up to be the weakest in more than 50 years.\r\nALEX COHEN, host: So, how do you get your teenage kids not to spend the entire summer glued to the couch? You're about to get some tips from Michelle Singletary. She's Day to Day's personal finance contributor. Hi, Michelle!\r\nMICHELLE SINGLETARY: Hi!\r\nALEX COHEN, host: So why is the summer job market so hard for teens this year?\r\nMICHELLE SINGLETARY: Lot of things going on right now. We've got a tough economy. We've got a lot of college graduates going into the market. We have people who are losing their jobs and taking jobs that would traditionally go to teens, like in restaurants and retailers. And we have a lot of older people holding on to their jobs and not retiring because they can't afford to retire. And that puts teens at the end of the line when it comes to these types of jobs.\r\nALEX COHEN, host: So you've got a teenager at home, a little bit young for the working world just yet, but what would you say to a teenager who's out there hunting around for a job?\r\nMICHELLE SINGLETARY: If you absolutely need a job, keep looking. You know, obviously the types of jobs that teens tend to go for in retail, fast food, you know, they still need people. And oftentimes you know, listen, you may not get the job at the beginning of the summer, but hold on because in late summer, when some of those college students are going back and perhaps some of those people who lost their jobs are finding permanent positions with more pay, you might be able to still get that job. So don't give up, you may spend a month or month and a half without it, but go back to those retailers and those restaurants and those fast food places to see if they still need someone.\r\nALEX COHEN, host: And now I know parents like having the break from providing allowance. But, you know, is - are there reasons maybe not to push your teen towards taking a job?\r\nMICHELLE SINGLETARY: I think it absolutely is. In fact I think too many teens are working and they don't need to work. They're some who absolutely need, they're contributing to their household or they're putting money into their own college fund. But more often than not, what parents do is say you've got to get a job, and then the teens get the job and they spend all the money on clothes and you know videos and iPods and paying their cell phone bills because they don't need a cell phone anyway.\r\nALEX COHEN, host: So it's not going towards the college tuition at all.\r\nMICHELLE SINGLETARY: It is not. It's just disposable income that they're disposing of. And parents are not setting any limits and you know and then the kids get used to the fact that they're using all of their paycheck. That's another bad habit. Because they don't have to pay bills and all, all their income goes through you know this stuff.\r\nMICHELLE SINGLETARY: And when it comes time to get a real job, they're surprised they don't have enough money. And so you know what? You can wait to work. Instead, maybe they can spend the summer volunteering at a charitable organization or you know going back to school and boosting up their math skills or their English skills. 
We push the teens out into the market too soon, I think for some families.\r\nALEX COHEN, host: But now let's say your kid is working. What tips can parents provide in terms of holding on to that summer money?\r\nMICHELLE SINGLETARY: You know, before they get their job, they need to sit down with them and do a budget. So before they actually work and get that first paycheck I mean, you know, have them draw up a budge where the money is going. And you ought to have some requirements for some of their money. That's right, be a parent.\r\nMICHELLE SINGLETARY: So make them put some of it towards their college fund, if in fact they're headed for college. You know what? Make them put some away, I call it the tax fund, even though they may not have to pay taxes, but to pay for long-term things that they may want. You know, books once they get to college, or maybe they want to get a car, and they can actually pay cash for it, with some of these funds. Don't let them just go out and spend it on movies and stuff. You ought to set some guidelines - this is where you should put the money. And look at their budget.\r\nALEX COHEN, host: Day to Day's personal finance contributor Michelle Singletary. Thank you, Michelle!\r\nMICHELLE SINGLETARY: You're welcome.\r\nALEX COHEN, host: Stay with us. NPR's Day to Day continues.",
"summary": "The tight job market could be bad news for teens seeking summer work. If your teen does find a job, will he or she know how to manage those paychecks? Our personal finance contributor talks with Alex Cohen about ways to help teens find a job.",
"summary_de": "Der angespannte Arbeitsmarkt könnte für Jugendliche, die Sommerarbeit suchen, eine schlechte Nachricht sein. Wenn Ihr Teenager einen Job findet, wird er oder sie wissen, wie er mit diesen Gehaltsschecks umgeht? Unser Mitarbeiter für persönliche Finanzen spricht mit Alex Cohen darüber, wie Teenager bei der Jobsuche unterstützt werden können.",
"summary_zh": "紧张的就业市场对寻找暑期工作的青少年来说可能是个坏消息。如果你的孩子找到了一份工作,他/她懂得怎么管理这些薪水吗?我们的个人理财撰稿人与亚历克斯·科恩谈论如何帮助青少年找到工作。"
},
```
### Data Fields
- 'dialogue': An English dialogue
- 'summary': the original English summary of the corresponding dialogue (provided by MediaSum)
- 'summary_de': the human-translated German summary
- 'summary_zh': the human-translated Chinese summary
### Data Splits
- training set: 20K samples
- validation set: 10K samples
- testing set: 10K samples
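A minimal loading sketch (the exact split names are an assumption; the field names come from the Data Fields section above):
```python
from datasets import load_dataset

ds = load_dataset("GEM/xmediasum")
example = ds["train"][0]
print(example["dialogue"][:200])   # English dialogue
print(example["summary_de"])       # human-translated German summary
print(example["summary_zh"])       # human-translated Chinese summary
```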
## Dataset Creation
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Considerations for Using the Data
Please refer to [our paper](https://aclanthology.org/2022.emnlp-main.526/) for more details.
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/krystalan/ClidSum)
### Licensing Information
License: CC BY-NC-SA 4.0
### Citation Information
```
@inproceedings{wang-etal-2022-clidsum,
title = "{C}lid{S}um: A Benchmark Dataset for Cross-Lingual Dialogue Summarization",
author = "Wang, Jiaan and
Meng, Fandong and
Lu, Ziyao and
Zheng, Duo and
Li, Zhixu and
Qu, Jianfeng and
Zhou, Jie",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.526",
pages = "7716--7729",
abstract = "We present ClidSum, a benchmark dataset towards building cross-lingual summarization systems on dialogue documents. It consists of 67k+ dialogue documents and 112k+ annotated summaries in different target languages. Based on the proposed ClidSum, we introduce two benchmark settings for supervised and semi-supervised scenarios, respectively. We then build various baseline systems in different paradigms (pipeline and end-to-end) and conduct extensive experiments on ClidSum to provide deeper analyses. Furthermore, we propose mDialBART which extends mBART via further pre-training, where the multiple objectives help the pre-trained model capture the structural characteristics as well as key content in dialogues and the transformation from source to the target language. Experimental results show the superiority of mDialBART, as an end-to-end model, outperforms strong pipeline models on ClidSum. Finally, we discuss specific challenges that current approaches faced with this task and give multiple promising directions for future research. We have released the dataset and code at https://github.com/krystalan/ClidSum.",
}
```
### Contributions
Thanks to [@krystalan](https://github.com/krystalan) for adding this dataset. | [
-0.3368542492389679,
-0.42955830693244934,
0.22087731957435608,
0.2709539830684662,
-0.19968397915363312,
0.02278730273246765,
-0.19263695180416107,
-0.3140295743942261,
0.36946165561676025,
0.40570101141929626,
-0.820305347442627,
-0.44382137060165405,
-0.23899856209754944,
-0.08526229858... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AnonymousSub/MedQuAD_47441_Question_Answer_Pairs | AnonymousSub | 2023-03-09T15:02:29Z | 56 | 6 | null | [
"region:us"
] | 2023-03-09T15:02:29Z | 2023-03-09T15:02:27.000Z | 2023-03-09T15:02:27 | ---
dataset_info:
features:
- name: Questions
dtype: string
- name: Answers
dtype: string
splits:
- name: train
num_bytes: 24216623
num_examples: 47441
download_size: 9258859
dataset_size: 24216623
---
# Dataset Card for "MedQuAD_47441_Question_Answer_Pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6308921575546265,
-0.4131902754306793,
0.24950376152992249,
0.09683740139007568,
-0.34914055466651917,
-0.09445356577634811,
0.31182295083999634,
-0.04196874052286148,
0.8231142163276672,
0.6194606423377991,
-0.7474482655525208,
-0.5907946825027466,
-0.4086076021194458,
-0.0986738353967... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ErfanMoosaviMonazzah/fake-news-detection-dataset-English | ErfanMoosaviMonazzah | 2023-03-23T13:05:33Z | 56 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"fake news",
"text classification",
"region:us"
] | 2023-03-23T13:05:33Z | 2023-03-23T12:55:15.000Z | 2023-03-23T12:55:15 | ---
license: openrail
task_categories:
- text-classification
language:
- en
tags:
- fake news
- text classification
pretty_name: Fake News Detection Dataset (English)
size_categories:
- 10K<n<100K
---
This is a cleaned and split version of this dataset (https://www.kaggle.com/datasets/sadikaljarif/fake-news-detection-dataset-english) <br>
Labels:
- Fake News: 0
- Real News: 1
<br>
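A minimal loading sketch (split names are an assumption; the label mapping comes from the list above):
```python
from datasets import load_dataset

ds = load_dataset("ErfanMoosaviMonazzah/fake-news-detection-dataset-English")
label_names = {0: "Fake News", 1: "Real News"}  # mapping from the list above
print(ds)
```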
You can find the cleansing script at: https://github.com/ErfanMoosaviMonazzah/Fake-News-Detection | [
-0.37900418043136597,
-0.8681780695915222,
0.1728612780570984,
-0.07747595757246017,
-0.41310110688209534,
0.22795158624649048,
-0.0007908171974122524,
-0.3615247309207916,
0.8587929606437683,
0.7834892868995667,
-0.8411345481872559,
-0.6332840919494629,
-0.4352093040943146,
0.192559629678... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/boolq-natural-perturbations | metaeval | 2023-04-09T14:14:18Z | 56 | 0 | null | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2023-04-09T14:14:18Z | 2023-04-07T09:05:20.000Z | 2023-04-07T09:05:20 | ---
task_categories:
- text-classification
language:
- en
---
BoolQ questions with semantic alterations and human verification
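A minimal loading sketch (split and column names are not documented here, so only the repository id is taken from this card):
```python
from datasets import load_dataset

ds = load_dataset("metaeval/boolq-natural-perturbations")
print(ds)
```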
```bib
@article{khashabi2020naturalperturbations,
title={Natural Perturbation for Robust Question Answering},
    author={D. Khashabi and T. Khot and A. Sabharwal},
journal={arXiv preprint},
year={2020}
}
``` | [
-0.27067962288856506,
-1.2990108728408813,
0.23529209196567535,
-0.25707870721817017,
-0.17404238879680634,
0.24242044985294342,
0.16602647304534912,
-0.9207077026367188,
0.0666179209947586,
0.7931689620018005,
-0.7273305058479309,
-0.25814497470855713,
-0.3125714361667633,
0.4370964169502... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ejschwartz/oo-method-test | ejschwartz | 2023-09-03T14:34:23Z | 56 | 1 | null | [
"task_categories:text-classification",
"license:bsd",
"region:us"
] | 2023-09-03T14:34:23Z | 2023-06-07T19:33:55.000Z | 2023-06-07T19:33:55 | ---
license: bsd
task_categories:
- text-classification
#task_ids:
#- binary-classification
dataset_info:
features:
- name: Binary
dtype: string
- name: Addr
dtype: string
- name: Name
dtype: string
- name: Type
dtype:
class_label:
names:
'0': func
'1': method
- name: Disassembly
dtype: string
config_name: ejschwartz--oo-method-test
splits:
- name: combined
num_bytes: 6054378861
num_examples: 3537794
download_size: 1351783459
dataset_size: 6054378861
train-eval-index:
- config: default # The dataset config name to use. Example for datasets without configs: default. Example for glue: sst2
task: text-classification # The task category name (same as task_category). Example: question-answering
task_id: binary_classification # The AutoTrain task id. Example: extractive_question_answering
splits:
#train_split: train # The split to use for training. Example: train
eval_split: train # The split to use for evaluation. Example: test
col_mapping: # The columns mapping needed to configure the task_id.
Disassembly: text
Type: target
metrics:
- type: accuracy # The metric id. Example: wer. Use metric id from https://hf.co/metrics
      name: accuracy # The metric name to be displayed. Example: Test WER
---
# Dataset Card for OO Method Test Dataset
## Dataset Description
### Dataset Summary
This dataset describes compiled functions in various [small, simple C++ programs](https://github.com/sei-eschwartz/buildexes/tree/master/tests/src/oo).
These programs were automatically compiled using various versions of Microsoft's Visual C++ compiler and different compilation settings. The details can be found
in the [BuildExes](https://github.com/sei-eschwartz/buildexes) repository.
For each function, the dataset includes a disassembled representation of the compiled code (produced with ROSE's `bat-dis` tool), the function's name, and whether the function is an OO method or not.
**This dataset is largely intended for @ejschwartz to experiment with learning techniques and tools. The programs are artificial and are likely not representative of real programs.**
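A minimal loading sketch (the `combined` split and the field names are taken from the YAML metadata above; everything else is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("ejschwartz/oo-method-test", split="combined")

example = ds[0]
print(example["Name"], example["Type"])   # function name and its func/method label
print(example["Disassembly"][:300])       # start of the disassembled code
```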
### Supported Tasks and Leaderboards
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed] | [
-0.5287927985191345,
-0.7026386260986328,
0.035980187356472015,
-0.1384800374507904,
0.02632085792720318,
0.11118005961179733,
0.08133172988891602,
-0.4170570373535156,
0.16083627939224243,
0.3444260358810425,
-0.6021372675895691,
-0.7190757393836975,
-0.1529170274734497,
0.094290897250175... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt-archive/c4 | vietgpt-archive | 2023-06-13T13:24:33Z | 56 | 0 | null | [
"region:us"
] | 2023-06-13T13:24:33Z | 2023-06-13T03:11:40.000Z | 2023-06-13T03:11:40 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ClimatePolicyRadar/global-stocktake-documents | ClimatePolicyRadar | 2023-10-18T16:10:13Z | 56 | 4 | null | [
"size_categories:1M<n<10M",
"language:en",
"license:cc",
"climate",
"policy",
"legal",
"doi:10.57967/hf/1112",
"region:us"
] | 2023-10-18T16:10:13Z | 2023-06-28T16:00:48.000Z | 2023-06-28T16:00:48 | ---
language:
- en
tags:
- climate
- policy
- legal
size_categories:
- 1M<n<10M
license: cc
dataset_info:
features:
- name: family_slug
dtype: string
- name: types
sequence: string
- name: role
dtype: string
- name: block_index
dtype: int64
- name: date
dtype: date32
- name: geography_iso
dtype: string
- name: document_name
dtype: string
- name: variant
dtype: string
- name: type_confidence
dtype: float64
- name: document_languages
sequence: string
- name: text_block_id
dtype: string
- name: document_source_url
dtype: string
- name: author_is_party
dtype: bool
- name: type
dtype: string
- name: coords
sequence:
sequence: float64
- name: author
sequence: string
- name: family_name
dtype: string
- name: status
dtype: string
- name: collection_id
dtype: string
- name: family_id
dtype: string
- name: language
dtype: string
- name: page_number
dtype: int64
- name: text
dtype: string
- name: has_valid_text
dtype: bool
- name: document_id
dtype: string
- name: translated
dtype: bool
- name: document_content_type
dtype: string
- name: document_md5_sum
dtype: string
splits:
- name: train
num_bytes: 1278730693
num_examples: 1578645
download_size: 228690459
dataset_size: 1278730693
---
# Global Stocktake Open Data
This repo contains the data for the first [UNFCCC Global Stocktake](https://unfccc.int/topics/global-stocktake). The data consists of document metadata from sources relevant to the Global Stocktake process, as well as full text parsed from the majority of the documents.
The files in this dataset are as follows:
- `metadata.csv`: a CSV containing document metadata for each document we have collected. **This metadata may not be the same as what's stored in the source databases** – we have cleaned and added metadata where it's corrupted or missing.
- `full_text.parquet`: a parquet file containing the full text of each document we have parsed. Each row is a text block (paragraph) with all the associated text block and document metadata.
A research tool you can use to view this data and the results of some classifiers run on it is at [gst1.org](https://gst1.org).
This data is licensed according to CC BY 4.0, which is a license that represents the terms at the source repositories.
**Contents**
- [Sources and data completeness](#sources-and-data-completeness)
- [Field descriptions](#field-descriptions)
- [Known issues](#known-issues)
- [Usage in Python](#usage-in-python)
- [Loading metadata CSV](#loading-metadata-csv)
- [Loading text block data](#loading-text-block-data)
---
## Sources and data completeness
This dataset contains documents from the following sources:
* [Global Stocktake Information Portal](https://unfccc.int/topics/global-stocktake/information-portal)
* [NDC Registry](https://unfccc.int/NDCREG)
* [Adaptation Communications Registry](https://unfccc.int/ACR)
* [Fast-Start Finance Country Reports](https://unfccc.int/climatefinance?submissions)
* [IPCC Reports](https://www.ipcc.ch/reports/)
The following Global Stocktake relevant data sources are not yet in this dataset:
* [National Adaptation Plan Central Portal](https://napcentral.org/submitted-naps)
* [TNA Country Reports](https://unfccc.int/ttclear/tna/reports.html)
### Data completeness
The last refresh of the data was on **2023-10-18**.
We currently only parse text out of PDFs. Any non-PDF file will only be referenced in `metadata.csv`, and not be referenced in `full_text.parquet`.
We have yet to process approximately 150 of the 1700 documents due to formatting issues. We are working on resolving this issue as soon as possible. [See the document list here](https://labs.climatepolicyradar.org/global-stocktake/UNPROCESSED_DOCUMENTS.html).
## Data model
This dataset contains individual documents that are grouped into 'document families'.
The way to think of it is as follows:
* Each row in the dataset is a physical document. A physical document is a single document, in any format.
* All physical documents belong to document families. A document family is one or more physical documents, centred around a main document, which jointly contain all relevant information about the main document. For example, where a document has a translation, amendments or annexes, those files are stored together as a family.
### Getting unique text blocks
> TODO
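Until that section is filled in, one possible sketch (an assumption, not an official recipe) is to deduplicate on the `document_id` and `text_block_id` fields described below:
```python
import pandas as pd

text_blocks = pd.read_parquet("full_text.parquet")

# Keep one row per (document, text block) pair
unique_blocks = text_blocks.drop_duplicates(subset=["document_id", "text_block_id"])

# Optionally keep only original-language documents (drop machine translations)
originals_only = unique_blocks[~unique_blocks["translated"]]
```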
## Field descriptions
- `author`: document author (str)
- `author_is_party`: whether the author is a Party (national government) or not (bool)
- `block_index`: the index of a text block in a document. Starts from 0 (int)
- `coords`: coordinates of the text block on the page
- `date`: publication date of the document
- `document_content_type`: file type. We have only parsed text from PDFs.
- `document_id`: unique identifier for a document
- `document_family_id`: see *data model* section above
- `document_family_slug`: see *data model* section above
- `document_md5_sum`: md5sum of the document's content
- `document_name`: document title
- `document_source_url`: URL for document
- `document_variant`: used to identify translations. In `[nan, 'Translation', 'Original Language']`
- `has_valid_text`: our heuristic about whether text is valid or not in the document based on the parser
- `language`: language of the text block. Either `en` or `nan` - see known issues
- `page_number`: page number of text block (0-indexed)
- `text`: text in text block
- `text_block_id`: identifier for a text block which is unique per document
- `translated`: whether we have machine-translated the document to English. Where we have translated documents, both the original and translated exist.
- `type`: type of text block. In `["Text", "Title", "List", "Table", "Figure","Ambiguous"]`
- `type_confidence`: confidence that the text block is of the labelled type
- `types`: list of document types e.g. Nationally Determined Contribution, National Adaptation Plan (list[str])
- `version`: in `['MAIN', 'ANNEX', 'SUMMARY', 'AMENDMENT', 'SUPPORTING DOCUMENTATION', 'PREVIOUS VERSION']`
## Known issues
* Author names are sometimes corrupted
* Text block languages are sometimes missing or marked as `nan`
## Usage in Python
The easiest way to access this data via the terminal is to run `git clone <this-url>`.
### Loading metadata CSV
``` py
import pandas as pd
metadata = pd.read_csv("metadata.csv")
```
### Loading text block data
Once loaded into a Huggingface Dataset or Pandas DataFrame object the parquet file can be converted to other formats, e.g. Excel, CSV or JSON.
``` py
from datasets import load_dataset
import pandas as pd

# Using huggingface (easiest)
dataset = load_dataset("ClimatePolicyRadar/global-stocktake-documents")
# Using pandas
text_blocks = pd.read_parquet("full_text.parquet")
``` | [
-0.16724589467048645,
-0.38813820481300354,
0.4487127363681793,
0.23571287095546722,
-0.16974471509456635,
0.04231514781713486,
-0.18513669073581696,
-0.22339759767055511,
0.060530006885528564,
0.6812530755996704,
-0.4525977373123169,
-0.9374530911445618,
-0.5956141948699951,
0.20336125791... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sajid73/SUBESCO-audio-dataset | sajid73 | 2023-07-03T11:53:40Z | 56 | 1 | null | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:bn",
"license:cc-by-4.0",
"region:us"
] | 2023-07-03T11:53:40Z | 2023-07-02T09:54:55.000Z | 2023-07-02T09:54:55 | ---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- bn
pretty_name: SUST BANGLA EMOTIONAL SPEECH CORPUS
size_categories:
- 1K<n<10K
---
# SUST BANGLA EMOTIONAL SPEECH CORPUS
## Dataset Description
- **Homepage:** [bn_emotion_speech_corpus](https://huggingface.co/datasets/sustcsenlp/bn_emotion_speech_corpus)
- **Repository:**
- **Paper:** [SUBESCO PAPER](https://doi.org/10.1371/journal.pone.0250173)
- **Leaderboard:**
- **Point of Contact:** [Sadia Sultana](mailto:sadia-cse@sust.edu)
### Dataset Summary
SUBESCO is an audio-only emotional speech corpus of 7000 sentence-level utterances of the Bangla language. 20 professional actors (10 males and 10 females) participated in the recordings of 10 sentences for 7 target emotions. The emotions are Anger, Disgust, Fear, Happiness, Neutral, Sadness and Surprise. Total duration of the corpus is 7 hours 40 min 40 sec. Total size of the dataset is 2.03 GB. The dataset was evaluated by 50 raters (25 males, 25 females). Human perception test achieved a raw accuracy of 71%. All the details relating to creation, evaluation and analysis of SUBESCO have been described in the corresponding journal paper which has been published in Plos One.
https://doi.org/10.1371/journal.pone.0250173
### Downloading the data
```
from datasets import load_dataset
train = load_dataset("sajid73/SUBESCO-audio-dataset", split="train")
```
### Languages
This dataset contains `Bangla` Audio Data.
## Dataset Creation
This database was created as part of the PhD thesis project of the author, Sadia Sultana. It was designed and developed by the author in the Department of Computer Science and Engineering of Shahjalal University of Science and Technology, with a financial grant from the university. If you use the dataset, please cite SUBESCO and the corresponding academic journal publication in Plos One.
### Citation Information
```
@dataset{sadia_sultana_2021_4526477,
author = {Sadia Sultana},
title = {SUST Bangla Emotional Speech Corpus (SUBESCO)},
month = feb,
year = 2021,
note = {{This database was created as a part of PhD thesis
project of the author Sadia Sultana. It was
designed and developed by the author in the
Department of Computer Science and Engineering of
Shahjalal University of Science and Technology.
Financial grant was supported by the university.
If you use the dataset please cite SUBESCO and the
corresponding academic journal publication in Plos
One.}},
publisher = {Zenodo},
version = {version - 1.1},
doi = {10.5281/zenodo.4526477},
url = {https://doi.org/10.5281/zenodo.4526477}
}
```
### Contributors
| Name | University |
| ----------- | ----------- |
| Sadia Sultana | Shahjalal University of Science and Technology |
| Dr. M. Zafar Iqbal | Shahjalal University of Science and Technology |
| Dr. M. Shahidur Rahman | Shahjalal University of Science and Technology | | [
-0.41645780205726624,
-0.46078041195869446,
-0.1390836536884308,
0.3998659551143646,
-0.5791550874710083,
-0.23187461495399475,
-0.1309301108121872,
-0.3490872383117676,
0.5734304189682007,
0.19265128672122955,
-0.7040859460830688,
-0.8180736303329468,
-0.5288841128349304,
0.22469572722911... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
daqc/wikipedia-txt-spanish | daqc | 2023-08-06T19:06:28Z | 56 | 0 | null | [
"license:cc",
"region:us"
] | 2023-08-06T19:06:28Z | 2023-08-06T18:58:47.000Z | 2023-08-06T18:58:47 | ---
license: cc
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LEAP/subsampled_low_res | LEAP | 2023-10-09T16:42:18Z | 56 | 1 | null | [
"arxiv:2306.08754",
"region:us"
] | 2023-10-09T16:42:18Z | 2023-08-18T20:31:09.000Z | 2023-08-18T20:31:09 | Inputs and targets in this dataset are pre-normalized and scaled with .nc files found on the GitHub repo:
https://github.com/leap-stc/ClimSim/tree/main/preprocessing/normalizations
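A minimal sketch for inspecting one of those normalization files with xarray (the file name below is hypothetical; substitute an actual .nc file from the linked directory):
```python
import xarray as xr

# "input_mean.nc" is a placeholder name for one of the normalization files
norm = xr.open_dataset("input_mean.nc")
print(norm)
```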
Read more: https://arxiv.org/abs/2306.08754. | [
-0.46739503741264343,
-0.31942757964134216,
0.26304781436920166,
-0.034209948033094406,
-0.20326998829841614,
-0.20457440614700317,
-0.2795780897140503,
0.0010737355332821608,
0.2778204381465912,
0.7632697224617004,
-0.9193982481956482,
-0.8383467793464661,
-0.736382246017456,
0.1749847233... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HoangCuongNguyen/CTI-to-MITRE-dataset | HoangCuongNguyen | 2023-09-03T13:05:44Z | 56 | 2 | null | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-09-03T13:05:44Z | 2023-08-19T11:45:19.000Z | 2023-08-19T11:45:19 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lv2/Indonesia_LLama | lv2 | 2023-08-26T23:08:53Z | 56 | 2 | null | [
"region:us"
] | 2023-08-26T23:08:53Z | 2023-08-26T12:38:40.000Z | 2023-08-26T12:38:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 42278540
num_examples: 49969
download_size: 22157927
dataset_size: 42278540
---
# Dataset Card for "Indonesia_LLama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5060991644859314,
-0.21844735741615295,
0.17987647652626038,
0.45067232847213745,
-0.6302772164344788,
0.021020295098423958,
0.3380119800567627,
-0.4642754793167114,
0.9963632225990295,
0.6520847082138062,
-0.7609110474586487,
-0.8410611748695374,
-0.8056617379188538,
0.1399392187595367... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lavita/ChatDoctor-iCliniq | lavita | 2023-09-11T21:13:37Z | 56 | 2 | null | [
"region:us"
] | 2023-09-11T21:13:37Z | 2023-09-11T21:11:18.000Z | 2023-09-11T21:11:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: answer_icliniq
dtype: string
- name: answer_chatgpt
dtype: string
- name: answer_chatdoctor
dtype: string
splits:
- name: train
num_bytes: 16962106
num_examples: 7321
download_size: 9373080
dataset_size: 16962106
---
# Dataset Card for "ChatDoctor-iCliniq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6629843711853027,
-0.22016595304012299,
-0.09108736366033554,
0.16613280773162842,
-0.2040533721446991,
0.22476468980312347,
0.20383401215076447,
-0.040640681982040405,
0.7429875135421753,
0.4839954078197479,
-0.822334885597229,
-0.8994938135147095,
-0.6575722694396973,
-0.4473366439342... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kunishou/amenokaku-code-instruct | kunishou | 2023-10-03T11:38:43Z | 56 | 4 | null | [
"license:other",
"region:us"
] | 2023-10-03T11:38:43Z | 2023-10-01T01:04:50.000Z | 2023-10-01T01:04:50 | ---
license: other
license_name: mixed-liscence
license_link: LICENSE
---
# AmenokakuCode-instruct
## Overview
- A 5.2K instruction dataset specialized for code.
- The data was collected and adapted from programming learning content released under licenses that permit commercial use (English content was machine-translated into Japanese, and unnatural parts of the translation were fixed by hand).
- For learning content whose license was not stated, we contacted the rights holders individually and obtained permission to include it in this dataset.
## Dataset Details
The instruction tasks break down into 1,050 records of code generation (code_generation), 150 records of code behavior checking (check_code_behavor), and 4,000 records of code bug fixing (code_fix).
The detailed breakdown is as follows.
|source name|num records|license|url|
|:----|:----|:----|:----|
|データサイエンス100本ノック(構造化データ加工編)(Python解答)|100|[MIT](https://github.com/The-Japan-DataScientist-Society/100knocks-preprocess/blob/master/LICENSE)|https://github.com/The-Japan-DataScientist-Society/100knocks-preprocess|
|データサイエンス100本ノック(構造化データ加工編)(SQL解答)|100|[MIT](https://github.com/rootassist/100knocks-preprocess-inSQLandPython-withColab/blob/master/LICENSE)|https://github.com/rootassist/100knocks-preprocess-inSQLandPython-withColab|
|画像処理100本ノック|100|[MIT](https://github.com/ryoppippi/Gasyori100knock/blob/master/LICENSE)|https://github.com/ryoppippi/Gasyori100knock|
|言語処理100本ノック2020|100|[MIT](https://github.com/nlp100/nlp100.github.io/blob/develop/LICENSE)<br>[MIT](https://github.com/upura/nlp100v2020/blob/master/LICENSE)|(problems) https://github.com/nlp100/nlp100.github.io<br>(answers) https://github.com/upura/nlp100v2020|
|Python初学者のためのpandas100本ノック※|100|AmenokakuCode Liscence|https://qiita.com/kunishou/items/bd5fad9a334f4f5be51c|
|Python初学者のためのPolars100本ノック※|100|AmenokakuCode Liscence|https://qiita.com/kunishou/items/1386d14a136f585e504e|
|100 Numpy Execieses|100|[MIT](https://github.com/rougier/numpy-100/blob/master/LICENSE.txt)|https://github.com/rougier/numpy-100|
|100 Julia Exercises|100|The Unliscence|https://github.com/RoyiAvital/Julia100Exercises|
|自作Python100本ノック|100|AmenokakuCode Liscence|https://qiita.com/ahpjop/items/373f807d68044cda1c9b|
|Python-for-Beginners-Solve-50-Exercises-Live|50|[MIT](https://github.com/garg10may/Python-for-Beginners-Solve-50-Exercises-Live/blob/master/LICENSE)|https://github.com/garg10may/Python-for-Beginners-Solve-50-Exercises-Live|
|R初学者のためのtidyverse100本ノック|100|AmenokakuCode Liscence|https://qiita.com/nekobo/items/cbf32a13637273f229da|
|JavaScript Questions|155|[MIT](https://github.com/lydiahallie/javascript-questions/blob/master/LICENSE)|https://github.com/lydiahallie/javascript-questions|
|Break-It-Fix-It|4,000|[MIT](https://github.com/michiyasunaga/BIFI/blob/main/LICENSE)|https://github.com/michiyasunaga/BIFI|
※ Learning content that I created myself in the past.
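A minimal loading sketch (the `train` split name is an assumption; only the repository id comes from this card):
```python
from datasets import load_dataset

# Split name is assumed; inspect the repo if it differs
ds = load_dataset("kunishou/amenokaku-code-instruct", split="train")
print(ds[0])
```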
## License
Because each record keeps the license of its source, the dataset as a whole is under mixed licenses.
Data whose source does not state a license, and for which we individually obtained the rights holders' permission to include it for language-model training, is released under the [AmenokakuCode Liscence](https://github.com/kunishou/amenokaku-code-instruct/blob/main/AmenokakuCode%20Liscence). This license permits free use of the data only for training language models (selling or redistributing the data itself is therefore not allowed).
## Dataset Updates
We plan to keep adding to this dataset whenever we find programming learning content that allows commercial use.
**If you find useful content, or are willing to contribute learning content you have created yourself, please get in touch.**
## Dataset Name
The name Amenokaku comes from [Amenokaku-no-kami](http://kojiki.kokugakuin.ac.jp/shinmei/amenokakunokami/), a deer deity that appears in the Kojiki.
## Github
https://github.com/kunishou/amenokaku-code-instruct | [
-0.5559020638465881,
-0.5883769392967224,
0.30229195952415466,
0.4662507176399231,
-0.1574951708316803,
0.11498350650072098,
-0.26691532135009766,
-0.3705274164676666,
0.5287347435951233,
0.12435230612754822,
-0.4274010956287384,
-0.7111800909042358,
-0.28330254554748535,
0.192742377519607... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nadsoft/Jordan-Audio | nadsoft | 2023-10-11T08:20:58Z | 56 | 0 | null | [
"region:us"
] | 2023-10-11T08:20:58Z | 2023-10-11T08:17:45.000Z | 2023-10-11T08:17:45 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 669684377.68
num_examples: 5044
download_size: 660360475
dataset_size: 669684377.68
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jo_aud"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6079072952270508,
-0.5867852568626404,
0.09272506833076477,
0.1917182356119156,
-0.1704917848110199,
-0.11190445721149445,
0.27285754680633545,
-0.188079833984375,
0.9884785413742065,
0.7355281710624695,
-0.7954195141792297,
-0.8259356021881104,
-0.6500076651573181,
-0.12979614734649658... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lostkyd/pdf_forms | Lostkyd | 2023-10-30T10:01:17Z | 56 | 0 | null | [
"region:us"
] | 2023-10-30T10:01:17Z | 2023-10-30T09:36:16.000Z | 2023-10-30T09:36:16 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pesc101/CodeAlpacpa-20k-llama-format | pesc101 | 2023-10-30T11:07:46Z | 56 | 0 | null | [
"region:us"
] | 2023-10-30T11:07:46Z | 2023-10-30T10:56:13.000Z | 2023-10-30T10:56:13 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 6535066
num_examples: 20022
download_size: 3269704
dataset_size: 6535066
---
# Dataset Card for "CodeAlpacpa-20k-llama-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4036215841770172,
-0.08713018894195557,
0.030314121395349503,
0.8639485239982605,
-0.5058469176292419,
0.30786076188087463,
0.08880428224802017,
-0.2582371234893799,
1.0647112131118774,
0.6279873847961426,
-0.6299552321434021,
-0.8130576014518738,
-0.7245714068412781,
-0.006820569746196... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kosta-naumenko/medflex | kosta-naumenko | 2023-11-17T12:38:41Z | 56 | 0 | null | [
"region:us"
] | 2023-11-17T12:38:41Z | 2023-10-31T13:14:13.000Z | 2023-10-31T13:14:13 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
splits:
- name: train
num_bytes: 2574069
num_examples: 1934
download_size: 314783
dataset_size: 2574069
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "medflex"
dataset = load_dataset("kosta-naumenko/medflex", split='train', download_mode='force_redownload', verification_mode='no_checks')
'tokens' - a list of lists of words for each sentence (use is_split_into_words=True when tokenizing)
'ner_tags' - a list of lists of per-word classes
- 0 - not a symptom
- 1 - beginning of a symptom
- 2 - continuation of a symptom
An example of further processing: https://huggingface.co/learn/nlp-course/chapter7/2
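A minimal sketch pairing tokens with the label ids described above (the string label names are an assumption; the ids come from this card):
```python
from datasets import load_dataset

ds = load_dataset("kosta-naumenko/medflex", split="train")

# 0 = not a symptom, 1 = beginning of a symptom, 2 = continuation of a symptom
id2label = {0: "O", 1: "B-SYMPTOM", 2: "I-SYMPTOM"}  # names are assumed, ids are from the card

example = ds[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(token, id2label[tag])
```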
| [
-0.39410316944122314,
-0.6054456830024719,
0.12785789370536804,
0.2575981020927429,
-0.553767204284668,
0.1464008390903473,
-0.22069162130355835,
-0.11078833043575287,
0.4965977072715759,
0.4207595884799957,
-1.0289711952209473,
-0.6423448920249939,
-0.5004250407218933,
0.44145679473876953... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
slplab/QoLT_apacAnd5Sentences | slplab | 2023-11-02T12:36:57Z | 56 | 0 | null | [
"region:us"
] | 2023-11-02T12:36:57Z | 2023-11-02T12:36:07.000Z | 2023-11-02T12:36:07 | ---
dataset_info:
features:
- name: speaker
dtype: string
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: txt
dtype: string
- name: severity
dtype: int64
splits:
- name: train
num_bytes: 507067812
num_examples: 2352
- name: validation
num_bytes: 62855037
num_examples: 294
- name: test
num_bytes: 62245337
num_examples: 294
download_size: 278294583
dataset_size: 632168186
---
# Dataset Card for "QoLT_apacAnd5Sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5855717062950134,
0.21448585391044617,
0.3024299442768097,
0.2351403832435608,
-0.3930108845233917,
-0.018492499366402626,
0.4524361193180084,
-0.1438986212015152,
0.7404292821884155,
0.31074830889701843,
-0.822072446346283,
-0.9373213648796082,
-0.47764018177986145,
-0.0239629317075014... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KelNoMel/llama2-poi-traj-prediction | KelNoMel | 2023-11-24T07:45:26Z | 56 | 0 | null | [
"region:us"
] | 2023-11-24T07:45:26Z | 2023-11-05T14:53:41.000Z | 2023-11-05T14:53:41 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nmd2k/apps_rlaif | nmd2k | 2023-11-27T17:41:45Z | 56 | 0 | null | [
"task_categories:text-generation",
"task_categories:reinforcement-learning",
"size_categories:1K<n<10K",
"license:mit",
"code",
"arxiv:2105.09938",
"arxiv:2307.09288",
"region:us"
] | 2023-11-27T17:41:45Z | 2023-11-06T08:15:38.000Z | 2023-11-06T08:15:38 | ---
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-generation
- reinforcement-learning
pretty_name: apps_rlaif
dataset_info:
features:
- name: problem_id
dtype: int64
- name: question
dtype: string
- name: input_output
dtype: string
- name: difficulty
dtype: string
- name: url
dtype: string
- name: starter_code
dtype: string
- name: prefer_solution
dtype: string
- name: flaw_solution
sequence: string
splits:
- name: train
num_bytes: 196914903
num_examples: 23129
download_size: 38020746
dataset_size: 196914903
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- code
---
# APPS Dataset for Reinforcement Learning with AI Feedback
## Dataset Details
[APPS_RLAIF](https://huggingface.co/datasets/nmd2k/apps_rlaif/) extends APPS [^1]
by using chat LLMs to create multiple variances of each solution to the defined problems.
For each solution, we use LLama 34B [^2] to transform the original solution into variances and rank them by score.
The generation flow is illustrated below; each variance is created from the previous version of it in the chat.
We iterated over each solution `n=3` times.
<img src="https://cdn-uploads.huggingface.co/production/uploads/63733f7fd398fce0dd45125c/MhfwiSbafLQDvxQdTuR-2.png" width="600" />
## Languages
The dataset contains problem descriptions in English and code solutions in Python.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("nmd2k/apps_rlaif")
DatasetDict({
train: Dataset({
features: ['problem_id', 'question', 'solutions', 'input_output', 'difficulty', 'url', 'starter_code', 'variances'],
num_rows: 4999
})
})
```
**How to use the dataset**
Each sample consists of a problem and its solution (from APPS [^1]) together with a list of solution variances generated by the LLM, stored in the `variances` field.
For example:
```json
{'problem_id': 0,
'question': 'Polycarp has $n$ different binary words. A word called binary if it contains only characters \'0\' and \'1\'. For example, these words are binary: "0001", "11", "0" and "0011100".\n\nPolycarp wants to offer his set of $n$ binary words to play a game "words". In this game, players name words and each next word (starting from the second) must start with the last character of the previous word. The first word can be any. For example, these sequence of words can be named during the game: "0101", "1", "10", "00", "00001".\n\nWord reversal is the operation of reversing the order of the characters. For example, the word "0111" after the reversal becomes "1110", the word "11010" after the reversal becomes "01011".\n\nProbably, Polycarp has such a set of words that there is no way to put them in the order correspondent to the game rules. In this situation, he wants to reverse some words from his set so that: the final set of $n$ words still contains different words (i.e. all words are unique); there is a way to put all words of the final set of words in the order so that the final sequence of $n$ words is consistent with the game rules. \n\nPolycarp wants to reverse minimal number of words. Please, help him.\n\n\n-----Input-----\n\nThe first line of the input contains one integer $t$ ($1 \\le t \\le 10^4$) — the number of test cases in the input. Then $t$ test cases follow.\n\nThe first line of a test case contains one integer $n$ ($1 \\le n \\le 2\\cdot10^5$) — the number of words in the Polycarp\'s set. Next $n$ lines contain these words. All of $n$ words aren\'t empty and contains only characters \'0\' and \'1\'. The sum of word lengths doesn\'t exceed $4\\cdot10^6$. All words are different.\n\nGuaranteed, that the sum of $n$ for all test cases in the input doesn\'t exceed $2\\cdot10^5$. Also, guaranteed that the sum of word lengths for all test cases in the input doesn\'t exceed $4\\cdot10^6$.\n\n\n-----Output-----\n\nPrint answer for all of $t$ test cases in the order they appear.\n\nIf there is no answer for the test case, print -1. Otherwise, the first line of the output should contain $k$ ($0 \\le k \\le n$) — the minimal number of words in the set which should be reversed. The second line of the output should contain $k$ distinct integers — the indexes of the words in the set which should be reversed. Words are numerated from $1$ to $n$ in the order they appear. If $k=0$ you can skip this line (or you can print an empty line). If there are many answers you can print any of them.\n\n\n-----Example-----\nInput\n4\n4\n0001\n1000\n0011\n0111\n3\n010\n101\n0\n2\n00000\n00001\n4\n01\n001\n0001\n00001\n\nOutput\n1\n3 \n-1\n0\n\n2\n1 2',
'solutions': "for _ in range(int(input())):\n n = int(input())\n mass = []\n zo = 0\n oz = 0\n zz = 0\n oo = 0\n ozs = []\n zos = []\n ozss = set()\n zoss = set()\n for j in range(n):\n k = input()\n mass.append(k)\n if k[0] == '0' and k[-1] == '1':\n zoss.add(k)\n zos.append(j + 1)\n zo += 1\n elif k[0] == '1' and k[-1] == '0':\n ozss.add(k)\n ozs.append(j + 1)\n oz += 1\n elif k[0] == '0' and k[-1] == '0':\n zz += 1\n else:\n oo += 1\n if zz and oo and not oz and not zo:\n print(-1)\n continue\n else:\n if zo > oz:\n print((zo - oz) // 2)\n ans = []\n need = (zo - oz) // 2\n i = 0\n while need:\n zzz = mass[zos[i] - 1][len(mass[zos[i] - 1]) - 1:: -1]\n if zzz not in ozss:\n ans.append(zos[i])\n need -= 1\n i += 1\n print(*ans)\n else:\n print((oz - zo) // 2)\n ans = []\n need = (oz - zo) // 2\n i = 0\n while need:\n zzz = mass[ozs[i] - 1][len(mass[ozs[i] - 1]) - 1:: -1]\n if zzz not in zoss:\n ans.append(ozs[i])\n need -= 1\n i += 1\n print(*ans)\n",
'input_output': '{\n "inputs": [\n "4\\n4\\n0001\\n1000\\n0011\\n0111\\n3\\n010\\n101\\n0\\n2\\n00000\\n00001\\n4\\n01\\n001\\n0001\\n00001\\n"\n ],\n "outputs": [\n "1\\n3 \\n-1\\n0\\n\\n2\\n1 2 \\n"\n ]\n}',
'difficulty': 'interview',
'url': 'https://codeforces.com/problemset/problem/1259/D',
'starter_code': '',
'variances': ["for _ in range(int(input())):\n n = int(input())\n numbers = []\n zero_start_one_end = 0\n one_start_zero_end = 0\n zero_start_zero_end = 0\n one_start_one_end = 0\n zero_start_one_end_indices = []\n one_start_zero_end_indices = []\n zero_start_one_end_set = set()\n one_start_zero_end_set = set()\n for j in range(n):\n k = input()\n numbers.append(k)\n if k[0] == '0' and k[-1] == '1':\n one_start_zero_end_set.add(k)\n one_start_zero_end_indices.append(j + 1)\n one_start_zero_end += 1\n elif k[0] == '1' and k[-1] == '0':\n zero_start_one_end_set.add(k)\n zero_start_one_end_indices.append(j + 1)\n zero_start_one_end += 1\n elif k[0] == '0' and k[-1] == '0':\n zero_start_zero_end += 1\n else:\n one_start_one_end += 1\n if zero_start_zero_end and one_start_one_end and not one_start_zero_end and not zero_start_one_end:\n print(-1)\n continue\n else:\n if zero_start_one_end > one_start_zero_end:\n print((zero_start_one_end - one_start_zero_end) // 2)\n result = []\n required = (zero_start_one_end - one_start_zero_end) // 2\n index = 0\n while required:\n reversed_str = numbers[zero_start_one_end_indices[index] - 1][len(numbers[zero_start_one_end_indices[index] - 1]) - 1:: -1]\n if reversed_str not in one_start_zero_end_set:\n result.append(zero_start_one_end_indices[index])\n required -= 1\n index += 1\n print(*result)\n else:\n print((one_start_zero_end - zero_start_one_end) // 2)\n result = []\n required = (one_start_zero_end - zero_start_one_end) // 2\n index = 0\n while required:\n reversed_str = numbers[one_start_zero_end_indices[index] - 1][len(numbers[one_start_zero_end_indices[index] - 1]) - 1:: -1]\n if reversed_str not in zero_start_one_end_set:\n result.append(one_start_zero_end_indices[index])\n required -= 1\n index += 1\n print(*result)",
"for _ in range(int(input())):\n n = int(input())\n sequence = []\n first_zero_last_one = 0\n first_one_last_zero = 0\n first_zero_last_zero = 0\n first_one_last_one = 0\n first_zero_last_one_positions = []\n first_one_last_zero_positions = []\n first_zero_last_one_set = set()\n first_one_last_zero_set = set()\n for i in range(n):\n element = input()\n sequence.append(element)\n if element[0] == '0' and element[-1] == '1':\n first_one_last_zero_set.add(element)\n first_one_last_zero_positions.append(i + 1)\n first_one_last_zero += 1\n elif element[0] == '1' and element[-1] == '0':\n first_zero_last_one_set.add(element)\n first_zero_last_one_positions.append(i + 1)\n first_zero_last_one += 1\n elif element[0] == '0' and element[-1] == '0':\n first_zero_last_zero += 1\n else:\n first_one_last_one += 1\n if first_zero_last_zero and first_one_last_one and not first_zero_last_one and not first_one_last_zero:\n print(-1)\n continue\n else:\n if first_zero_last_one > first_one_last_zero:\n print((first_zero_last_one - first_one_last_zero) // 2)\n solution = []\n necessary = (first_zero_last_one - first_one_last_zero) // 2\n position = 0\n while necessary:\n reversed_element = sequence[first_zero_last_one_positions[position] - 1][len(sequence[first_zero_last_one_positions[position] - 1]) - 1:: -1]\n if reversed_element not in first_one_last_zero_set:\n solution.append(first_zero_last_one_positions[position])\n necessary -= 1\n position += 1\n print(*solution)\n else:\n print((first_one_last_zero - first_zero_last_one) // 2)\n solution = []\n necessary = (first_one_last_zero - first_zero_last_one) // 2\n position = 0\n while necessary:\n reversed_element = sequence[first_one_last_zero_positions[position] - 1][len(sequence[first_one_last_zero_positions[position] - 1]) - 1:: -1]\n if reversed_element not in first_zero_last_one_set:\n solution.append(first_one_last_zero_positions[position])\n necessary -= 1\n position += 1\n print(*solution)",
"for _ in range(int(input())):\n number_of_cases = int(input())\n sequence_list = []\n zero_start_one_end_count = 0\n one_start_zero_end_count = 0\n zero_start_zero_end_count = 0\n one_start_one_end_count = 0\n zero_start_one_end_index_list = []\n one_start_zero_end_index_list = []\n zero_start_one_end_set = set()\n one_start_zero_end_set = set()\n for case_index in range(number_of_cases):\n sequence = input()\n sequence_list.append(sequence)\n if sequence[0] == '0' and sequence[-1] == '1':\n one_start_zero_end_set.add(sequence)\n one_start_zero_end_index_list.append(case_index + 1)\n one_start_zero_end_count += 1\n elif sequence[0] == '1' and sequence[-1] == '0':\n zero_start_one_end_set.add(sequence)\n zero_start_one_end_index_list.append(case_index + 1)\n zero_start_one_end_count += 1\n elif sequence[0] == '0' and sequence[-1] == '0':\n zero_start_zero_end_count += 1\n else:\n one_start_one_end_count += 1\n if zero_start_zero_end_count and one_start_one_end_count and not one_start_zero_end_count and not zero_start_one_end_count:\n print(-1)\n continue\n else:\n if zero_start_one_end_count > one_start_zero_end_count:\n print((zero_start_one_end_count - one_start_zero_end_count) // 2)\n output = []\n required_count = (zero_start_one_end_count - one_start_zero_end_count) // 2\n index = 0\n while required_count:\n reversed_sequence = sequence_list[zero_start_one_end_index_list[index] - 1][len(sequence_list[zero_start_one_end_index_list[index] - 1]) - 1:: -1]\n if reversed_sequence not in one_start_zero_end_set:\n output.append(zero_start_one_end_index_list[index])\n required_count -= 1\n index += 1\n print(*output)\n else:\n print((one_start_zero_end_count - zero_start_one_end_count) // 2)\n output = []\n required_count = (one_start_zero_end_count - zero_start_one_end_count) // 2\n index = 0\n while required_count:\n reversed_sequence = sequence_list[one_start_zero_end_index_list[index] - 1][len(sequence_list[one_start_zero_end_index_list[index] - 1]) - 1:: -1]\n if reversed_sequence not in zero_start_one_end_set:\n output.append(one_start_zero_end_index_list[index])\n required_count -= 1\n index += 1\n print(*output)\n### User Message\nCreate another variance this code. New variance:\n### Assistant\n\nfor _ in range(int(input())):\n number_of_cases = int(input())\n sequence_list = []\n count_start_end_zero_one = 0\n count_start_end_one_zero = 0\n count_start_zero_end_zero = 0\n count_start_one_end_one = 0\n index_start_end_zero_one = []\n index_start_end_one_zero = []\n set_start_end_zero_one = set()\n set_start_end_one_zero = set()\n for case_index"]
}
```
<!-- ## Dataset Creation
If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section.
## Citation
**BibTeX:**
```
@misc{apps_rlaif,
author = {Manh, Dung Nguyen and Hai, Nam Le and Bui, Nghi DQ},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
Naturally, you should also cite the original LLaMA-2 paper [[2]]([^2]) and the APPS paper [[1]]([^1]).
-->
[^1]: https://arxiv.org/abs/2105.09938
[^2]: https://arxiv.org/abs/2307.09288
| [
-0.4613986015319824,
-0.5630689263343811,
0.31301349401474,
0.540930986404419,
-0.10738803446292877,
-0.26827237010002136,
0.05707293003797531,
-0.12255658954381943,
0.45695391297340393,
0.5386666655540466,
-0.7514044046401978,
-0.41398924589157104,
-0.47258517146110535,
0.4373698234558105... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xrizs/instseg | xrizs | 2023-11-06T11:43:54Z | 56 | 0 | null | [
"region:us"
] | 2023-11-06T11:43:54Z | 2023-11-06T11:43:48.000Z | 2023-11-06T11:43:48 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 34491927.0
num_examples: 58
- name: val
num_bytes: 12337041.0
num_examples: 20
- name: test
num_bytes: 5255226.0
num_examples: 9
download_size: 52063862
dataset_size: 52084194.0
---
# Dataset Card for "instseg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5765759944915771,
-0.029615910723805428,
0.21608836948871613,
0.22743283212184906,
-0.37347733974456787,
0.1006755381822586,
0.1624617576599121,
-0.07386509329080582,
0.6448614597320557,
0.6239091753959656,
-0.729295551776886,
-0.749426007270813,
-0.6074039936065674,
-0.2281352728605270... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sesamoo/pdf-3dsimulation | Sesamoo | 2023-11-09T17:18:04Z | 56 | 0 | null | [
"license:unknown",
"region:us"
] | 2023-11-09T17:18:04Z | 2023-11-09T17:16:30.000Z | 2023-11-09T17:16:30 | ---
license: unknown
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
medmac01/dar-en-space-test | medmac01 | 2023-11-10T10:14:26Z | 56 | 0 | null | [
"region:us"
] | 2023-11-10T10:14:26Z | 2023-11-10T08:43:35.000Z | 2023-11-10T08:43:35 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kristaller486/ALMA-prompt-completion | kristaller486 | 2023-11-15T08:43:52Z | 56 | 0 | null | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:ru",
"language:cs",
"language:de",
"language:is",
"language:zh",
"license:mit",
"arxiv:2309.11674",
"region:us"
] | 2023-11-15T08:43:52Z | 2023-11-14T10:58:43.000Z | 2023-11-14T10:58:43 | ---
license: mit
task_categories:
- translation
language:
- en
- ru
- cs
- de
- is
- zh
pretty_name: ALMA Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for ALMA-prompt-completion
[ALMA Dataset](https://github.com/fe1ixxu/ALMA/tree/master/human_written_data) in the format of [prompt-completion](https://github.com/higgsfield-ai/higgsfield/tree/main/tutorials)
- **Created by:** fe1ixxu
- **Shared by:** me
- **Language(s) (NLP):** English, Czech, German, Russian, Icelandic, Chinese
- **License:** MIT
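As a quick illustration of the prompt-completion format, a minimal loading sketch could look like the following (it assumes the dataset loads with a default configuration, a `train` split, and `prompt`/`completion` columns — none of which are confirmed by this card):
```python
from datasets import load_dataset

# Assumed: default config and a "train" split with "prompt"/"completion" columns
dataset = load_dataset("kristaller486/ALMA-prompt-completion", split="train")

example = dataset[0]
print(example["prompt"])      # instruction-wrapped source sentence (assumed layout)
print(example["completion"])  # reference translation (assumed layout)
```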
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [https://github.com/fe1ixxu/ALMA]
- **Paper [optional]:** [https://arxiv.org/abs/2309.11674]
## Uses
LLM translators | [
-0.18121153116226196,
-0.2124183475971222,
0.5422187447547913,
0.1299058198928833,
-0.3752792775630951,
0.23139838874340057,
-0.09167499095201492,
-0.08155278861522675,
0.43675482273101807,
0.4491036534309387,
-1.0561261177062988,
-1.2493953704833984,
-0.3825427293777466,
0.472902029752731... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ekolasky/NQLongAnswersForCustomLEDForQA | ekolasky | 2023-11-15T04:21:56Z | 56 | 0 | null | [
"region:us"
] | 2023-11-15T04:21:56Z | 2023-11-14T20:39:16.000Z | 2023-11-14T20:39:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: start_positions
sequence: int64
- name: end_positions
sequence: int64
- name: global_attention_mask
sequence: int64
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1000140417
num_examples: 12323
- name: validation
num_bytes: 47307511
num_examples: 588
download_size: 119671635
dataset_size: 1047447928
---
# Dataset Card for "NQLongAnswersForCustomLEDForQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5407509803771973,
-0.1868482232093811,
0.3751160800457001,
0.27992209792137146,
-0.180329367518425,
0.1654863804578781,
0.4329574704170227,
0.03339841961860657,
0.8175276517868042,
0.6887535452842712,
-0.9446442127227783,
-0.6122259497642517,
-0.3486766517162323,
-0.045656558126211166,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
higgsfield/question_to_sql | higgsfield | 2023-11-14T21:54:15Z | 56 | 0 | null | [
"region:us"
] | 2023-11-14T21:54:15Z | 2023-11-14T21:54:11.000Z | 2023-11-14T21:54:11 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 19324361
num_examples: 78577
download_size: 6963083
dataset_size: 19324361
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "question_to_sql"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.614671528339386,
-0.7030301094055176,
0.290326327085495,
0.12624137103557587,
-0.25614166259765625,
-0.2100694477558136,
0.21216313540935516,
0.0009343277779407799,
0.8298345804214478,
0.6993156671524048,
-0.9365435242652893,
-0.700305700302124,
-0.2795758545398712,
-0.21125942468643188... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SebastianSchramm/ultrafeedback_binarized_prefs_filtered | SebastianSchramm | 2023-11-15T12:14:31Z | 56 | 0 | null | [
"region:us"
] | 2023-11-15T12:14:31Z | 2023-11-15T12:13:37.000Z | 2023-11-15T12:13:37 | ---
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
- split: test_prefs
path: data/test_prefs-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
splits:
- name: train_prefs
num_bytes: 360858348.6922183
num_examples: 56286
- name: test_prefs
num_bytes: 11663780.3125
num_examples: 1825
download_size: 180008939
dataset_size: 372522129.0047183
---
# Dataset Card for "ultrafeedback_binarized_prefs_filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7012701034545898,
-0.31697437167167664,
0.27427250146865845,
0.4103192687034607,
-0.5607607960700989,
0.04078133404254913,
0.3413972556591034,
-0.2531193792819977,
0.7499613165855408,
1.0406252145767212,
-0.9131790399551392,
-0.9480178952217102,
-0.43819954991340637,
-0.054932851344347,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomaarsen/ner-orgs | tomaarsen | 2023-11-22T16:53:37Z | 56 | 1 | null | [
"region:us"
] | 2023-11-22T16:53:37Z | 2023-11-22T09:36:41.000Z | 2023-11-22T09:36:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-ORG
'2': I-ORG
splits:
- name: train
num_bytes: 40381520.59961503
num_examples: 109424
- name: validation
num_bytes: 5782294.96333573
num_examples: 15908
- name: test
num_bytes: 10727120.198367199
num_examples: 28124
download_size: 14938552
dataset_size: 56890935.76131796
---
# Dataset Card for "ner-orgs"
This dataset is a concatenation of subsets of [Few-NERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd), [CoNLL 2003](https://huggingface.co/datasets/conll2003) and [OntoNotes v5](https://huggingface.co/datasets/tner/ontonotes5), but only the "B-ORG" and "I-ORG" labels.
Exactly half of the samples per split contain organisations, while the other half do not contain any.
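For a quick sanity check, something like the following loads the published dataset and maps the integer tags back to their string names (a minimal sketch; it assumes the dataset is publicly reachable under this repository id):
```python
from datasets import load_dataset

dataset = load_dataset("tomaarsen/ner-orgs")

# ner_tags is a Sequence(ClassLabel(...)), so the label names live on .feature
label_names = dataset["train"].features["ner_tags"].feature.names  # ["O", "B-ORG", "I-ORG"]

sample = dataset["train"][0]
for token, tag in zip(sample["tokens"], sample["ner_tags"]):
    print(f"{token}\t{label_names[tag]}")
```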
It was generated using the following script:
```py
import random
from datasets import load_dataset, concatenate_datasets, Features, Sequence, ClassLabel, Value, DatasetDict
FEATURES = Features(
{
"tokens": Sequence(feature=Value(dtype="string")),
"ner_tags": Sequence(feature=ClassLabel(names=["O", "B-ORG", "I-ORG"])),
}
)
def load_fewnerd():
def mapper(sample):
sample["ner_tags"] = [int(tag == 5) for tag in sample["ner_tags"]]
sample["ner_tags"] = [
2 if tag == 1 and idx > 0 and sample["ner_tags"][idx - 1] == 1 else tag
for idx, tag in enumerate(sample["ner_tags"])
]
return sample
dataset = load_dataset("DFKI-SLT/few-nerd", "supervised")
dataset = dataset.map(mapper, remove_columns=["id", "fine_ner_tags"])
dataset = dataset.cast(FEATURES)
return dataset
def load_conll():
label_mapping = {3: 1, 4: 2}
def mapper(sample):
sample["ner_tags"] = [label_mapping.get(tag, 0) for tag in sample["ner_tags"]]
return sample
dataset = load_dataset("conll2003")
dataset = dataset.map(mapper, remove_columns=["id", "pos_tags", "chunk_tags"])
dataset = dataset.cast(FEATURES)
return dataset
def load_ontonotes():
label_mapping = {11: 1, 12: 2}
def mapper(sample):
sample["ner_tags"] = [label_mapping.get(tag, 0) for tag in sample["ner_tags"]]
return sample
dataset = load_dataset("tner/ontonotes5")
dataset = dataset.rename_column("tags", "ner_tags")
dataset = dataset.map(mapper)
dataset = dataset.cast(FEATURES)
return dataset
def has_org(sample):
return bool(sum(sample["ner_tags"]))
def has_no_org(sample):
return not has_org(sample)
def preprocess_raw_dataset(raw_dataset):
# Set the number of sentences without an org equal to the number of sentences with an org
dataset_org = raw_dataset.filter(has_org)
dataset_no_org = raw_dataset.filter(has_no_org)
dataset_no_org = dataset_no_org.select(random.sample(range(len(dataset_no_org)), k=len(dataset_org)))
dataset = concatenate_datasets([dataset_org, dataset_no_org])
return dataset
def main() -> None:
fewnerd_dataset = load_fewnerd()
conll_dataset = load_conll()
ontonotes_dataset = load_ontonotes()
raw_train_dataset = concatenate_datasets([fewnerd_dataset["train"], conll_dataset["train"], ontonotes_dataset["train"]])
raw_eval_dataset = concatenate_datasets([fewnerd_dataset["validation"], conll_dataset["validation"], ontonotes_dataset["validation"]])
raw_test_dataset = concatenate_datasets([fewnerd_dataset["test"], conll_dataset["test"], ontonotes_dataset["test"]])
train_dataset = preprocess_raw_dataset(raw_train_dataset)
eval_dataset = preprocess_raw_dataset(raw_eval_dataset)
test_dataset = preprocess_raw_dataset(raw_test_dataset)
dataset_dict = DatasetDict(
{
"train": train_dataset,
"validation": eval_dataset,
"test": test_dataset,
}
)
dataset_dict.push_to_hub("ner-orgs", private=True)
if __name__ == "__main__":
main()
``` | [
-0.40353602170944214,
-0.5503494739532471,
0.22606654465198517,
0.21033546328544617,
-0.24351096153259277,
-0.08571742475032806,
-0.3489384651184082,
-0.17138035595417023,
0.2189818173646927,
0.4109189510345459,
-0.5846763849258423,
-0.6796846985816956,
-0.5132513046264648,
0.3488676846027... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ctheodoris/Genecorpus-30M | ctheodoris | 2023-11-11T06:42:26Z | 55 | 35 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-11T06:42:26Z | 2022-03-12T21:21:46.000Z | 2022-03-12T21:21:46 | ---
license: apache-2.0
---
# Dataset Card for Genecorpus-30M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Species](#species)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
<!---
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
--->
## Dataset Description
<!--- **Paper:**
--->
- **Point of Contact:** christina.theodoris@gladstone.ucsf.edu
### Dataset Summary
We assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
See [our manuscript](https://rdcu.be/ddrx0) for details.
### Supported Tasks
This corpus was used for pretraining [Geneformer](https://rdcu.be/ddrx0) and is compatible with pretraining or fine-tuning Geneformer or similar models.
### Species
Homo sapiens
## Dataset Structure
### Data Instances
Genecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format. Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable.
To accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s nonzero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected.
The rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Genecorpus-30M. The token dictionary mapping each token ID to special tokens (pad and mask) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pkl).
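To make the procedure above concrete, here is a small illustrative sketch of the rank value encoding computation on hypothetical numbers (the gene IDs, counts and median values are made up; this is not the actual preprocessing code). In the released corpus, each resulting gene identifier would then be mapped to an integer token via token_dictionary.pkl.
```python
import numpy as np

# Hypothetical raw transcript counts for one cell (five example genes)
genes = np.array(["ENSG_A", "ENSG_B", "ENSG_C", "ENSG_D", "ENSG_E"])
counts = np.array([120.0, 3.0, 0.0, 45.0, 7.0])

# Hypothetical nonzero median expression of each gene across the whole corpus
nonzero_medians = np.array([100.0, 2.0, 5.0, 60.0, 1.0])

# 1) Normalize by the cell's total transcript count (sequencing depth)
depth_normalized = counts / counts.sum()

# 2) Normalize by each gene's nonzero median expression across the corpus
corpus_normalized = depth_normalized / nonzero_medians

# 3) Rank the detected genes by normalized expression, highest first
detected = counts > 0
order = np.argsort(-corpus_normalized[detected])
rank_value_encoding = genes[detected][order]

print(rank_value_encoding)  # highly expressed but ubiquitous ENSG_A drops in rank; distinctive ENSG_E moves up
```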
### Data Fields
- `input_ids`: rank value encoding for an example cell
- `lengths`: length of rank value encoding for that example cell
### Data Splits
The dataset does not contain any predefined splits.
## Dataset Creation
### Curation Rationale
Mapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types. Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
### Source Data
#### Initial Data Collection and Normalization
Source data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Methods of Theodoris et al, Nature 2023. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with less than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings as discussed above in [Data Instances](#data-instances).
#### Who are the source data producers?
Publicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, Refine.bio, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Methods of Theodoris et al, Nature 2023.
### Annotations
#### Annotation process
Genecorpus-30M does not contain annotations.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
There is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included.
## Considerations for Using the Data
### Social Impact of Dataset
Genecorpus-30M enabled the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a foundation model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy. Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents a pretrained foundation model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets.
### Discussion of Biases
We excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpuses consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpuses may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data.
### Other Known Limitations
Genecorpus-30M was intended to be used for self-supervised pretraining. To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand.
## Additional Information
### Dataset Curators
Christina Theodoris, MD, PhD
### Citation Information
Theodoris CV*, Xiao L, Chopra A, Chaffin MD, Al Sayed ZR, Hill MC, Mantineo H, Brydon EM, Zeng Z, Liu XS, Ellinor PT*. Transfer learning enables predictions in network biology. Nature. 2023 May 31; Epub ahead of print.
(*co-corresponding authors)
<!--- ### Licensing Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
---> | [
-0.3533567488193512,
-0.24831682443618774,
-0.008777426555752754,
0.03831348940730095,
-0.17470845580101013,
0.2900856137275696,
0.02034846320748329,
-0.11376063525676727,
0.5938975214958191,
0.5649240612983704,
-0.6230807900428772,
-0.6726831793785095,
-0.5276152491569519,
0.3307257294654... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HugoLaurencon/libri_light | HugoLaurencon | 2022-05-10T15:51:37Z | 55 | 2 | null | [
"region:us"
] | 2022-05-10T15:51:37Z | 2022-05-09T14:31:34.000Z | 2022-05-09T14:31:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mounikaiiith/Telugu_Sentiment | mounikaiiith | 2022-07-04T15:05:31Z | 55 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-07-04T15:05:31Z | 2022-06-19T12:06:15.000Z | 2022-06-19T12:06:15 | ---
license: cc-by-4.0
---
Do cite the below reference for using the dataset:
@article{marreddy2022resource,
title={Am I a Resource-Poor Language? Data Sets, Embeddings, Models and Analysis for four different NLP tasks in Telugu Language},
author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika},
journal={Transactions on Asian and Low-Resource Language Information Processing},
publisher={ACM New York, NY}
}
If you want to use the two classes (positive and negative) from the dataset, do cite the below reference:
@article{marreddy2022multi,
title={Multi-Task Text Classification using Graph Convolutional Networks for Large-Scale Low Resource Language},
author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika},
journal={arXiv preprint arXiv:2205.01204},
year={2022}
}
| [
-0.25279301404953003,
-0.3938676714897156,
-0.07206372916698456,
0.2779061496257782,
-0.2257867157459259,
-0.1965554654598236,
-0.27604085206985474,
-0.24515925347805023,
0.2719566226005554,
0.44860509037971497,
-0.12452983856201172,
-0.2527170777320862,
-0.4610893428325653,
0.448268890380... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/tmvar_v1 | bigbio | 2022-12-22T15:47:01Z | 55 | 0 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:47:01Z | 2022-11-13T22:12:28.000Z | 2022-11-13T22:12:28 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: tmVar v1
homepage: https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for tmVar v1
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds. It can be used for NER tasks only.
The dataset is split into train (334 articles) and test (166 articles) splits.
## Citation Information
```
@article{wei2013tmvar,
title={tmVar: a text mining approach for extracting sequence variants in biomedical literature},
author={Wei, Chih-Hsuan and Harris, Bethany R and Kao, Hung-Yu and Lu, Zhiyong},
journal={Bioinformatics},
volume={29},
number={11},
pages={1433--1439},
year={2013},
publisher={Oxford University Press}
}
```
| [
-0.11688082665205002,
-0.35596963763237,
0.38488277792930603,
0.02781994454562664,
-0.4998117983341217,
-0.038144223392009735,
0.16607166826725006,
-0.12922975420951843,
0.3295217454433441,
0.7293922901153564,
-0.6967202425003052,
-0.9656850099563599,
-0.7977056503295898,
0.679791033267974... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matchbench/Abt-Buy | matchbench | 2022-11-16T09:03:32Z | 55 | 0 | null | [
"region:us"
] | 2022-11-16T09:03:32Z | 2022-11-16T08:53:16.000Z | 2022-11-16T08:53:16 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ramos-Ramos/smallnorb | Ramos-Ramos | 2022-12-15T08:30:22Z | 55 | 0 | null | [
"region:us"
] | 2022-12-15T08:30:22Z | 2022-12-15T07:29:28.000Z | 2022-12-15T07:29:28 | ---
dataset_info:
features:
- name: image_lt
dtype: image
- name: image_rt
dtype: image
- name: category
dtype: int32
- name: instance
dtype: int32
- name: elevation
dtype: int32
- name: azimuth
dtype: int32
- name: lighting
dtype: int32
splits:
- name: train
num_bytes: 117947794.0
num_examples: 24300
- name: test
num_bytes: 118130266.0
num_examples: 24300
download_size: 236815224
dataset_size: 236078060.0
---
# Dataset Card for "smallnorb"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
**NOTE:** This dataset is an unofficial port of small NORB based on a [repo from Andrea Palazzi](https://github.com/ndrplz/small_norb) using this [script](https://colab.research.google.com/drive/1Tx20uP1PrnyarsNCWf1dN9EQyr38BDIE?usp=sharing). For complete and accurate information, we highly recommend visiting the dataset's original homepage.
- **Homepage:** https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/
- **Paper:** https://ieeexplore.ieee.org/document/1315150
### Dataset Summary
From the dataset's [homepage](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/):
> This database is intended for experiments in 3D object recognition from shape. It contains images of 50 toys belonging to 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. The objects were imaged by two cameras under 6 lighting conditions, 9 elevations (30 to 70 degrees every 5 degrees), and 18 azimuths (0 to 340 every 20 degrees).
>
> The training set is composed of 5 instances of each category (instances 4, 6, 7, 8 and 9), and the test set of the remaining 5 instances (instances 0, 1, 2, 3, and 5).
## Dataset Structure
### Data Instances
An example of an instance in this dataset:
```
{
'image_lt': <PIL.PngImagePlugin.PngImageFile image mode=L size=96x96 at 0x...>,
'image_rt': <PIL.PngImagePlugin.PngImageFile image mode=L size=96x96 at 0x...>,
'category': 0,
'instance': 8,
'elevation': 6,
'azimuth': 4,
'lighting': 4
}
```
### Data Fields
Explanation of this dataset's fields:
- `image_lt`: a PIL image of an object from the dataset taken with one of two cameras
- `image_rt`: a PIL image of an object from the dataset taken with one of two cameras
- `category`: the category of the object shown in the images
- `instance`: the instance of the category of the object shown in the images
- `elevation`: the label of the elevation of the cameras used in capturing a picture of the object
- `azimuth`: the label of the azimuth of the cameras used in capturing a picture of the object
- `lighting`: the label of the lighting condition used in capturing a picture of the object
For more information on what these categories and labels pertain to, please see [Dataset Summary](#dataset-summary) or the [repo](https://github.com/ndrplz/small_norb) used in processing the dataset.
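A minimal loading sketch for accessing these fields (assuming the dataset is reachable under this repository id):
```python
from datasets import load_dataset

dataset = load_dataset("Ramos-Ramos/smallnorb")

sample = dataset["train"][0]

# The two stereo views are PIL images; the remaining fields are integer labels
image_left, image_right = sample["image_lt"], sample["image_rt"]
print(image_left.size)  # (96, 96)
print(sample["category"], sample["instance"], sample["elevation"],
      sample["azimuth"], sample["lighting"])
```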
### Data Splits
Information on this dataset's splits:
| | train | test |
|------|------:|------:|
| size | 24300 | 24300 |
## Additional Information
### Dataset Curators
Credits from the dataset's [homepage](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/):
> [Fu Jie Huang](http://www.cs.nyu.edu/jhuangfu/), [Yann LeCun](http://yann.lecun.com/)
>
> Courant Institute, New York University
>
> October, 2005
### Licensing Information
From the dataset's [homepage](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/):
> This database is provided for research purposes. It cannot be sold. Publications that include results obtained with this database should reference the following paper:
>
> Y. LeCun, F.J. Huang, L. Bottou, Learning Methods for Generic Object Recognition with Invariance to Pose and Lighting. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2004
### Citation Information
From the dataset's [homepage](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/):
> Publications that include results obtained with this database should reference the following paper:
>
> Y. LeCun, F.J. Huang, L. Bottou, Learning Methods for Generic Object Recognition with Invariance to Pose and Lighting. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2004
```
@inproceedings{lecun2004learning,
title={Learning methods for generic object recognition with invariance to pose and lighting},
author={LeCun, Yann and Huang, Fu Jie and Bottou, Leon},
booktitle={Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.},
volume={2},
pages={II--104},
year={2004},
organization={IEEE}
}
```
DOI: [10.1109/CVPR.2004.1315150](https://doi.org/10.1109/CVPR.2004.1315150)
### Contributions
Code to process small NORB adapted from [Andrea Palazzi's repo](https://github.com/ndrplz/small_norb) with this [script](https://colab.research.google.com/drive/1Tx20uP1PrnyarsNCWf1dN9EQyr38BDIE?usp=sharing). | [
-0.41501328349113464,
-0.3535352051258087,
0.13779570162296295,
-0.31496188044548035,
-0.27121007442474365,
-0.3959081768989563,
-0.02476627752184868,
-0.6389668583869934,
0.08408788591623306,
0.3354296386241913,
-0.5578839182853699,
-0.7311679124832153,
-0.1985570639371872,
0.156824365258... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qag_jaquad | lmqg | 2022-12-18T07:54:08Z | 55 | 0 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_jaquad",
"language:ja",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-12-18T07:54:08Z | 2022-12-18T07:05:33.000Z | 2022-12-18T07:05:33 | ---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: ja
multilinguality: monolingual
size_categories: 1k<n<10K
source_datasets: lmqg/qg_jaquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qag_jaquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on JaQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Japanese (ja)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""Nerdilinga"は898年にカロリング朝の王領として初めて文献に記録されている。レーゲンスブルク司教の統治下でネルトリンゲンは市場町に成長していった。1215年にネルトリンゲンは皇帝フリードリヒ2世から都市権を与えられ、帝国自由都市となった。この年に最初の市壁が築かれた。その縄張りは現在も街の地図に見て取れる。1219年、ネルトリンゲンの聖霊降臨祭についての最も古い文献上の記録が遺されている。重要な交易路が交差するこの都市は穀物、家畜、織物、毛皮、金属製品の主要な集散地に発展していった。ネルトリンゲンはフランクフルトと並ぶドイツで最も重要な遠距離交易都市の一つとなったのである。",
"questions": [ "1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。", "\"Nerdilinga\"の最初の記録は何年のものですか。" ],
"answers": [ "皇帝フリードリヒ2世", "898年" ],
"questions_answers": "question: 1215年にネルトリンゲンは誰から都市権を与えられ、帝国自由都市となったか。, answer: 皇帝フリードリヒ2世 | question: "Nerdilinga"の最初の記録は何年のものですか。, answer: 898年"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
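A short loading sketch that iterates over the parallel `questions`/`answers` lists (assuming the default configuration of this repository):
```python
from datasets import load_dataset

dataset = load_dataset("lmqg/qag_jaquad")

example = dataset["train"][0]
print(example["paragraph"])

# `questions` and `answers` are parallel lists for the same paragraph
for question, answer in zip(example["questions"], example["answers"]):
    print(f"question: {question}\nanswer: {answer}")
```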
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9508| 1431 | 3050|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.6835241317749023,
-1.0919816493988037,
0.35167554020881653,
0.045177772641181946,
-0.4069981873035431,
-0.18848682940006256,
-0.2116755247116089,
-0.20145605504512787,
0.40112435817718506,
0.47832390666007996,
-0.7757771611213684,
-0.5573024153709412,
-0.279087096452713,
0.1647585034370... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
etrent17/irs-articles | etrent17 | 2023-01-21T19:42:50Z | 55 | 1 | null | [
"license:mit",
"region:us"
] | 2023-01-21T19:42:50Z | 2023-01-21T19:42:17.000Z | 2023-01-21T19:42:17 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cRick/NL-to-LTL-Synthetic-Dataset | cRick | 2023-03-18T11:11:15Z | 55 | 0 | null | [
"region:us"
] | 2023-03-18T11:11:15Z | 2023-03-18T11:04:18.000Z | 2023-03-18T11:04:18 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuncongli/chat-sentiment-analysis | yuncongli | 2023-04-08T04:19:48Z | 55 | 2 | null | [
"language:en",
"license:mit",
"sentiment",
"aspect-based sentiment analysis",
"Aspect Term Extraction",
"Opinion Term Extraction",
"Aspect Term-Opinion Term Pair Extraction",
"Aspect term, Sentiment, Opinion term Triplet Extraction",
"Aspect Category Detection",
"Aspect Category-Sentiment Pair Ext... | 2023-04-08T04:19:48Z | 2023-04-08T04:04:08.000Z | 2023-04-08T04:04:08 | ---
license: mit
language:
- en
tags:
- sentiment
- aspect-based sentiment analysis
- Aspect Term Extraction
- Opinion Term Extraction
- Aspect Term-Opinion Term Pair Extraction
- Aspect term, Sentiment, Opinion term Triplet Extraction
- Aspect Category Detection
- Aspect Category-Sentiment Pair Extraction
- Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction
- Holder, Target, Opinion, Sentiment (HTOS) Quadruple Extraction
- sentiment analysis
---
# A Sentiment Analysis Dataset for Finetuning Large Models in Chat-style
More details can be found at https://github.com/l294265421/chat-sentiment-analysis
## Supported Tasks
- Aspect Term Extraction (ATE)
- Opinion Term Extraction (OTE)
- Aspect Term-Opinion Term Pair Extraction (AOPE)
- Aspect term, Sentiment, Opinion term Triplet Extraction (ASOTE)
- Aspect Category Detection (ACD)
- Aspect Category-Sentiment Pair Extraction (ACSA)
- [Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction](https://github.com/NUSTM/ACOS)
- [Holder, Target, Opinion, Sentiment (HTOS) Quadruple Extraction](https://github.com/jerbarnes/semeval22_structured_sentiment)
| [
-0.5711650252342224,
-0.8146602511405945,
0.28931567072868347,
0.5554507374763489,
-0.32405343651771545,
0.08780241012573242,
-0.1932968646287918,
-0.1168261170387268,
0.6218637824058533,
0.7367198467254639,
-0.8886163830757141,
-0.8254641890525818,
-0.473848819732666,
-0.01807712204754352... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/ami-ihm | distil-whisper | 2023-09-25T10:30:14Z | 55 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-09-25T10:30:14Z | 2023-04-10T12:57:58.000Z | 2023-04-10T12:57:58 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: AMI IHM
---
# Distil Whisper: AMI IHM
This is a variant of the [AMI IHM](https://huggingface.co/datasets/edinburghcstr/ami) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/edinburghcstr/ami).
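As a rough illustration of how such pseudo-labels could be produced (not the exact pipeline used for this release; the `audio` column name and the 🤗 Transformers ASR pipeline usage are assumptions):
```python
from datasets import load_dataset
from transformers import pipeline

# Whisper generates greedily by default (num_beams=1, no sampling)
asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2")

dataset = load_dataset("distil-whisper/ami-ihm", "ihm", split="validation", streaming=True)
sample = next(iter(dataset))

prediction = asr(
    {"raw": sample["audio"]["array"], "sampling_rate": sample["audio"]["sampling_rate"]}
)
print(prediction["text"])
```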
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-ihm", "ihm")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-ihm", "ihm", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
| [
-0.21760469675064087,
-0.5775007605552673,
0.17530860006809235,
0.4062367081642151,
-0.22946025431156158,
0.06184310466051102,
-0.09008686244487762,
-0.22280025482177734,
0.37083497643470764,
0.43395373225212097,
-0.8630781173706055,
-0.46388867497444153,
-0.6760497689247131,
0.14838851988... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lighteval/pile | lighteval | 2023-04-26T06:27:38Z | 55 | 0 | null | [
"region:us"
] | 2023-04-26T06:27:38Z | 2023-04-26T06:26:43.000Z | 2023-04-26T06:26:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai-forever/spellcheck_benchmark | ai-forever | 2023-10-04T16:13:44Z | 55 | 2 | null | [
"task_categories:text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<20k",
"language:ru",
"license:mit",
"spellcheck",
"russian",
"arxiv:2308.09435",
"region:us"
] | 2023-10-04T16:13:44Z | 2023-04-28T09:49:40.000Z | 2023-04-28T09:49:40 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ru
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<20k
task_categories:
- text-generation
pretty_name: Russian Spellcheck Benchmark
language_bcp47:
- ru-RU
tags:
- spellcheck
- russian
---
# Dataset Card for Russian Spellcheck Benchmark
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [SAGE](https://github.com/ai-forever/sage)
- **Paper:** [arXiv:2308.09435](https://arxiv.org/abs/2308.09435)
- **Point of Contact:** nikita.martynov.98@list.ru
### Dataset Summary
Spellcheck Benchmark includes four datasets, each of which consists of pairs of sentences in Russian language.
Each pair embodies sentence, which may contain spelling errors, and its corresponding correction.
Datasets were gathered from various sources and domains including social networks, internet blogs, github commits, medical anamnesis, literature, news, reviews and more.
All datasets were passed through two-stage manual labeling pipeline.
The correction of a sentence is defined by an agreement of at least two human annotators.
The manual labeling scheme accounts for jargonisms, collocations and common language; hence, in some cases it encourages annotators not to amend a word in favor of preserving the style of a text.
### Supported Tasks and Leaderboards
- **Task:** automatic spelling correction.
- **Metrics:** https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf.
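As a minimal sketch of how the benchmark could be loaded and probed (it assumes each subset is exposed as a config named after the dataset, which this card does not state explicitly):
```python
from datasets import load_dataset

# Assumed config name "RUSpellRU"; the other subsets would be loaded analogously
ruspellru = load_dataset("ai-forever/spellcheck_benchmark", "RUSpellRU", split="test")

# Trivial baseline: count sentences that already need no correction
unchanged = sum(row["source"] == row["correction"] for row in ruspellru)
print(f"{unchanged}/{len(ruspellru)} test sentences are already correct")
```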
### Languages
Russian.
## Dataset Structure
### Data Instances
#### RUSpellRU
- **Size of downloaded dataset files:** 3.64 Mb
- **Size of the generated dataset:** 1.29 Mb
- **Total amount of disk used:** 4.93 Mb
An example of "train" / "test" looks as follows
```
{
"source": "очень классная тетка ктобы что не говорил.",
"correction": "очень классная тетка кто бы что ни говорил",
}
```
#### MultidomainGold
- **Size of downloaded dataset files:** 15.05 Mb
- **Size of the generated dataset:** 5.43 Mb
- **Total amount of disk used:** 20.48 Mb
An example of "test" looks as follows
```
{
"source": "Ну что могу сказать... Я заказала 2 вязанных платья: за 1000 руб (у др продавца) и это ща 1200. Это платье- голимая синтетика (в том платье в составе была шерсть). Это платье как очень плохая резинка. На свои параметры (83-60-85) я заказала С . Пока одевала/снимала - оно в горловине растянулось. Помимо этого в этом платье я выгляжу ну очень тоской. У меня вес 43 кг на 165 см роста. Кстати, продавец отправлял платье очень долго. Я пыталась отказаться от заказа, но он постоянно отклонял мой запрос. В общем не советую.",
"correction": "Ну что могу сказать... Я заказала 2 вязаных платья: за 1000 руб (у др продавца) и это ща 1200. Это платье- голимая синтетика (в том платье в составе была шерсть). Это платье как очень плохая резинка. На свои параметры (83-60-85) я заказала С . Пока надевала/снимала - оно в горловине растянулось. Помимо этого в этом платье я выгляжу ну очень доской. У меня вес 43 кг на 165 см роста. Кстати, продавец отправлял платье очень долго. Я пыталась отказаться от заказа, но он постоянно отклонял мой запрос. В общем не советую.",
"domain": "reviews",
}
```
#### MedSpellcheck
- **Size of downloaded dataset files:** 1.49 Mb
- **Size of the generated dataset:** 0.54 Mb
- **Total amount of disk used:** 2.03 Mb
An example of "test" looks as follows
```
{
"source": "Кровотечения, поерации в анамнезе отрицает",
"correction": "Кровотечения, операции в анамнезе отрицает",
}
```
#### GitHubTypoCorpusRu
- **Size of downloaded dataset files:** 1.23 Mb
- **Size of the generated dataset:** 0.48 Mb
- **Total amount of disk used:** 1.71 Mb
An example of "test" looks as follows
```
{
"source": "## Запросы и ответа содержат заголовки",
"correction": "## Запросы и ответы содержат заголовки",
}
```
### Data Fields
#### RUSpellRU
- `source`: a `string` feature
- `correction`: a `string` feature
- `domain`: a `string` feature
#### MultidomainGold
- `source`: a `string` feature
- `correction`: a `string` feature
- `domain`: a `string` feature
#### MedSpellcheck
- `source`: a `string` feature
- `correction`: a `string` feature
- `domain`: a `string` feature
#### GitHubTypoCorpusRu
- `source`: a `string` feature
- `correction`: a `string` feature
- `domain`: a `string` feature
### Data Splits
#### RUSpellRU
| |train|test|
|---|---:|---:|
|RUSpellRU|2000|2008|
#### MultidomainGold
| |train|test|
|---|---:|---:|
|web|386|756|
|news|361|245|
|social_media|430|200|
|reviews|584|586|
|subtitles|1810|1810|
|strategic_documents|-|250|
|literature|-|260|
#### MedSpellcheck
| |test|
|---|---:|
|MedSpellcheck|1054|
#### GitHubTypoCorpusRu
| |test|
|---|---:|
|GitHubTypoCorpusRu|868|
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The datasets are chosen in accordance with the specified criteria.
First, domain variation: half of the datasets are chosen from different domains to ensure diversity, while the remaining half are from a single domain.
Another criterion is the restriction to spelling (orthographic) mistakes:
the datasets exclusively comprise mistypings, omitting grammatical or more complex errors made by non-native speakers.
- **RUSpellRU**: texts collected from ([LiveJournal](https://www.livejournal.com/media)), with manually corrected typos and errors;
- **MultidomainGold**: examples from several text sources including the open web, news, social media, reviews, subtitles, policy documents and literary works were collected:
*Aranea web-corpus* is a family of multilanguage gigaword web-corpora collected from Internet resources. The texts in the corpora are evenly distributed across periods, writing styles and topics they cover. We randomly picked the sentences from Araneum Russicum, which is harvested from the Russian part of the web.
*Literature* is a collection of Russian poems and prose of different classical literary works. We randomly picked sentences from the source dataset that were gathered from Ilibrary, LitLib, and Wikisource.
*News*, as the name suggests, covers news articles on various topics such as sports, politics, environment, economy etc. The passages are randomly picked from the summarization dataset Gazeta.ru.
*Social media* is the text domain from social media platforms marked with specific hashtags. These texts are typically short, written in an informal style and may contain slang, emojis and obscene lexis.
*Strategic Documents* is part of the dataset the Ministry of Economic Development of the Russian Federation collected. Texts are written in a bureaucratic manner, rich in embedded entities, and have complex syntactic and discourse structures. The full version of the dataset has been previously used in the RuREBus shared task.
- **MedSpellChecker**: texts with errors from medical anamnesis;
- **GitHubTypoCorpusRu**: spelling errors and typos in commits from [GitHub](https://github.com);
### Annotations
#### Annotation process
We set up a two-stage annotation project via the crowd-sourcing platform Toloka:
1. Data gathering stage: we provide the texts with possible mistakes to annotators and ask them to write the sentence correctly;
2. Validation stage: we provide annotators with the pair of sentences (source and its corresponding correction from the previous stage) and ask them to check if the correction is right.
We prepared instructions for annotators for each task. The instructions ask annotators to correct misspellings if it does not alter the original style of the text.
Instructions do not provide rigorous criteria on the matter of distinguishing the nature of an error in terms of its origin - whether it came from an urge to endow a sentence with particular stylistic features or from unintentional spelling violation since it is time-consuming and laborious to describe every possible case of employing slang, dialect, colloquialisms, etc. instead of proper language. Instructions also do not distinguish errors that come from the geographical or social background of the source. Instead, we rely on annotators’ knowledge and understanding of a language since, in this work, the important factor is to preserve the original style of the text.
To ensure we receive qualified expertise, we set up test iteration on a small subset of the data for both stages. We manually validated the test results and selected annotators, who processed at least six samples (2% of the total test iteration) and did not make a single error. After test iteration, we cut 85% and 86% of labellers for gathering and validation stages.
We especially urge annotators to correct mistakes associated with the substitution of the letters "ё" "й" and "щ" for corresponding "е" "и" and "ш" and not to explain abbreviations and correct punctuation errors. Each annotator is also warned about potentially sensitive topics in data (e.g., politics, societal minorities, and religion).
#### Who are the annotators?
Native Russian speakers who passed the language exam.
## Considerations for Using the Data
### Discussion of Biases
We clearly state our work’s aims and implications, making it open source and transparent. The data will be available under a public license. As our research involved anonymized textual data, informed consent from human participants was not required. However, we obtained permission to access publicly available datasets and ensured compliance with any applicable terms of service or usage policies.
### Other Known Limitations
The data used in our research may be limited to specific domains, preventing comprehensive coverage of all possible text variations. Despite these limitations, we tried to address the issue of data diversity by incorporating single-domain and multi-domain datasets in the proposed research. This approach allowed us to shed light on the diversity and variances within the data, providing valuable insights despite the inherent constraints.
We primarily focus on the Russian language. Further research is needed to expand the datasets for a wider range of languages.
## Additional Information
### Future plans
We are planning to expand our benchmark with both new Russian datasets and datasets in other languages including (but not limited to) European and CIS languages.
If you would like to contribute, please contact us.
### Dataset Curators
Nikita Martynov nikita.martynov.98@list.ru
### Licensing Information
All our datasets are published by MIT License.
### Citation Information
```
@inproceedings{martynov2023augmentation,
title={Augmentation methods for spelling corruptions},
author={Martynov, Nikita and Baushenko, Mark and Abramov, Alexander and Fenogenova, Alena},
booktitle={Proceedings of the International Conference “Dialogue},
volume={2023},
year={2023}
}
@misc{martynov2023methodology,
title={A Methodology for Generative Spelling Correction
via Natural Spelling Errors Emulation across Multiple Domains and Languages},
author={Nikita Martynov and Mark Baushenko and Anastasia Kozlova and
Katerina Kolomeytseva and Aleksandr Abramov and Alena Fenogenova},
year={2023},
eprint={2308.09435},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.3529849350452423,
-0.5035313963890076,
0.19568777084350586,
-0.03928481042385101,
-0.22600658237934113,
-0.08529694378376007,
-0.22713886201381683,
-0.4221895635128021,
0.546633780002594,
0.37180110812187195,
-0.7062727808952332,
-0.90085369348526,
-0.546663224697113,
0.4207160770893097... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LEAP/ClimSim_low-res | LEAP | 2023-09-29T20:31:55Z | 55 | 1 | null | [
"license:cc-by-4.0",
"arxiv:2306.08754",
"doi:10.57967/hf/0740",
"region:us"
] | 2023-09-29T20:31:55Z | 2023-05-03T04:51:53.000Z | 2023-05-03T04:51:53 | ---
license: cc-by-4.0
---
Corresponding GitHub repo can be found here:
https://github.com/leap-stc/ClimSim
Read more: https://arxiv.org/abs/2306.08754. | [
-0.49635207653045654,
-0.19715553522109985,
0.40936484932899475,
0.19352662563323975,
-0.23208875954151154,
-0.23502826690673828,
0.035419005900621414,
-0.34497591853141785,
0.5172688961029053,
0.6393495202064514,
-0.6135981678962708,
-0.6661348938941956,
-0.5961029529571533,
-0.1636127233... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
agomberto/FrenchCensus-handwritten-texts | agomberto | 2023-11-28T17:35:18Z | 55 | 1 | null | [
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:fr",
"license:mit",
"imate-to-text",
"trocr",
"region:us"
] | 2023-11-28T17:35:18Z | 2023-05-09T11:21:00.000Z | 2023-05-09T11:21:00 | ---
language:
- fr
license: mit
size_categories:
- 1K<n<10K
task_categories:
- image-to-text
tags:
- imate-to-text
- trocr
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 501750699.816
num_examples: 5601
- name: validation
num_bytes: 45084242.0
num_examples: 707
- name: test
num_bytes: 49133043.0
num_examples: 734
download_size: 459795745
dataset_size: 595967984.816
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
## Source
This repository contains 3 datasets created within the POPP project ([Project for the Oceration of the Paris Population Census](https://popp.hypotheses.org/#ancre2)) for the task of handwriting text recognition. These datasets have been published in [Recognition and information extraction in historical handwritten tables: toward understanding early 20th century Paris census at DAS 2022](https://link.springer.com/chapter/10.1007/978-3-031-06555-2_10).
The 3 datasets are called “Generic dataset”, “Belleville”, and “Chaussée d’Antin” and contains lines made from the extracted rows of census tables from 1926. Each table in the Paris census contains 30 rows, thus each page in these datasets corresponds to 30 lines.
We publish here only the lines. If you want the pages, go [here](https://zenodo.org/record/6581158). This dataset is made of 4800 annotated lines extracted from 80 double pages of the 1926 Paris census.
## Data Info
Since the lines are extracted from table rows, we defined 4 special characters to describe the structure of the text:
- ¤ : indicates an empty cell
- / : indicates the separation into columns
- ? : indicates that the content of the cell following this symbol is written above the regular baseline
- ! : indicates that the content of the cell following this symbol is written below the regular baseline
There are three splits: train, validation, and test.
## How to use it
```python
from datasets import load_dataset
import numpy as np

dataset = load_dataset("agomberto/FrenchCensus-handwritten-texts")

# Pick a random example from the training split
i = np.random.randint(len(dataset["train"]))
example = dataset["train"][i]

img = example["image"]   # image of the handwritten line
text = example["text"]   # transcription with the structure markers described above

print(text)
img
```
## BibTeX entry and citation info
```bibtex
@InProceedings{10.1007/978-3-031-06555-2_10,
author="Constum, Thomas
and Kempf, Nicolas
and Paquet, Thierry
and Tranouez, Pierrick
and Chatelain, Cl{\'e}ment
and Br{\'e}e, Sandra
and Merveille, Fran{\c{c}}ois",
editor="Uchida, Seiichi
and Barney, Elisa
and Eglin, V{\'e}ronique",
title="Recognition and Information Extraction in Historical Handwritten Tables: Toward Understanding Early {\$}{\$}20^{\{}th{\}}{\$}{\$}Century Paris Census",
booktitle="Document Analysis Systems",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="143--157",
abstract="We aim to build a vast database (up to 9 million individuals) from the handwritten tabular nominal census of Paris of 1926, 1931 and 1936, each composed of about 100,000 handwritten simple pages in a tabular format. We created a complete pipeline that goes from the scan of double pages to text prediction while minimizing the need for segmentation labels. We describe how weighted finite state transducers, writer specialization and self-training further improved our results. We also introduce through this communication two annotated datasets for handwriting recognition that are now publicly available, and an open-source toolkit to apply WFST on CTC lattices.",
isbn="978-3-031-06555-2"
}
``` | [
-0.11982173472642899,
-0.3390210270881653,
0.5872538089752197,
-0.1355171650648117,
-0.1480945348739624,
-0.10066567361354828,
-0.04419444501399994,
-0.5735475420951843,
0.2206965535879135,
0.8770132064819336,
-0.32014936208724976,
-0.6091263294219971,
-0.5929353833198547,
0.24560251832008... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hemanth-thunder/en_ta | Hemanth-thunder | 2023-08-12T06:58:11Z | 55 | 2 | null | [
"size_categories:10K<n<100K",
"language:ta",
"language:en",
"license:mit",
"region:us"
] | 2023-08-12T06:58:11Z | 2023-05-16T07:45:59.000Z | 2023-05-16T07:45:59 | ---
license: mit
language:
- ta
- en
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat | Fredithefish | 2023-05-17T11:31:57Z | 55 | 1 | null | [
"task_categories:question-answering",
"language:en",
"license:cc",
"region:us"
] | 2023-05-17T11:31:57Z | 2023-05-16T14:12:28.000Z | 2023-05-16T14:12:28 | ---
license: cc
task_categories:
- question-answering
language:
- en
---
# Instruction Tuning with GPT-4 RedPajama-Chat
This dataset has been converted from the <a href="https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM" target="_new">Instruction-Tuning-with-GPT-4</a> dataset for the purpose of fine-tuning the <a href="https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1" target="_new">RedPajama-INCITE-Chat-3B-v1</a> model.
## About Instruction-Tuning-with-GPT-4
English Instruction-Following Data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
### Usage and License Notices
The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
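
To inspect the converted data, it can be loaded with the `datasets` library. The sketch below is deliberately schema-agnostic: the available splits and column names are not documented here, so it prints them instead of assuming them.

```python
# Minimal sketch: load the converted GPT-4 instruction data and look at one record.
# Split and column names are not assumed — inspect the loaded dataset to confirm the schema.
from datasets import load_dataset

dataset = load_dataset("Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat")
print(dataset)                        # shows the available splits and columns

first_split = list(dataset.keys())[0]
print(dataset[first_split][0])        # one prompt/response record
```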
| [
-0.39169588685035706,
-1.0773141384124756,
0.4075392484664917,
0.4220479428768158,
-0.5203211307525635,
-0.2685585916042328,
-0.30724379420280457,
-0.5044820308685303,
0.2405276745557785,
0.6253586411476135,
-1.1101553440093994,
-0.8232482671737671,
-0.6110675930976868,
0.04978420585393905... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yangwang825/reuters-21578 | yangwang825 | 2023-05-19T02:04:58Z | 55 | 0 | null | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2023-05-19T02:04:58Z | 2023-05-17T14:25:37.000Z | 2023-05-17T14:25:37 | ---
task_categories:
- text-classification
language:
- en
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': acq
'1': crude
'2': earn
'3': grain
'4': interest
'5': money-fx
'6': ship
'7': trade
---
`yangwang825/reuters-21578` is an 8-class subset of the Reuters-21578 news dataset, covering the eight topic labels listed in the schema above.
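
For a quick sanity check, the integer labels can be decoded back to topic names through the `ClassLabel` feature declared above. This is a minimal sketch; the `train` split name is an assumption, so print the loaded `DatasetDict` to confirm which splits exist.

```python
# Minimal sketch: load the dataset and decode integer labels to topic names.
# The split name "train" is an assumption — print the DatasetDict to confirm.
from datasets import load_dataset

dataset = load_dataset("yangwang825/reuters-21578")
print(dataset)

train = dataset["train"]                 # assumed split name
label_feature = train.features["label"]  # ClassLabel with the 8 topics

example = train[0]
print(example["text"][:200])
print(label_feature.int2str(example["label"]))  # e.g. "earn" or "acq"
```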
| [
-0.22293904423713684,
0.12625986337661743,
0.10133089125156403,
0.28308162093162537,
-0.06392564624547958,
-0.04178020730614662,
0.2892269492149353,
-0.17308491468429565,
0.43364617228507996,
0.8734237551689148,
-0.506523847579956,
-0.28940314054489136,
-0.4942188858985901,
0.4171426892280... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucadiliello/wikiqa_grouped | lucadiliello | 2023-05-30T08:14:53Z | 55 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-05-30T08:14:53Z | 2023-05-30T08:12:28.000Z | 2023-05-30T08:12:28 | ---
task_categories:
- text-classification
language:
- en
pretty_name: WikiQA
size_categories:
- 1K<n<10K
---
The WikiQA answer-selection dataset with the candidate answers grouped together per question, so each example corresponds to a single question and its grouped answers. | [
-0.7191609740257263,
-0.5256795287132263,
-0.044536154717206955,
-0.2590060234069824,
0.18784382939338684,
-0.14639335870742798,
0.3899795413017273,
0.20936717092990875,
0.5815765261650085,
0.7645736336708069,
-0.7757495641708374,
-0.376956045627594,
-0.1252814680337906,
0.4234457910060882... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sukantan/nyaya-acts | sukantan | 2023-06-08T06:56:32Z | 55 | 0 | null | [
"region:us"
] | 2023-06-08T06:56:32Z | 2023-06-04T11:58:07.000Z | 2023-06-04T11:58:07 | ---
dataset_info:
features:
- name: section_href
dtype: string
- name: act_enactment_date
dtype: timestamp[s]
- name: act_short_title
dtype: string
- name: act_long_title
dtype: string
- name: act_id
dtype: string
- name: ministry
dtype: string
- name: section_number
dtype: string
- name: section_title
dtype: string
- name: section_content
dtype: string
- name: section_part_no
dtype: string
- name: section_part
dtype: string
splits:
- name: train
num_bytes: 648190303
num_examples: 53788
download_size: 35939204
dataset_size: 648190303
---
# Dataset Card for "nyaya-acts"
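
The `dataset_info` block above declares one record per act section and a single `train` split. As a minimal sketch, the fields can be inspected like this (only field names declared above are used):

```python
# Minimal sketch: load the acts dataset and inspect one section record.
from datasets import load_dataset

dataset = load_dataset("sukantan/nyaya-acts", split="train")
record = dataset[0]

print(record["act_short_title"], "-", record["section_number"])
print(record["section_title"])
print(record["section_content"][:300])  # first few hundred characters of the section text
```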
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3301193416118622,
-0.36383211612701416,
0.0005013903719373047,
0.14928482472896576,
-0.2112956941127777,
-0.17075487971305847,
0.48590904474258423,
-0.2711142897605896,
1.1724718809127808,
0.5162906646728516,
-0.9073036313056946,
-0.724950909614563,
-0.631169319152832,
-0.35665705800056... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nicholasKluge/reward-aira-dataset | nicholasKluge | 2023-11-10T12:52:20Z | 55 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:pt",
"language:en",
"license:apache-2.0",
"reward model",
"instruction",
"alignment",
"region:us"
] | 2023-11-10T12:52:20Z | 2023-06-07T18:14:57.000Z | 2023-06-07T18:14:57 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- pt
- en
tags:
- reward model
- instruction
- alignment
pretty_name: Reward-Aira Dataset
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: instruction
dtype: string
- name: chosen_response
dtype: string
- name: rejected_response
dtype: string
splits:
- name: english
num_bytes: 53232958
num_examples: 32675
- name: portuguese
num_bytes: 59650447
num_examples: 32675
download_size: 65455319
dataset_size: 112883405
---
# Dataset (`Reward-Aira Dataset`)
### Overview
This dataset contains a collection of prompt + completion examples of an LLM following instructions in a conversational manner. Every prompt comes with two possible completions, one better than the other. The dataset is available in both Portuguese and English.
### Dataset Details
- **Dataset Name:** Reward-Aira Dataset
- **Language:** Portuguese and English
- **Total Size:** 32,675 pairs of demonstrations (in Portuguese and English)
### Contents
The dataset consists of data frames with the following columns:
- **Prompt (`instruction`):** The initial prompt provided to the model.
- **Completion (`chosen_response`):** The preferred completion to the prompt.
- **Completion (`rejected_response`):** A worse completion to the prompt, i.e. the one a reward model should rank lower.
```python
{
"instruction":"Why is AI Ethics important?",
"chosen_response": "The field of AI Ethics delves deeply into the intricate ethical considerations that arise with respect to AI systems. This includes the role of humanity in creating and deploying these systems, as well as the conduct of machines themselves. Broadly speaking, AI Ethics can be divided into two major categories : concerns surrounding the morality of human actions in relation to creating and using AI, and concerns regarding the moral implications of machine behavior.",
"rejected_response": "Who cares about AI Ethics? It's just a bunch of whining about humans making and using AI and bitching about what the machines do."
}
```
### Use Cases
`Reward-Aira Dataset` can be utilized to train a reward/preference model.
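
As a rough illustration, the columns map directly onto the (chosen, rejected) pairs expected by pairwise reward-model objectives. The sketch below only assembles text pairs from this dataset's columns; the prompt/response concatenation format is an illustrative choice, and the actual model, tokenizer, and loss are omitted.

```python
# Minimal sketch: turn the dataset into (chosen, rejected) text pairs for
# pairwise reward-model training. The concatenation format is an illustrative
# choice, not a prescribed template.
from datasets import load_dataset

dataset = load_dataset("nicholasKluge/reward-aira-dataset", split="english")

def to_pair(example):
    prompt = example["instruction"]
    return {
        "chosen": prompt + "\n" + example["chosen_response"],
        "rejected": prompt + "\n" + example["rejected_response"],
    }

pairs = dataset.map(to_pair, remove_columns=dataset.column_names)
print(pairs[0]["chosen"][:200])
```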
## How to use
Available splits are `portuguese` and `english`.
```python
from datasets import load_dataset

# Returns a DatasetDict with the "english" and "portuguese" splits
dataset = load_dataset("nicholasKluge/reward-aira-dataset")
```
### Disclaimer
This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original datasets before use. | [
-0.07842090725898743,
-0.4236580729484558,
0.2751758396625519,
0.24149596691131592,
-0.12055229395627975,
-0.23539277911186218,
-0.06981335580348969,
-0.24518492817878723,
0.28133317828178406,
0.5931194424629211,
-0.7592361569404602,
-0.26302048563957214,
-0.6157646775245667,
0.10394047945... | null | null | null | null | null | null | null | null | null | null | null | null | null |