Schema of the dump (column: type, value range):
id: string (length 2 to 115)
lastModified: string (length 24)
tags: list
author: string (length 2 to 42)
description: string (length 0 to 68.7k)
citation: string (length 0 to 10.7k)
cardData: null
likes: int64 (0 to 3.55k)
downloads: int64 (0 to 10.1M)
card: string (length 0 to 1.01M)
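When processing the dump programmatically, the schema above maps onto a simple record normalizer. A minimal sketch (the helper name and the defaulting rules are assumptions for illustration, not part of the dump):

```python
# Columns of the dump, as listed in the schema above.
COLUMNS = ["id", "lastModified", "tags", "author", "description",
           "citation", "cardData", "likes", "downloads", "card"]

def parse_record(raw: dict) -> dict:
    """Normalize one row of the dump: missing fields become None,
    counters default to 0, and tags default to an empty list."""
    rec = {col: raw.get(col) for col in COLUMNS}
    rec["likes"] = int(rec["likes"] or 0)
    rec["downloads"] = int(rec["downloads"] or 0)
    rec["tags"] = rec["tags"] or []
    return rec

row = parse_record({"id": "pythainlp/thaisum", "likes": "0",
                    "downloads": "8", "tags": ["region:us"]})
print(row["downloads"])  # 8
```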
usvsnsp/pile-semantic-memorization-filter-results
2023-09-19T18:56:42.000Z
[ "region:us" ]
usvsnsp
null
null
null
0
8
--- dataset_info: features: - name: sequence_id dtype: int64 - name: text dtype: string - name: sequence_duplicates dtype: int64 - name: max_frequency dtype: int64 - name: avg_frequency dtype: float64 - name: min_frequency dtype: int64 - name: median_frequency dtype: float64 - name: p25_frequency dtype: int64 - name: p75_frequency dtype: int64 - name: frequencies sequence: int64 - name: is_incrementing dtype: bool - name: tokens sequence: int64 - name: repeating_offset dtype: int32 - name: num_repeating dtype: int32 - name: smallest_repeating_chunk sequence: int64 - name: memorization_score dtype: float64 - name: templating_frequency_0.9 dtype: int64 - name: templating_frequency_0.8 dtype: int64 - name: prompt_perplexity dtype: float32 - name: generation_perplexity dtype: float32 - name: sequence_perplexity dtype: float32 splits: - name: pile.duped.70m num_bytes: 7003348430 num_examples: 5000000 - name: pile.duped.160m num_bytes: 7003348430 num_examples: 5000000 - name: pile.duped.410m num_bytes: 7003348430 num_examples: 5000000 - name: pile.duped.1b num_bytes: 7003348430 num_examples: 5000000 - name: pile.duped.1.4b num_bytes: 7003348430 num_examples: 5000000 - name: pile.duped.2.8b num_bytes: 7003348430 num_examples: 5000000 - name: pile.duped.6.9b num_bytes: 7003348430 num_examples: 5000000 - name: pile.duped.12b num_bytes: 7003348430 num_examples: 5000000 - name: pile.deduped.70m num_bytes: 7013409756 num_examples: 5000000 - name: pile.deduped.160m num_bytes: 7013409756 num_examples: 5000000 - name: pile.deduped.410m num_bytes: 7013409756 num_examples: 5000000 - name: pile.deduped.1b num_bytes: 7013409756 num_examples: 5000000 - name: pile.deduped.1.4b num_bytes: 7013409756 num_examples: 5000000 - name: pile.deduped.2.8b num_bytes: 7013409756 num_examples: 5000000 - name: pile.deduped.6.9b num_bytes: 7013409756 num_examples: 5000000 - name: pile.deduped.12b num_bytes: 7013409756 num_examples: 5000000 download_size: 48107269588 dataset_size: 112134065488 --- # 
Dataset Card for "pile-semantic-memorization-filter-results" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gunika-dhingra/findsum_worked
2023-09-20T05:09:07.000Z
[ "region:us" ]
gunika-dhingra
null
null
null
0
8
Entry not found
legacy107/qa_wikipedia_chunked
2023-09-21T04:25:09.000Z
[ "region:us" ]
legacy107
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answer_start dtype: int64 - name: answer dtype: string - name: article dtype: string - name: chunked_article sequence: string splits: - name: train num_bytes: 15700776313 num_examples: 110970 - name: validation num_bytes: 1842888919 num_examples: 13833 - name: test num_bytes: 1928000472 num_examples: 13873 download_size: 2970213547 dataset_size: 19471665704 --- # Dataset Card for "qa_wikipedia_chunked" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Linyuyu/zhouguangbo
2023-10-10T07:07:33.000Z
[ "region:us" ]
Linyuyu
null
null
null
0
8
Entry not found
maxxrichard/sports_plan
2023-09-20T13:15:01.000Z
[ "region:us" ]
maxxrichard
null
null
null
0
8
Entry not found
sugeun/pat_japan
2023-09-21T04:03:32.000Z
[ "region:us" ]
sugeun
null
null
null
0
8
Entry not found
Jackoon/dataset_huy
2023-09-21T07:57:03.000Z
[ "region:us" ]
Jackoon
null
null
null
0
8
Entry not found
DarrenLo/ygo_tcgcards
2023-10-05T13:45:10.000Z
[ "region:us" ]
DarrenLo
null
null
null
0
8
ToniAqqia/chico_synthetic
2023-09-21T19:12:49.000Z
[ "license:mit", "region:us" ]
ToniAqqia
null
null
null
0
8
--- license: mit ---
minwook/novatusTest
2023-09-22T02:27:11.000Z
[ "region:us" ]
minwook
null
null
null
0
8
--- dataset_info: features: - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 1384 num_examples: 2 download_size: 5958 dataset_size: 1384 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "novatusTest" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kewu93/three_styles_prompted_250_512x512_50perclass_identity
2023-09-22T13:24:26.000Z
[ "region:us" ]
kewu93
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: val path: data/val-* dataset_info: features: - name: image dtype: image - name: text dtype: string - name: style_class dtype: string splits: - name: train num_bytes: 4334353.0 num_examples: 150 - name: val num_bytes: 4317601.0 num_examples: 150 download_size: 0 dataset_size: 8651954.0 --- # Dataset Card for "three_styles_prompted_250_512x512_50perclass_identity" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sankettgorey/donut_model_invoice
2023-09-22T13:13:42.000Z
[ "region:us" ]
sankettgorey
null
null
null
0
8
Entry not found
aditijha/instruct_control_and_lima
2023-09-22T21:13:23.000Z
[ "region:us" ]
aditijha
null
null
null
0
8
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: source dtype: string splits: - name: train num_bytes: 7084154 num_examples: 2000 download_size: 4023227 dataset_size: 7084154 --- # Dataset Card for "instruct_control_and_lima" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aditijha/instruct_v1_2k
2023-09-22T21:16:55.000Z
[ "region:us" ]
aditijha
null
null
null
0
8
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string splits: - name: train num_bytes: 1475295.9366042826 num_examples: 2000 download_size: 788725 dataset_size: 1475295.9366042826 --- # Dataset Card for "instruct_v1_2k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
seank0602/A03_fandom_pygmalion
2023-09-23T17:35:57.000Z
[ "region:us" ]
seank0602
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: conversations list: - name: role dtype: string - name: value dtype: string splits: - name: train num_bytes: 1477380 num_examples: 750 download_size: 381654 dataset_size: 1477380 --- # Dataset Card for "A03_fandom_pygmalion" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Hadnet/olavo-notes-dataset
2023-09-23T19:15:28.000Z
[ "region:us" ]
Hadnet
null
null
null
0
8
--- dataset_info: features: - name: input_ids sequence: int32 - name: labels sequence: int64 - name: attention_mask sequence: bool splits: - name: train num_bytes: 408196 num_examples: 131 download_size: 54853 dataset_size: 408196 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "olavo-notes-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dim/povarenok_links
2023-09-23T22:52:09.000Z
[ "region:us" ]
dim
null
null
null
0
8
--- dataset_info: features: - name: title dtype: string - name: ingridients sequence: string - name: views dtype: int64 - name: likes dtype: int64 - name: ups dtype: int64 - name: link dtype: string splits: - name: train num_bytes: 15412981 num_examples: 46500 download_size: 2195713 dataset_size: 15412981 --- # Dataset Card for "povarenok_links" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kris8an/begin_only
2023-09-25T15:01:40.000Z
[ "region:us" ]
Kris8an
null
null
null
0
8
Entry not found
Kris8an/64_samples
2023-09-25T16:10:07.000Z
[ "region:us" ]
Kris8an
null
null
null
0
8
Entry not found
Kris8an/gtp_70k
2023-09-25T16:58:37.000Z
[ "region:us" ]
Kris8an
null
null
null
0
8
Entry not found
aditijha/instruct_v1_5k_and_lima
2023-09-26T02:22:11.000Z
[ "region:us" ]
aditijha
null
null
null
0
8
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: source dtype: string splits: - name: train num_bytes: 6691318 num_examples: 6000 download_size: 3598588 dataset_size: 6691318 --- # Dataset Card for "instruct_v1_5k_and_lima" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aditijha/instruct_v1_10k_and_lima
2023-09-26T02:22:34.000Z
[ "region:us" ]
aditijha
null
null
null
0
8
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: source dtype: string splits: - name: train num_bytes: 10473658 num_examples: 11000 download_size: 5587292 dataset_size: 10473658 --- # Dataset Card for "instruct_v1_10k_and_lima" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nitinbhayana/review_data_training
2023-09-26T07:26:09.000Z
[ "region:us" ]
nitinbhayana
null
null
null
0
8
Entry not found
open-ko-llm-leaderboard/results
2023-10-10T22:41:02.000Z
[ "region:us" ]
open-ko-llm-leaderboard
null
null
null
0
8
Entry not found
NusaCrowd/nergrit
2023-09-26T12:35:09.000Z
[ "language:ind", "license:mit", "named-entity-recognition", "region:us" ]
NusaCrowd
Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition (NER), Statement Extraction, and Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT). The Named Entity Recognition portion contains 19 entities as follows: 'CRD': Cardinal 'DAT': Date 'EVT': Event 'FAC': Facility 'GPE': Geopolitical Entity 'LAW': Law Entity (such as Undang-Undang) 'LOC': Location 'MON': Money 'NOR': Political Organization 'ORD': Ordinal 'ORG': Organization 'PER': Person 'PRC': Percent 'PRD': Product 'QTY': Quantity 'REG': Religion 'TIM': Time 'WOA': Work of Art 'LAN': Language
@misc{Fahmi_NERGRIT_CORPUS_2019, author = {Fahmi, Husni and Wibisono, Yudi and Kusumawati, Riyanti}, title = {{NERGRIT CORPUS}}, url = {https://github.com/grit-id/nergrit-corpus}, year = {2019} }
null
0
8
--- license: mit tags: - named-entity-recognition language: - ind --- # nergrit Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition (NER), Statement Extraction, and Sentiment Analysis developed by PT Gria Inovasi Teknologi (GRIT). The Named Entity Recognition portion contains 19 entities as follows: 'CRD': Cardinal 'DAT': Date 'EVT': Event 'FAC': Facility 'GPE': Geopolitical Entity 'LAW': Law Entity (such as Undang-Undang) 'LOC': Location 'MON': Money 'NOR': Political Organization 'ORD': Ordinal 'ORG': Organization 'PER': Person 'PRC': Percent 'PRD': Product 'QTY': Quantity 'REG': Religion 'TIM': Time 'WOA': Work of Art 'LAN': Language ## Dataset Usage Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`. ## Citation ``` @misc{Fahmi_NERGRIT_CORPUS_2019, author = {Fahmi, Husni and Wibisono, Yudi and Kusumawati, Riyanti}, title = {{NERGRIT CORPUS}}, url = {https://github.com/grit-id/nergrit-corpus}, year = {2019} } ``` ## License MIT ## Homepage [https://github.com/grit-id/nergrit-corpus](https://github.com/grit-id/nergrit-corpus) ### NusaCatalogue For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
Srinivas7/Data_bot
2023-09-26T18:57:08.000Z
[ "license:other", "region:us" ]
Srinivas7
null
null
null
0
8
--- license: other ---
Aliki/areto26
2023-09-27T08:43:35.000Z
[ "region:us" ]
Aliki
null
null
null
0
8
Entry not found
yuanmei424/xxt_en
2023-09-26T19:00:06.000Z
[ "region:us" ]
yuanmei424
null
null
null
0
8
--- dataset_info: features: - name: edit_prompt dtype: string - name: input_image dtype: image - name: edited_image dtype: image splits: - name: train num_bytes: 5329195147.25 num_examples: 2283951 download_size: 526250170 dataset_size: 5329195147.25 --- # Dataset Card for "xxt_en" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Audi0911/loganalyzer
2023-09-26T19:27:52.000Z
[ "license:openrail", "region:us" ]
Audi0911
null
null
null
0
8
--- license: openrail ---
Rageshhf/llama_recommendation_datset
2023-09-27T07:01:36.000Z
[ "region:us" ]
Rageshhf
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: recommendation dtype: string - name: instruction dtype: string - name: text dtype: string splits: - name: train num_bytes: 11447133 num_examples: 3283 download_size: 3250903 dataset_size: 11447133 --- # Dataset Card for "llama_recommendation_datset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Aples/FineTune_Dataset_Aples_1K
2023-09-27T19:26:31.000Z
[ "license:mit", "region:us" ]
Aples
null
null
null
0
8
--- license: mit ---
Globaly/categories-1k
2023-09-27T21:25:06.000Z
[ "region:us" ]
Globaly
null
null
null
0
8
Entry not found
infCapital/viet-llama2-ft-tiny
2023-09-28T07:12:29.000Z
[ "license:mit", "region:us" ]
infCapital
null
null
null
0
8
--- license: mit dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 287453537 num_examples: 346327 download_size: 143427693 dataset_size: 287453537 configs: - config_name: default data_files: - split: train path: data/train-* ---
Surajsangwan90/Benchmark_LLM
2023-09-30T06:33:29.000Z
[ "region:us" ]
Surajsangwan90
null
null
null
0
8
Entry not found
Anas986/amazon-shoe-reviews
2023-09-29T10:00:32.000Z
[ "region:us" ]
Anas986
null
null
null
0
8
--- dataset_info: features: - name: labels dtype: int64 - name: text dtype: string splits: - name: train num_bytes: 15128362.8 num_examples: 81000 - name: test num_bytes: 1680929.2 num_examples: 9000 download_size: 10009431 dataset_size: 16809292.0 --- # Dataset Card for "amazon-shoe-reviews" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TheAIchemist13/malyalam_asr_dataset
2023-09-29T12:16:02.000Z
[ "region:us" ]
TheAIchemist13
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: audio dtype: audio - name: ' transcriptions' dtype: string splits: - name: train num_bytes: 1437332887.196 num_examples: 3023 - name: test num_bytes: 576755142.814 num_examples: 1103 download_size: 1668143452 dataset_size: 2014088030.0100002 --- # Dataset Card for "malyalam_asr_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathanasdf/MathGLM-dataset-5M
2023-09-29T19:10:31.000Z
[ "license:afl-3.0", "region:us" ]
jonathanasdf
null
null
null
0
8
--- license: afl-3.0 --- Every 10th row from https://github.com/THUDM/MathGLM (original dataset has 50M entries)
anirudh-sub/debate_dataset_v2
2023-09-29T23:05:46.000Z
[ "region:us" ]
anirudh-sub
null
null
null
0
8
Entry not found
anirudh-sub/debate_dataset_v3.1
2023-09-30T00:52:51.000Z
[ "region:us" ]
anirudh-sub
null
null
null
0
8
Entry not found
amphora/plat-clean
2023-09-30T01:19:55.000Z
[ "region:us" ]
amphora
null
null
null
0
8
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 33608338 num_examples: 24926 download_size: 16086395 dataset_size: 33608338 --- # Dataset Card for "plat-clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AtAndDev/ShareGPT-Vicuna-v3-cleaned-unfiltered
2023-09-30T12:03:18.000Z
[ "region:us" ]
AtAndDev
null
null
null
0
8
--- dataset_info: features: - name: id dtype: string - name: text dtype: string splits: - name: train num_bytes: 1211675 num_examples: 145 download_size: 0 dataset_size: 1211675 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ShareGPT-Vicuna-v3-cleaned-unfiltered" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mHossain/social_mdedia_comment_v2
2023-09-30T12:51:40.000Z
[ "region:us" ]
mHossain
null
null
null
0
8
Entry not found
youngermax/sherlock
2023-10-01T01:35:46.000Z
[ "region:us" ]
youngermax
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: article dtype: string - name: infobox dtype: string splits: - name: train num_bytes: 373301449 num_examples: 27906 download_size: 216489948 dataset_size: 373301449 --- # Dataset Card for "sherlock" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SebastianMoncaleano/llama2-5.2k-cammel
2023-10-01T20:56:23.000Z
[ "region:us" ]
SebastianMoncaleano
null
null
null
0
8
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 1261568 num_examples: 5249 download_size: 240693 dataset_size: 1261568 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "llama2-5.2k-cammel" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Valarmathy/cricket_indvspak
2023-10-02T04:36:44.000Z
[ "task_categories:table-question-answering", "task_categories:tabular-classification", "size_categories:1K<n<10K", "license:cc0-1.0", "region:us" ]
Valarmathy
null
null
null
0
8
--- license: cc0-1.0 configs: - config_name: Valarmathy--cricket_indvspak task_categories: - table-question-answering - tabular-classification size_categories: - 1K<n<10K ---
BaffledCoder/EnvironmentalScience
2023-10-06T13:09:13.000Z
[ "region:us" ]
BaffledCoder
null
null
null
0
8
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/datasets-cards {} --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
cbasconc/levels
2023-10-02T20:01:50.000Z
[ "language:es", "region:us" ]
cbasconc
null
null
null
0
8
--- language: - es pretty_name: levels ---
neural-bridge/full_cqa_22k
2023-10-02T20:14:12.000Z
[ "region:us" ]
neural-bridge
null
null
null
0
8
--- dataset_info: features: - name: clear_prompt dtype: string splits: - name: train num_bytes: 43183498.53262665 num_examples: 17433 - name: test num_bytes: 10797732.467373349 num_examples: 4359 download_size: 32335855 dataset_size: 53981231.0 --- # Dataset Card for "full_cqa_22k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
relaxtraffic/attrain
2023-10-03T09:36:58.000Z
[ "region:us" ]
relaxtraffic
null
null
null
0
8
Entry not found
Falah/3d_perspective_drawing
2023-10-03T12:36:18.000Z
[ "region:us" ]
Falah
null
null
null
0
8
--- dataset_info: features: - name: prompts dtype: string splits: - name: train num_bytes: 174080 num_examples: 1000 download_size: 18501 dataset_size: 174080 --- # Dataset Card for "3d_perspective_drawing" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kris8an/30k_no_obs
2023-10-03T16:43:30.000Z
[ "region:us" ]
Kris8an
null
null
null
0
8
Entry not found
yudiwbs/olimpiade
2023-10-04T02:44:10.000Z
[ "region:us" ]
yudiwbs
null
null
null
0
8
Source: https://www.kaggle.com/datasets/heesoo37/120-years-of-olympic-history-athletes-and-results Dataset for the practicum module https://docs.google.com/document/d/1ehUlhdLeubEJz9qc3fvGeCRtZindHhowyGhg5Pbqq3w/edit
lhallee/uniref_small
2023-10-04T03:12:15.000Z
[ "region:us" ]
lhallee
null
null
null
0
8
--- dataset_info: features: - name: uniref dtype: string splits: - name: train num_bytes: 20739509 num_examples: 100000 download_size: 20824692 dataset_size: 20739509 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "uniref_small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dz-data-ai/a10_command_script
2023-10-04T08:29:15.000Z
[ "license:cc", "region:us" ]
dz-data-ai
null
null
null
0
8
--- license: cc pretty_name: Amaranth10 command script ---
AndyLiu0104/Soldering-Data-Tiny-1004-unsolder-area
2023-10-04T16:28:52.000Z
[ "region:us" ]
AndyLiu0104
null
null
null
0
8
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 18073742.875 num_examples: 10481 download_size: 0 dataset_size: 18073742.875 --- # Dataset Card for "Soldering-Data-Tiny-1004-unsolder-area" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HamdanXI/cleaned_daily_dialog_sentence
2023-10-04T07:42:26.000Z
[ "region:us" ]
HamdanXI
null
null
null
0
8
--- dataset_info: features: - name: dialogue dtype: string splits: - name: train num_bytes: 5434241 num_examples: 77350 download_size: 3467625 dataset_size: 5434241 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "cleaned_daily_dialog_sentence" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Viswa09/gujarati_speechdata
2023-10-05T03:40:48.000Z
[ "region:us" ]
Viswa09
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: audio dtype: audio - name: language dtype: string - name: language_probability dtype: float64 - name: segments list: - name: avg_logprob dtype: float64 - name: start dtype: float64 - name: end dtype: float64 - name: text dtype: string splits: - name: train num_bytes: 4531512898.0 num_examples: 434 download_size: 2099229077 dataset_size: 4531512898.0 --- # Dataset Card for "gujarati_speechdata" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
BlazeLlama/euclid_elements_eng_propositions
2023-10-04T21:35:19.000Z
[ "license:apache-2.0", "region:us" ]
BlazeLlama
null
null
null
0
8
--- license: apache-2.0 ---
gayanin/pubmed-abs-del-25
2023-10-05T00:35:34.000Z
[ "region:us" ]
gayanin
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* dataset_info: features: - name: 'Unnamed: 0' dtype: int64 - name: refs dtype: string - name: del_25 dtype: string splits: - name: train num_bytes: 18493557 num_examples: 74724 - name: test num_bytes: 2367411 num_examples: 9341 - name: validation num_bytes: 2429052 num_examples: 9341 download_size: 13058446 dataset_size: 23290020 --- # Dataset Card for "pubmed-abs-del-25" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kkboy1/leaudiocsv
2023-10-05T01:29:33.000Z
[ "region:us" ]
kkboy1
null
null
null
0
8
ikiransuryavanshi/llama_training
2023-10-05T09:46:13.000Z
[ "region:us" ]
ikiransuryavanshi
null
null
null
0
8
Entry not found
daspartho/agree_disagree
2023-10-05T13:46:44.000Z
[ "region:us" ]
daspartho
null
null
null
1
8
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: statement dtype: string - name: reply dtype: string - name: sentiment dtype: int64 splits: - name: train num_bytes: 267030 num_examples: 1660 download_size: 113328 dataset_size: 267030 --- # Dataset Card for "agree_disagree" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vsarathy/nl-robotics-translation-simple_english-30k-context
2023-10-05T14:43:52.000Z
[ "region:us" ]
vsarathy
null
null
null
0
8
Entry not found
lofcz/cs_autotherapy
2023-10-05T21:43:21.000Z
[ "license:mit", "region:us" ]
lofcz
null
null
null
0
8
--- license: mit ---
minh21/COVID-QA-sentence-transformer-biencoder-data-75_25
2023-10-06T07:38:09.000Z
[ "region:us" ]
minh21
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: question dtype: string - name: positive dtype: string - name: negative dtype: string - name: document_id dtype: int64 splits: - name: train num_bytes: 25188652 num_examples: 12274 - name: test num_bytes: 2473938 num_examples: 1360 download_size: 1946559 dataset_size: 27662590 --- # Dataset Card for "COVID-QA-sentence-transformer-biencoder-data-75_25" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
berardi6/LBcmopcenscaspnewwsx2
2023-10-06T10:35:51.000Z
[ "region:us" ]
berardi6
null
null
null
0
8
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 503461 num_examples: 1788 download_size: 0 dataset_size: 503461 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "LBcmopcenscaspnewwsx2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
c123ian/khan_academy_context
2023-10-06T12:04:19.000Z
[ "region:us" ]
c123ian
null
null
null
0
8
--- dataset_info: features: - name: context dtype: string splits: - name: train num_bytes: 20828078 num_examples: 2167 download_size: 8344879 dataset_size: 20828078 --- # Dataset Card for "khan_academy_context" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dmrau/trec_dl19
2023-10-09T13:07:39.000Z
[ "region:us" ]
dmrau
null
null
null
0
8
--- configs: - config_name: default data_files: - split: queries path: data/queries-* - split: corpus path: data/corpus-* dataset_info: features: - name: _id dtype: string - name: text dtype: string - name: title dtype: string splits: - name: queries num_bytes: 2194 num_examples: 43 - name: corpus num_bytes: 2181810 num_examples: 5482 download_size: 1207481 dataset_size: 2184004 --- # Dataset Card for "trec_dl19" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tog/dolphin_5k_test
2023-10-06T15:06:19.000Z
[ "task_categories:text-generation", "language:en", "license:apache-2.0", "region:us" ]
tog
null
null
null
0
8
--- language: - en license: apache-2.0 task_categories: - text-generation dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 8726321.400179625 num_examples: 5000 download_size: 4973800 dataset_size: 8726321.400179625 configs: - config_name: default data_files: - split: train path: data/train-* --- Tiny Dolphin 🐬 see https://erichartford.com/dolphin ## Dataset details This dataset is an extract of ~1 million rows of FLANv2 augmented with GPT-4 completions (flan1m-alpaca-uncensored.jsonl). It is derived from this [dataset](https://huggingface.co/datasets/ehartford/dolphin). ### Loading ```python dataset = load_dataset("tog/dolphin_5k_test") ``` This dataset is licensed under Apache 2.0 for commercial or non-commercial use.
MaxReynolds/Lee_Souder_Combined
2023-10-06T20:24:19.000Z
[ "region:us" ]
MaxReynolds
null
null
null
0
8
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 1084639.0 num_examples: 37 download_size: 1080965 dataset_size: 1084639.0 --- # Dataset Card for "Lee_Souder_Combined" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
stepkurniawan/qa-rag-llama2-13B-chat-hf
2023-10-07T22:19:36.000Z
[ "license:mit", "region:us" ]
stepkurniawan
null
null
null
0
8
--- license: mit dataset_info: features: - name: question dtype: string - name: ground_truths sequence: string - name: answer dtype: string - name: contexts dtype: string splits: - name: train num_bytes: 296284 num_examples: 100 download_size: 176771 dataset_size: 296284 configs: - config_name: default data_files: - split: train path: data/train-* --- The question-answer dataset is: https://huggingface.co/datasets/stepkurniawan/qa_sustainability_wiki It is answered by this LLM: https://huggingface.co/meta-llama/Llama-2-13b-hf This dataset will later be evaluated with a framework such as RAGAS to judge how well the RAG model performs.
LongJiAn/marsh-capstone
2023-10-07T06:17:47.000Z
[ "region:us" ]
LongJiAn
null
null
null
0
8
Entry not found
JiggaBooJombs/Novelist
2023-10-07T10:20:27.000Z
[ "license:apache-2.0", "region:us" ]
JiggaBooJombs
null
null
null
0
8
--- license: apache-2.0 ---
carnival13/massive_val_DA5_tokenized
2023-10-07T11:03:09.000Z
[ "region:us" ]
carnival13
null
null
null
0
8
--- dataset_info: features: - name: pass_label dtype: int64 - name: input_ids sequence: int32 - name: attention_mask sequence: int8 splits: - name: train num_bytes: 16518310 num_examples: 24160 download_size: 3778628 dataset_size: 16518310 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "massive_val_DA5_tokenized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
SuodhanJ6/elliptic_txs_features
2023-10-08T06:20:10.000Z
[ "region:us" ]
SuodhanJ6
null
null
null
0
8
Entry not found
Falah/book_cover_prompts_with_sections
2023-10-08T08:54:49.000Z
[ "region:us" ]
Falah
null
null
null
0
8
--- dataset_info: features: - name: prompts dtype: string splits: - name: train num_bytes: 393452 num_examples: 1000 download_size: 45494 dataset_size: 393452 --- # Dataset Card for "book_cover_prompts_with_sections" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/synpre_delete_1M
2023-10-08T09:12:38.000Z
[ "region:us" ]
tyzhu
null
null
null
0
8
--- dataset_info: features: - name: inputs dtype: string - name: targets dtype: string splits: - name: train num_bytes: 1742619734 num_examples: 1000000 - name: validation num_bytes: 17552085 num_examples: 10000 download_size: 1091004286 dataset_size: 1760171819 --- # Dataset Card for "synpre_delete_1M" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pythainlp/thaisum
2023-10-08T14:06:17.000Z
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_d...
pythainlp
null
null
null
0
8
--- annotations_creators: - no-annotation language_creators: - found language: - th license: - mit multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - summarization - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: ThaiSum --- # Dataset Card for ThaiSum This dataset was forked from [thaisum](https://huggingface.co/datasets/thaisum) to HF hub. ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/nakhunchumpolsathien/ThaiSum - **Repository:** https://github.com/nakhunchumpolsathien/ThaiSum - **Paper:** - **Leaderboard:** - **Point of Contact:** https://github.com/nakhunchumpolsathien ### Dataset Summary ThaiSum is a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard. 
This dataset consists of over 350,000 article and summary pairs written by journalists. ### Supported Tasks and Leaderboards summarization, language modeling ### Languages Thai ## Dataset Structure ### Data Instances ``` {'body': 'กีเก ซานเชซ ฟลอเรส\xa0 กุนซือเลือดกระทิงของทีมวัตฟอร์ด\xa0 เมินประเด็นจุดโทษปัญหาในเกมพรีเมียร์ลีก อังกฤษ นัดที่แตนอาละวาดเปิดบ้านพ่าย คริสตัล พาเลซ 0-1ชี้ทีมของเขาเล่นไม่ดีพอเอง,สำนักข่าวต่างประเทศรายงานวันที่ 27 ก.ย. ว่า กีเก ซานเชซ ฟลอเรส\xa0 ผู้จัดการทีมชาวสเปน ของ แตนอาละวาด วัตฟอร์ด\xa0 ยอมรับทีมของเขาเล่นได้ไม่ดีพอเอง ในเกมพรีเมียร์ลีก อังกฤษ นัดเปิดบ้านพ่าย อินทรีผงาด คริสตัล พาเลซ 0-1 เมื่อคืนวันอาทิตย์ที่ผ่านมา,เกมนี้จุดเปลี่ยนมาอยู่ที่การได้จุดโทษในช่วงครึ่งหลังของ คริสตัล พาเลซ ซึ่งไม่ค่อยชัดเจนเท่าไหร่ว่า อัลลัน นียอม นั้นไปทำฟาล์วใส่ วิลฟรีด ซาฮา ในเขตโทษหรือไม่ แต่ผู้ตัดสินก็ชี้เป็นจุดโทษ ซึ่ง โยอัน กาบาย สังหารไม่พลาด และเป็นประตูชัยช่วยให้ คริสตัล พาเลซ เอาชนะ วัตฟอร์ด ไป 1-0 และเป็นการพ่ายแพ้ในบ้านนัดแรกของวัตฟอร์ดในฤดูกาลนี้อีกด้วย,ฟลอเรส กล่าวว่า มันเป็นเรื่องยากในการหยุดเกมรุกของคริสตัล พาเลซ ซึ่งมันอึดอัดจริงๆสำหรับเรา เราเล่นกันได้ไม่ดีนักในตอนที่ได้ครองบอล เราต้องเล่นทางริมเส้นให้มากกว่านี้ เราไม่สามารถหยุดเกมสวนกลับของพวกเขาได้ และแนวรับของเราก็ยืนไม่เป็นระเบียบสักเท่าไหร่ในช่วงครึ่งแรก ส่วนเรื่องจุดโทษการตัดสินใจขั้นสุดท้ายมันอยู่ที่ผู้ตัดสิน ซึ่งมันเป็นการตัดสินใจที่สำคัญ ผมเองก็ไม่รู้ว่าเขาตัดสินถูกหรือเปล่า บางทีมันอาจเป็นจุดที่ตัดสินเกมนี้เลย แต่เราไม่ได้แพ้เกมนี้เพราะจุดโทษ เราแพ้ในวันนี้เพราะเราเล่นไม่ดีและคริสตัล พาเลซ เล่นดีกว่าเรา เราไม่ได้มีฟอร์มการเล่นที่ดีในเกมนี้เลย', 'summary': 'กีเก ซานเชซ ฟลอเรส กุนซือเลือดกระทิงของทีมวัตฟอร์ด เมินประเด็นจุดโทษปัญหาในเกมพรีเมียร์ลีก อังกฤษ นัดที่แตนอาละวาดเปิดบ้านพ่าย คริสตัล พาเลซ 0-1ชี้ทีมของเขาเล่นไม่ดีพอเอง', 'tags': 'พรีเมียร์ลีก,วัตฟอร์ด,คริสตัล พาเลซ,กีเก ซานเชซ ฟลอเรส,ข่าวกีฬา,ข่าว,ไทยรัฐออนไลน์', 'title': 'ฟลอเรส รับ วัตฟอร์ดห่วยเองเกมพ่ายพาเลซคาบ้าน', 'type': '', 'url': 'https://www.thairath.co.th/content/528322'} ``` ### Data Fields - `title`: title of 
article - `body`: body of article - `summary`: summary of article - `type`: type of article, if any - `tags`: tags of article, separated by `,` - `url`: URL of article ### Data Splits train/valid/test: 358868 / 11000 / 11000 ## Dataset Creation ### Curation Rationale Sequence-to-sequence (Seq2Seq) models have shown great achievement in text summarization. However, Seq2Seq model often requires large-scale training data to achieve effective results. Although many impressive advancements in text summarization field have been made, most of summarization studies focus on resource-rich languages. The progress of Thai text summarization is still far behind. The dearth of large-scale dataset keeps Thai text summarization in its infancy. As far as our knowledge goes, there is not a large-scale dataset for Thai text summarization available anywhere. Thus, we present ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard. ### Source Data #### Initial Data Collection and Normalization We used a python library named Scrapy to crawl articles from several news websites namely Thairath, Prachatai, ThaiPBS and, The Standard. We first collected news URLs provided in their sitemaps. During web-crawling, we used HTML markup and metadata available in HTML pages to identify article text, summary, headline, tags and label. Collected articles were published online from 2014 to August 2020. <br> <br> We further performed data cleansing process to minimize noisy data. We filtered out articles that their article text or summary is missing. Articles that contains article text with less than 150 words or summary with less than 15 words were removed. We also discarded articles that contain at least one of these following tags: ‘ดวง’ (horoscope), ‘นิยาย’ (novel), ‘อินสตราแกรมดารา’ (celebrity Instagram), ‘คลิปสุดฮา’(funny video) and ‘สรุปข่าว’ (highlight news). 
Some summaries were completely irrelevant to their original article texts. To eliminate those irrelevant summaries, we calculated an abstractedness score between each summary and its article text. The abstractedness score is written formally as: <br> <center><a href="https://www.codecogs.com/eqnedit.php?latex=\begin{equation}&space;\frac{|S-A|}{r}&space;\times&space;100&space;\end{equation}" target="_blank"><img src="https://latex.codecogs.com/gif.latex?\begin{equation}&space;\frac{|S-A|}{r}&space;\times&space;100&space;\end{equation}" title="\begin{equation} \frac{|S-A|}{r} \times 100 \end{equation}" /></a></center><br> <br>Where 𝑆 denotes the set of article tokens, 𝐴 denotes the set of summary tokens, and 𝑟 denotes the total number of summary tokens. We omitted articles that have an abstractedness score at 1-grams higher than 60%. <br><br> It is important to point out that we used [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp), version 2.2.4, tokenizing engine = newmm, to process Thai texts in this study. It is challenging to tokenize running Thai text into words or sentences because there are no clear word/sentence delimiters in the Thai language. Therefore, using different tokenization engines may result in different segmentation of words/sentences. After the data-cleansing process, the ThaiSum dataset contains over 358,000 articles. The size of this dataset is comparable to a well-known English document summarization dataset, the CNN/Daily Mail dataset. Moreover, we analyse the characteristics of this dataset by measuring the abstractedness level, compression rate, and content diversity. For more details, see [thaisum_exploration.ipynb](https://github.com/nakhunchumpolsathien/ThaiSum/blob/master/thaisum_exploration.ipynb). #### Dataset Statistics ThaiSum dataset consists of 358,868 articles. Average lengths of article texts and summaries are approximately 530 and 37 words respectively. As mentioned earlier, we also collected headlines, tags and labels provided in each article.
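The abstractedness filter described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: whitespace splitting stands in for PyThaiNLP's `newmm` tokenizer, and the score is read as the percentage of summary tokens that do not appear in the article.

```python
def abstractedness(article: str, summary: str) -> float:
    """Percentage of 1-gram summary tokens absent from the article."""
    article_tokens = set(article.split())  # stand-in for newmm tokenization
    summary_tokens = summary.split()
    if not summary_tokens:
        return 0.0
    novel = [t for t in summary_tokens if t not in article_tokens]
    return 100.0 * len(novel) / len(summary_tokens)

def keep_pair(article: str, summary: str, threshold: float = 60.0) -> bool:
    """Drop pairs whose summary is too abstract, per the 60% cutoff above."""
    return abstractedness(article, summary) <= threshold
```

A summary that shares every token with its article scores 0 and is kept; one made entirely of unseen tokens scores 100 and is dropped.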
Tags are similar to keywords of the article. An article normally contains several tags but only a few labels. Tags can be names of places or persons that the article is about, while labels indicate the news category (politics, entertainment, etc.). Ultimately, ThaiSum contains 538,059 unique tags and 59 unique labels. Note that not every article contains tags or labels.

|Dataset Size| 358,868 | articles |
|:---|---:|---:|
|Avg. Article Length| 529.5 | words|
|Avg. Summary Length | 37.3 | words|
|Avg. Headline Length | 12.6 | words|
|Unique Vocabulary Size | 407,355 | words|
|Occurring > 10 times | 81,761 | words|
|Unique News Tag Size | 538,059 | tags|
|Unique News Label Size | 59 | labels|

#### Who are the source language producers? Journalists of respective articles ### Annotations #### Annotation process `summary`, `type` and `tags` are created by the journalists who wrote the articles and/or their publishers. #### Who are the annotators? `summary`, `type` and `tags` are created by the journalists who wrote the articles and/or their publishers. ### Personal and Sensitive Information All data are public news articles. No personal and sensitive information is expected to be included. ## Considerations for Using the Data ### Social Impact of Dataset - News summarization in Thai - Language modeling for Thai news ### Discussion of Biases - [ThaiPBS](https://www.thaipbs.or.th/home) [receives funding from the Thai government](https://www.bangkokbiznews.com/blog/detail/648740). - [Thairath](https://www.thairath.co.th/) is known as [the most popular newspaper in Thailand](https://mgronline.com/onlinesection/detail/9620000058532); no clear political leaning. - [The Standard](https://thestandard.co/) is a left-leaning online magazine. - [Prachathai](https://prachatai.com/) is a left-leaning, human-rights-focused news site.
### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [@nakhunchumpolsathien](https://github.com/nakhunchumpolsathien/) [@caramelWaffle](https://github.com/caramelWaffle) ### Licensing Information MIT License ### Citation Information ``` @mastersthesis{chumpolsathien_2020, title={Using Knowledge Distillation from Keyword Extraction to Improve the Informativeness of Neural Cross-lingual Summarization}, author={Chumpolsathien, Nakhun}, year={2020}, school={Beijing Institute of Technology} ``` ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
kargaranamir/GlotLID
2023-10-08T23:32:03.000Z
[ "multilinguality:multilingual", "language:zxx", "language:pes", "license:cc0-1.0", "Glot", "Glot500", "GlotLID", "multilingual", "LangID", "LID", "Language Identification", "region:us" ]
kargaranamir
GlotLID Corpus \
null
null
0
8
--- multilinguality: - multilingual extra_gated_heading: Access GlotLID Corpus on Hugging Face extra_gated_description: >- This is a form to enable access to GlotLID Corpus on Hugging Face. Please email amir@cis.lmu.de with subject of "GlotLID Corpus", a form will be sent to you and after agreeing with our license terms and use policy your requests to access this repository will be accepted. Requests will be processed in 1-2 days. extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provided us in form, or your request will not be approved.**" extra_gated_button_content: Submit extra_gated_fields: Name: text Email: text Affiliation: text Country: text Usecase: text I will only use this corpus myself for the use case that I have described above: checkbox license: cc0-1.0 language: - zxx - pes pretty_name: GlotLID Corpus tags: - Glot - Glot500 - GlotLID - multilingual - LangID - LID - Language Identification --- # GlotLID Corpus ## License We do not own any of the text from which this data has been extracted. We license the actual packaging, the metadata and the annotations of this data under cc0-1.0 (waiving all of the rights under copyright law). If you are a website/dataset owner and do not want your data to be included in this corpus, please send us an email at amir@cis.lmu.de. ## Ethical Considerations **1. Biases:** The text corpus may reflect the perspectives, opinions, or demographics of its sources or creators. It is important for users to critically evaluate the text in context. **2. Representativeness:** While we have aimed for diversity and inclusivity, the text corpus may not fully represent all native speakers. Users should be mindful of any potential underrepresentation. **3. Ethics:** We acknowledge that the collection and use of text data can have ethical implications.
We have strived to handle the data responsibly, but we encourage users to consider the broader ethical implications of their own research or applications. ## Citation If you use any part of this code and collected data in your research, please cite it using the following BibTeX entry. ``` @inproceedings{ kargaran2023glotlid, title={{GlotLID}: Language Identification for Low-Resource Languages}, author={Kargaran, Amir Hossein and Imani, Ayyoob and Yvon, Fran{\c{c}}ois and Sch{\"u}tze, Hinrich}, booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing}, year={2023}, url={https://openreview.net/forum?id=dl4e3EBz5j} } ```
carnival13/eng_sur_val_DA_tokenized
2023-10-09T07:13:09.000Z
[ "region:us" ]
carnival13
null
null
null
0
8
--- dataset_info: features: - name: pass_label dtype: int64 - name: input_ids sequence: int32 - name: attention_mask sequence: int8 splits: - name: train num_bytes: 30391635 num_examples: 22390 download_size: 5882210 dataset_size: 30391635 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "eng_sur_val_DA_tokenized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
boundless-asura/wikihow
2023-10-09T08:34:42.000Z
[ "region:us" ]
boundless-asura
null
null
null
0
8
Entry not found
imdatta0/openbqa_sciq
2023-10-09T08:38:01.000Z
[ "region:us" ]
imdatta0
null
null
null
0
8
Entry not found
VishalCh/sql-parsed
2023-10-09T13:11:05.000Z
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:table-question-answering", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "SQL", "code", "NLP", "text-to-sql", "context-sql", "spider", "wikisql", "sqlglot", "region:us" ]
VishalCh
null
null
null
0
8
--- license: cc-by-4.0 task_categories: - text-generation - question-answering - table-question-answering language: - en tags: - SQL - code - NLP - text-to-sql - context-sql - spider - wikisql - sqlglot pretty_name: sql-create-context size_categories: - 10K<n<100K ---
clickbait_news_bg
2023-01-25T14:28:03.000Z
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:bg", "license:unknown", "region:us" ]
null
Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.
@InProceedings{clickbait_news_bg, title = {Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017.}, authors={Data Science Society}, year={2017}, url={https://gitlab.com/datasciencesociety/case_fake_news/} }
null
0
7
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - bg license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - fact-checking pretty_name: Clickbait/Fake News in Bulgarian dataset_info: features: - name: fake_news_score dtype: class_label: names: '0': legitimate '1': fake - name: click_bait_score dtype: class_label: names: '0': normal '1': clickbait - name: content_title dtype: string - name: content_url dtype: string - name: content_published_time dtype: string - name: content dtype: string splits: - name: train num_bytes: 24480402 num_examples: 2815 - name: validation num_bytes: 6752242 num_examples: 761 download_size: 8569575 dataset_size: 31232644 --- # Dataset Card for Clickbait/Fake News in Bulgarian ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Data Science Society / Case Fake 
News](https://gitlab.com/datasciencesociety/case_fake_news) - **Repository:** [Data Science Society / Case Fake News / Data](https://gitlab.com/datasciencesociety/case_fake_news/-/tree/master/data) - **Paper:** [This paper uses the dataset.](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a corpus of Bulgarian news, collected over a fixed period of time, whose factuality had been questioned. The news comes from 377 different sources from various domains, including politics, interesting facts and tips & tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the [Bulgarian Association of PR Agencies](http://www.bapra.bg/) and is available in [Gitlab](https://gitlab.com/datasciencesociety/). The corpus was automatically collected, and then annotated by students of journalism. The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits. There are 761 testing examples. There is a 98% correlation between fake news and click-baits. One important aspect about the training dataset is that it contains many repetitions. This should not be surprising, as it attempts to represent a natural distribution of factual vs. fake news online over a period of time. As publishers of fake news often have a group of websites that feature the same deceiving content, we should expect some repetition. In particular, the training dataset contains 434 unique articles with duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that if an article is reposted, it is more likely to be fake news. The number of fake news articles that have a duplicate in the training dataset is 1,018, whereas the number of articles with genuine content that have a duplicate article in the training set is 322.
(The dataset description is from the following [paper](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf).) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Bulgarian ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields Each entry in the dataset consists of the following elements: * `fake_news_score` - a label indicating whether the article is fake or not * `click_bait_score` - another label indicating whether it is a click-bait * `content_title` - article heading * `content_url` - URL of the original article * `content_published_time` - date of publication * `content` - article content ### Data Splits The **training dataset** contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; The **validation dataset** contains 761 testing examples. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@tsvm](https://github.com/tsvm), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
counter
2023-01-25T14:28:41.000Z
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "task_ids:topic-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", ...
null
The COrpus of Urdu News TExt Reuse (COUNTER) corpus contains 1200 documents with real examples of text reuse from the field of journalism. It has been manually annotated at document level with three levels of reuse: wholly derived, partially derived and non derived.
@Article{Sharjeel2016, author="Sharjeel, Muhammad and Nawab, Rao Muhammad Adeel and Rayson, Paul", title="COUNTER: corpus of Urdu news text reuse", journal="Language Resources and Evaluation", year="2016", pages="1--27", issn="1574-0218", doi="10.1007/s10579-016-9367-2", url="http://dx.doi.org/10.1007/s10579-016-9367-2" }
null
0
7
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - ur license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: - text-scoring - semantic-similarity-scoring - topic-classification paperswithcode_id: counter pretty_name: COUNTER dataset_info: features: - name: source struct: - name: filename dtype: string - name: headline dtype: string - name: body dtype: string - name: total_number_of_words dtype: int64 - name: total_number_of_sentences dtype: int64 - name: number_of_words_with_swr dtype: int64 - name: newspaper dtype: string - name: newsdate dtype: string - name: domain dtype: class_label: names: '0': business '1': sports '2': national '3': foreign '4': showbiz - name: classification dtype: class_label: names: '0': wholly_derived '1': partially_derived '2': not_derived - name: derived struct: - name: filename dtype: string - name: headline dtype: string - name: body dtype: string - name: total_number_of_words dtype: int64 - name: total_number_of_sentences dtype: int64 - name: number_of_words_with_swr dtype: int64 - name: newspaper dtype: string - name: newsdate dtype: string - name: domain dtype: class_label: names: '0': business '1': sports '2': national '3': foreign '4': showbiz - name: classification dtype: class_label: names: '0': wholly_derived '1': partially_derived '2': not_derived splits: - name: train num_bytes: 2598872 num_examples: 600 download_size: 1356306 dataset_size: 2598872 --- # Dataset Card for COUNTER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation 
Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://ucrel.lancs.ac.uk/textreuse/counter.php - **Repository:** [More Information Needed] - **Paper:** https://link.springer.com/article/10.1007%2Fs10579-016-9367-2 - **Leaderboard:** [More Information Needed] - **Point of Contact:** [UCREL](ucrel@lancaster.ac.uk) ### Dataset Summary The COrpus of Urdu News TExt Reuse (COUNTER) corpus contains 1200 documents with real examples of text reuse from the field of journalism.
It has been manually annotatedat document level with three levels of reuse: wholly derived, partially derived andnon derived ### Supported Tasks and Leaderboards other:text-reuse ### Languages ur ## Dataset Structure ### Data Instances Here is one example from the dataset: ``` {"derived": { "body" :"میر پور(وقت نیوز) بنگلہ دیش نے 5 میچوں کی سیریز کےآ خری میچ میں بھی فتح حاصل کر کے سیریز میں وائٹ واش کر دیا،زمبابوے ایک میچ بھی نہ جیت سکا۔آخری میچ میں زمبابوے کے 129 رنز کا ہدف بنگال ٹائیگرز نے 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔بنگلہ دیش کے شیر بنگلہ سٹیڈیم میر پور میں کھیلے گئے آخری ایک روزہ میچ میں زمبابوے کے کپتان چکمبورا نے ٹاس جیت کے بینٹگ کا فیصلہ کیا جو ان کی ٹیم کیلئے ڈراؤنا خواب ثابت ہوا اور پوری ٹیم 30 اوورز میں 128 رنز بنا کر پویلین لوٹ گئی زمبابوے کی پہلی وکٹ 16 رنز پر گری جب سکندر رضا صرف 9 رنز بنا کر مشرقی مرتضی کی بال پر آؤٹ ہوئے اس کے بعد مساکد ازااور سباندا کی پارٹنرشپنے ٹیم کا سکور95 رنز تک پہنچا دیا ۔مساکدازا 52 رنز بنا کر جبیر الحسن کا شکار بنے جبکہ سباندا نے 37 رنز کی اننگز کھیلی اس کے بعد کئی بھی زمبابوے کا کھلاڑی جم کر نہ کھیل سکا۔بنگال ٹائیگرز کی جانب سے عمدہ باؤلنگ کے نتیجے میں کپتان چکمبورا سمیت 8 کھلاڑی ڈبل فیگر کراس نہ کر سکے ۔بنگلہ دیش کی جانب سے ایک روزہ میچوں میں ڈیبیو کرنے والے تیج السلام نے اپنے پہلے ہی میچ میں ہیٹرک کی اسلام نے 7 اوورز میں صرف 14 رنز دئے اور چار کھلاڑیوں کع آؤٹ کیا جبکہ شکیب الحسن نے 30 رنز دیکر 3 اور جبیر الحسن نے41 رنز دیکر2 کھلاڑیوں کو پویلین کی راہ دکھائی ۔ 128 رنز کے جواب میں بنگال ٹائیگرز نے بیٹنگ شروع کی مشکلات کا سامنا رہا ان کے بھی ابتدائی 3 کھلاڑی 47 رنز پر پویلین لوٹ گئے۔ تمیم اقبال 10، انعام الحق8 رنز بنا کر آؤٹ ہوئے،آل راؤنڈر شکیب الحسن بغیر کوئی رنز بنائیپویلین لوٹ گئے وکٹ کیپر مشفق الرحیم صرف 11 رنز بنا کر چتارہ کا شکار بن گئے۔محمد اللہ نے51 رنز کی میچ وننگ اننگز کھیلی جبکہ صابر رحمٰن13 رنز بنا کر ناٹ آؤٹ رہے۔ زمبابوے کی جانب سے چتارہ نے 3 اور پنیا نگارا نے 2 کھلاڑیوں کو آؤٹ کیا ۔فتح کے ساتھ بنگلہ دیش نے سیریز میں وائٹ واش کر دیا۔زمبابوے کی ٹیم کوئی میچ نہ جیت سکی،تیج السلام کو میچ کا 
بہترین ایوارڈ دیا گیا جبکہ سیریز کا بہترین کھلاڑی مشفق الرحیم کو قرار دیا گیا۔", "classification": 1, # partially_derived "domain": 1, # sports "filename": "0001p.xml", "headline": "بنگلہ دیش کا زمبابوے کا ون ڈے سیریز میں 5-0 سے وائٹ واش", "newsdate": "02.12.14", "newspaper": "daily_waqt", "number_of_words_with_swr": 265, "total_number_of_sentences": 13, "total_number_of_words": 393}, "source": { "body": "ڈھاکہ ۔ یکم دسمبر (اے پی پی) بنگلہ دیش نے زمبابوے کو ٹیسٹ کے بعد ون ڈے سیریز میں بھی وائٹ واش کر دیا۔ سیریز کے پانچویں اور آخری ون ڈے میچ میں بنگال ٹائیگرز نے زمبابوے کو 5 وکٹوں سے شکست دے دی، مہمان ٹیم پہلے بیٹنگ کرتے ہوئے 128 رنز پر ڈھیر ہوگئی۔ تیج الاسلام نے کیریئر کے پہلے ون ڈے میچ میں ہیٹ ٹرک کرکے نئی تاریخ رقم کر دی، انہوں نے 4 کھلاڑیوں کو آؤٹ کیا۔ جواب میں بنگلہ دیش نے ہدف 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔ محمد اللہ نے 51 رنز کی ناقابل شکست اننگز کھیلی۔ تفصیلات کے مطابق پیر کو شیر بنگلہ نیشنل سٹیڈیم، میرپور میں پانچویں اور آخری ون ڈے میچ میں زمبابوے کے کپتان ایلٹن چگمبورا نے ٹاس جیت کر پہلے بیٹنگ کا فیصلہ کیا جو غلط ثابت ہوا۔ زمبابوے کی پوری ٹیم ڈیبیو ون ڈے کھیلنے والے نوجوان لیفٹ آرم سپنر تیج الاسلام اور شکیب الحسن کی تباہ کن باؤلنگ کے باعث 30 اوورز میں 128 رنز پر ڈھیر ہوگئی۔ ہیملٹن ماساکڈزا 52 اور ووسی سبانڈا 37 رنز کے ساتھ نمایاں رہے، ان کے علاوہ کوئی بھی بلے باز دوہرا ہندسہ عبور نہ کر سکا۔ اپنا پہلا ون ڈے کھیلنے والے تیج الاسلام نے 11 رنز کے عوض 4 وکٹیں حاصل کیں جس میں شاندار ہیٹ ٹرک بھی شامل ہے، اس طرح وہ ڈیبیو میں ہیٹ ٹرک کرنے والے دنیا کے پہلے باؤلر بن گئے ہیں۔ شکیب الحسن نے تین اور زبیر حسین نے دو وکٹیں حاصل کیں۔ جواب میں بنگلہ دیش نے ہدف 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔ محمد اللہ نے 51 رنز کی ناقابل شکست اننگز کھیل کر ٹیم کی فتح میں اہم کردار ادا کیا۔ زمبابوے کی جانب سے ٹینڈائی چتارا نے تین اور تناشے پینگارا نے دو وکٹیں حاصل کیں۔", "classification": 1, # partially_derived "domain": 1, # sports "filename": "0001.xml", "headline": "بنگال ٹائیگرز نے کمزور زمبابوے کو ٹیسٹ کے بعد ون ڈے سیریز میں بھی وائٹ واش کر دیا، پانچویں 
اور آخری ون ڈے میچ میں بنگلہ دیش 5 وکٹوں سے فتح یاب، تیج الاسلام نے ڈیبیو ون ڈے میں ہیٹ ٹرک کرکے نئی تاریخ رقم کر دی" "newsdate": "01.12.14", "newspaper": "APP", "number_of_words_with_swr": 245, "total_number_of_sentences": 15, "total_number_of_words": 352}} ``` ### Data Fields ```source```: The source document ```derived```: The derived document For each pair of source and derived documents, we have the following fields: ```filename (str)```: Name of the file in the dataset ```headline (str)```: Headline of the news item ```body (str)```: Main text of the news item ```total_number_of_words (int)```: Number of words in the document ```total_number_of_sentences (int)```: Number of sentences in the document ```number_of_words_with_swr (int)```: Number of words after stop word removal ```newspaper (str)```: The newspaper in which the news item was published ```newsdate (str)```: The date on which the news item was published, in DD.MM.YY format ```domain (int)```: The category of the news item, from this list: "business", "sports", "national", "foreign", "showbiz". ```classification (int)```: Three classes of reuse, from this list: Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND) ### Data Splits One split, train, with 600 pairs of documents. The corpus is composed of two main document types: (1) source documents and (2) derived documents. There are 1200 documents in the corpus in total: 600 are news agency articles (source documents) and 600 are newspaper stories (derived documents). The corpus contains in total 275,387 words (tokens), 21,426 unique words and 10,841 sentences. The average length of a source document is 227 words, while for derived documents it is 254 words. ## Dataset Creation ### Curation Rationale Our main intention was to develop a standard benchmark resource for the evaluation of existing systems available for text reuse detection in general, and specifically for the Urdu language. To generate a corpus with realistic examples, we opted for the field of journalism.
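The integer `domain` and `classification` labels described above can be decoded back to their names. The orderings below are taken from the `class_label` definitions in this card's `dataset_info` block; the helper itself is only an illustrative sketch, not part of any official loader.

```python
# Label orderings from the card's dataset_info class_label definitions.
DOMAINS = ["business", "sports", "national", "foreign", "showbiz"]
CLASSES = ["wholly_derived", "partially_derived", "not_derived"]

def decode_labels(doc: dict) -> dict:
    """Return a copy of a source/derived record with readable label names."""
    out = dict(doc)
    out["domain"] = DOMAINS[doc["domain"]]
    out["classification"] = CLASSES[doc["classification"]]
    return out

record = {"filename": "0001.xml", "domain": 1, "classification": 1}
decoded = decode_labels(record)  # domain "sports", classification "partially_derived"
```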
In journalism, the same news story is published in different newspapers in different forms. It is standard practice for all the newspapers (reporters and editors) to reuse (verbatim or modified) a news story released by a news agency.

### Source Data

#### Initial Data Collection and Normalization

The COUNTER corpus consists of news articles (source documents) released by five news agencies in Pakistan, i.e. Associated Press of Pakistan (APP), International News Network (INN), Independent News Pakistan (INP), News Network International (NNI) and South Asian News Agency (SANA). The corresponding news stories (derived documents) were extracted from nine daily published, large-circulation national newspapers of the All Pakistan Newspapers Society (APNS), which are subscribed to these news agencies. These include Nawa-e-Waqt, Daily Dunya, Express, Jang, Daily Waqt, Daily Insaf, Daily Aaj, Daily Islam and Daily Pakistan. All of them are part of the mainstream national press, long-established dailies with total circulation figures of over four million. News agency texts (source documents) were provided (in electronic form) by the news agencies on a daily basis when they released the news. Newspaper stories (derived documents) were collected by three volunteers over a period of six months (from July to December 2014). National, Foreign, Business, Sports and Showbiz were the domains targeted for data collection.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

The corpus has been annotated at the document level with three classes of reuse, i.e. Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND). The derived collection contains documents with various degrees of text reuse. Some of the newspaper stories (derived documents) are rewritten (either verbatim or paraphrased) from the news agency's text (source document), while others have been written by the journalists independently on their own.
In the former case, source-derived document pairs are tagged as either Wholly Derived (WD) or Partially Derived (PD), depending on the volume of text reused from the news agency's text in creating the newspaper article. In the latter case, they are tagged as Non Derived (ND), as the journalists have not reused anything from the news agency's text but have developed and documented the story based on their own observations and findings. The annotations were carried out in three phases: (1) training, (2) annotation, (3) conflict resolution. During the training phase, annotators A and B manually annotated 60 document pairs, following a preliminary version of the annotation guidelines. A detailed meeting was held afterwards to discuss the problems and disagreements. It was observed that the highest number of disagreements were between PD and ND cases, as both annotators found it difficult to distinguish between these two classes. The reason is the difficulty of judging the threshold at which a text is so heavily paraphrased, or has so much new information added, that it becomes independently written (ND). Following the discussion, the annotation guidelines were slightly revised, and the first 60 annotation results were saved. In the annotation phase, the remaining 540 document pairs were manually examined by the two annotators (A and B). Both were asked to judge and classify (at the document level), depending on the volume of text rewritten from the source (news agency article), which of the following categories a document (newspaper story) falls into:

- **Wholly Derived (WD)**: The news agency text is the only source for the reused newspaper text, which means it is a verbatim copy of the source.
In this case, most of the reused text is a word-for-word copy of the source text.
- **Partially Derived (PD)**: The newspaper text has either been derived from more than one news agency, or most of the text has been paraphrased by the editor when rewriting from the news agency source. In this case, most parts of the derived document contain paraphrased text, or new facts and figures added from the journalist's own findings.
- **Non Derived (ND)**: The news agency text has not been used in the production of the newspaper text (though words may still co-occur in both documents); it has completely different facts and figures or is heavily paraphrased from the news agency's copy. In this case, the derived document is independently written and contains far more new text.

#### Who are the annotators?

The annotations were performed by three annotators (A, B and C), who were native Urdu speakers and experts in paraphrasing mechanisms. All three were graduates, experienced in text annotation, with an advanced level of Urdu.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Dataset provided for research purposes only. Please check dataset license for additional information.

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License [(CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information

```
@Article{Sharjeel2016, author="Sharjeel, Muhammad and Nawab, Rao Muhammad Adeel and Rayson, Paul", title="COUNTER: corpus of Urdu news text reuse", journal="Language Resources and Evaluation", year="2016", pages="1--27", issn="1574-0218", doi="10.1007/s10579-016-9367-2", url="http://dx.doi.org/10.1007/s10579-016-9367-2" }
```

### Contributions

Thanks to [@arkhalid](https://github.com/arkhalid) for adding this dataset.
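As a quick, self-contained illustration of the document-pair structure described in the Data Fields section, the sketch below decodes the integer-coded `classification` and `domain` fields. The label orderings are assumptions inferred from the sample instance above (where `classification` 1 is partially derived and `domain` 1 is sports), not guarantees of any loader, so check them before relying on this:

```python
# Hedged sketch: decoding the integer-coded fields of one COUNTER pair.
# Label orderings are ASSUMPTIONS inferred from the sample instance above.
CLASSIFICATION = ["wholly_derived", "partially_derived", "non_derived"]
DOMAIN = ["business", "sports", "national", "foreign", "showbiz"]

def describe_pair(pair):
    """Return a human-readable summary for one source/derived pair."""
    derived, source = pair["derived"], pair["source"]
    return {
        "reuse_class": CLASSIFICATION[derived["classification"]],
        "domain": DOMAIN[derived["domain"]],
        "source_words": source["total_number_of_words"],
        "derived_words": derived["total_number_of_words"],
    }

# Minimal stand-in for one corpus record (numbers taken from the example above).
pair = {
    "derived": {"classification": 1, "domain": 1, "total_number_of_words": 393},
    "source": {"classification": 1, "domain": 1, "total_number_of_words": 352},
}
summary = describe_pair(pair)
```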
covid_qa_ucsd
2023-06-01T14:59:47.000Z
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:found", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "languag...
null
null
@article{ju2020CovidDialog, title={CovidDialog: Medical Dialogue Datasets about COVID-19}, author={Ju, Zeqian and Chakravorty, Subrato and He, Xuehai and Chen, Shu and Yang, Xingyi and Xie, Pengtao}, journal={ https://github.com/UCSD-AI4H/COVID-Dialogue}, year={2020} }
null
1
7
--- annotations_creators: - found language_creators: - expert-generated - found language: - en - zh license: - unknown multilinguality: - monolingual size_categories: - 1K<n<10K - n<1K source_datasets: - original task_categories: - question-answering task_ids: - closed-domain-qa pretty_name: CovidQaUcsd dataset_info: - config_name: en features: - name: dialogue_id dtype: int32 - name: dialogue_url dtype: string - name: dialogue_turns sequence: - name: speaker dtype: class_label: names: '0': Patient '1': Doctor - name: utterance dtype: string splits: - name: train num_bytes: 484944 num_examples: 572 download_size: 0 dataset_size: 484944 - config_name: zh features: - name: dialogue_id dtype: int32 - name: dialogue_url dtype: string - name: dialogue_turns sequence: - name: speaker dtype: class_label: names: '0': 病人 '1': 医生 - name: utterance dtype: string splits: - name: train num_bytes: 1352377 num_examples: 1088 download_size: 0 dataset_size: 1352377 config_names: - en - zh --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation 
Information](#citation-information) - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/UCSD-AI4H/COVID-Dialogue
- **Repository:** The data is also present in the same [GIT](https://github.com/UCSD-AI4H/COVID-Dialogue) repository
- **Paper:** https://pengtaoxie.github.io/coviddiag.pdf
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

COVID-Dialogue-Dataset-English is an English medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors, and doctors provide advice. There are 603 consultations. COVID-Dialogue-Dataset-Chinese is a Chinese medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors, and doctors provide advice. There are 1393 consultations. Each dataset is distributed as a single text file: COVID-Dialogue-Dataset-Chinese.txt for Chinese and COVID-Dialogue-Dataset-English.txt for English.

### Supported Tasks and Leaderboards

Used for QA tasks. There is also a COVID-19 dialogue generation model available for the Chinese data. The pre-print and more information are available in [this arXiv pre-print](https://arxiv.org/abs/2005.05442).

### Languages

Each configuration is monolingual: the datasets are in English (EN) and Chinese (ZH).

## Dataset Structure

### Data Instances

An example of a dialogue is:

```
{ 'dialogue_id': 602, 'dialogue_url': 'https://www.healthtap.com/member/fg?page=/search/covid', 'dialogue_turns': [{'speaker': 'Patient', 'utterance': 'Can coronavirus symptoms be mild for some people versus severe? For example, could it just involve being very fatigued, low grade fever for a few days and not the extreme symptoms? Or is it always a full blown cold and struggle to breathe?Can coronavirus symptoms be mild for some people versus severe?
For example, could it just involve being very fatigued, low grade fever for a few days and not the extreme symptoms? Or is it always a full blown cold and struggle to breathe?'}, {'speaker': 'Doctor', 'utterance': 'In brief: Symptoms vary. Some may have no symptoms at all. Some can be life threatening. Would you like to video or text chat with me?'}] }
```

The dataset is built from [icliniq.com](https://www.icliniq.com/), [healthcaremagic.com](https://www.healthcaremagic.com/), [healthtap.com](https://www.healthtap.com/) and all copyrights of the data belong to these websites. _(for English)_

The dataset is built from [Haodf.com](https://www.haodf.com/) and all copyrights of the data belong to [Haodf.com](https://www.haodf.com/). _(for Chinese)_

### Data Fields

Each consultation consists of:

- ID
- URL
- Description of the patient's medical condition
- Dialogue
- Diagnosis and suggestions (optional, mostly for Chinese)

For generating the QA data, only the following fields have been considered:

- ID: Consultation identifier (restarts for each file)
- URL: The URL of the extracted conversation
- Dialogue: The conversation between the doctor and the patient

In the prepared dataset, each item is represented with the following parameters:

- "file_name": string - signifies the file from which the conversation was extracted
- "dialogue_id": int32 - the dialogue id
- "dialogue_url": string - URL of the conversation
- "dialogue_turns": datasets.Sequence - sequence of turns between the patient and the doctor. Each turn consists of a "speaker" ClassLabel(names=["病人", "医生"]) and an "utterance" (string). (ClassLabel(names=["Patient", "Doctor"]) for English.)

### Data Splits

There are no data splits in the original data; only a single train split is provided.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?
[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@article{ju2020CovidDialog, title={CovidDialog: Medical Dialogue Datasets about COVID-19}, author={Ju, Zeqian and Chakravorty, Subrato and He, Xuehai and Chen, Shu and Yang, Xingyi and Xie, Pengtao}, journal={ https://github.com/UCSD-AI4H/COVID-Dialogue}, year={2020} }
```

### Contributions

Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
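As a self-contained sketch of consuming the per-turn structure shown in the Data Instances section, the helper below pairs each Patient utterance with the Doctor reply that follows it. It assumes the decoded string speaker labels from the example (with raw `ClassLabel` integers, 0 is Patient and 1 is Doctor per the declared features); the sample record is a shortened stand-in, not a real consultation:

```python
# Sketch (not an official loader API): pair each Patient utterance with the
# Doctor reply that follows it, using the per-turn layout from the sample above.

def qa_pairs(dialogue):
    """Return (patient_utterance, doctor_utterance) tuples for adjacent turns."""
    turns = dialogue["dialogue_turns"]
    return [
        (cur["utterance"], nxt["utterance"])
        for cur, nxt in zip(turns, turns[1:])
        if cur["speaker"] == "Patient" and nxt["speaker"] == "Doctor"
    ]

# Shortened stand-in mirroring the structure of the example instance.
dialogue = {
    "dialogue_id": 602,
    "dialogue_turns": [
        {"speaker": "Patient",
         "utterance": "Can coronavirus symptoms be mild for some people?"},
        {"speaker": "Doctor",
         "utterance": "In brief: Symptoms vary. Some may have no symptoms at all."},
    ],
}
pairs = qa_pairs(dialogue)  # one (question, answer) tuple
```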
curiosity_dialogs
2023-01-25T14:28:58.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "co...
null
This dataset contains 14K dialogs (181K utterances) where users and assistants converse about geographic topics like geopolitical entities and locations. This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.
@inproceedings{rodriguez2020curiosity, title = {Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity}, author = {Pedro Rodriguez and Paul Crook and Seungwhan Moon and Zhiguang Wang}, year = 2020, booktitle = {Empirical Methods in Natural Language Processing} }
null
6
7
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - cc-by-nc-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - dialogue-modeling paperswithcode_id: curiosity pretty_name: Curiosity Dataset tags: - conversational-curiosity dataset_info: features: - name: messages sequence: - name: message dtype: string - name: liked dtype: class_label: names: '0': 'False' '1': 'True' - name: sender dtype: class_label: names: '0': user '1': assistant - name: facts sequence: - name: fid dtype: int32 - name: used dtype: class_label: names: '0': 'False' '1': 'True' - name: source dtype: class_label: names: '0': section '1': known '2': random - name: message_id dtype: string - name: dialog_acts sequence: string - name: known_entities sequence: string - name: focus_entity dtype: string - name: dialog_id dtype: int32 - name: inferred_steps dtype: class_label: names: '0': 'False' '1': 'True' - name: created_time dtype: int64 - name: aspects sequence: string - name: first_aspect dtype: string - name: second_aspect dtype: string - name: shuffle_facts dtype: class_label: names: '0': 'False' '1': 'True' - name: related_entities sequence: string - name: tag dtype: string - name: user_id dtype: int32 - name: assistant_id dtype: int32 - name: is_annotated dtype: class_label: names: '0': 'False' '1': 'True' - name: user_dialog_rating dtype: int32 - name: user_other_agent_rating dtype: int32 - name: assistant_dialog_rating dtype: int32 - name: assistant_other_agent_rating dtype: int32 - name: reported dtype: class_label: names: '0': 'False' '1': 'True' - name: annotated dtype: class_label: names: '0': 'False' '1': 'True' config_name: curiosity_dialogs splits: - name: train num_bytes: 37198297 num_examples: 10287 - name: val num_bytes: 4914487 num_examples: 1287 - name: test num_bytes: 4915613 num_examples: 1287 - name: test_zero num_bytes: 4333191 
num_examples: 1187 download_size: 92169165 dataset_size: 51361588 --- # Dataset Card for Curiosity Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Curiosity Dataset Homepage](https://www.pedro.ai/curiosity) - **Repository:** [Curiosity Dataset Repository](https://github.com/facebookresearch/curiosity) - **Paper:** [ACL Anthology](https://www.aclweb.org/anthology/2020.emnlp-main.655/) - **Point of Contact:** [Pedro Rodriguez](https://mailhide.io/e/wbfjM) ### Dataset Summary Curiosity dataset consists of 14K English dialogs (181K utterances) where users and assistants converse about geographic topics like geopolitical entities and locations. This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages. 
### Supported Tasks and Leaderboards

* `text-generation-other-conversational-curiosity`: The dataset can be used to train a model for Conversational Curiosity, which tests the hypothesis that engagement increases when users are presented with facts related to what they know. Success on this task is typically measured by achieving a *high* [Accuracy](https://huggingface.co/metrics/accuracy) and [F1 Score](https://huggingface.co/metrics/f1).

### Languages

The text in the dataset is in English, collected by crowd-sourcing. The associated BCP-47 code is `en`.

## Dataset Structure

### Data Instances

A typical data point consists of dialogs between a user and an assistant, followed by the different attributes of the particular dialog. An example from the Curiosity Dataset train set looks as follows:

```
{'annotated': 1, 'aspects': ['Media', 'Politics and government'], 'assistant_dialog_rating': 5, 'assistant_id': 341, 'assistant_other_agent_rating': 5, 'created_time': 1571783665, 'dialog_id': 21922, 'first_aspect': 'Media', 'focus_entity': 'Namibia', 'inferred_steps': 1, 'is_annotated': 0, 'known_entities': ['South Africa', 'United Kingdom', 'Portugal'], 'messages': {'dialog_acts': [['request_topic'], ['inform_response'], ['request_aspect'], ['inform_response'], ['request_followup'], ['inform_response'], ['request_aspect', 'feedback_positive'], ['inform_response'], ['request_followup'], ['inform_response'], [], []], 'facts': [{'fid': [], 'source': [], 'used': []}, {'fid': [77870, 77676, 77816, 77814, 77775, 77659, 77877, 77785, 77867], 'source': [0, 1, 2, 2, 0, 2, 0, 1, 1], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]}, {'fid': [], 'source': [], 'used': []}, {'fid': [77725, 77870, 77676, 77863, 77814, 77775, 77659, 77877, 77867], 'source': [2, 0, 1, 1, 2, 0, 2, 0, 1], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]}, {'fid': [], 'source': [], 'used': []}, {'fid': [77694, 77661, 77863, 77780, 77671, 77704, 77869, 77693, 77877], 'source': [1, 2, 1, 0, 2, 2, 0, 1,
0], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 1]}, {'fid': [], 'source': [], 'used': []}, {'fid': [77816, 77814, 77864, 77659, 77877, 77803, 77738, 77784, 77789], 'source': [2, 2, 0, 2, 0, 1, 1, 0, 1], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]}, {'fid': [], 'source': [], 'used': []}, {'fid': [77694, 77776, 77780, 77696, 77707, 77693, 77778, 77702, 77743], 'source': [1, 0, 0, 2, 1, 1, 0, 2, 2], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]}, {'fid': [], 'source': [], 'used': []}, {'fid': [77662, 77779, 77742, 77734, 77663, 77777, 77702, 77731, 77778], 'source': [1, 0, 2, 1, 2, 0, 2, 1, 0], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 1]}], 'liked': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'message': ['Hi. I want information about Namibia.', 'Nmbia is a country in southern Africa.', 'Do you have information about the media there?', 'A mentional amount of foriegn', 'What about it?', "Media and journalists in Namibia are represented by the Namibia chapter of the Media Institute of 'southern Africa and the Editors Forum of Namibia.", 'Interesting! What can you tell me about the politics and government?', 'Namibia formed the Namibian Defence Force, comprising former enemies in a 23-year bush war.', 'Do you have more information about it?', "With a small army and a fragile economy , the Namibian government's principal foreign policy concern is developing strengthened ties within the Southern African region.", "That's all I wanted to know. 
Thank you!", 'My pleasure!'], 'message_id': ['617343895', '2842515356', '4240816985', '520711081', '1292358002', '3677078227', '1563061125', '1089028270', '1607063839', '113037558', '1197873991', '1399017322'], 'sender': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]}, 'related_entities': ['Western Roman Empire', 'United Kingdom', 'Portuguese language', 'Southern African Development Community', 'South Africa', 'Kalahari Desert', 'Namib Desert', 'League of Nations', 'Afrikaans', 'Sub-Saharan Africa', 'Portugal', 'South-West Africa', 'Warmbad, Namibia', 'German language', 'NBC'], 'reported': 0, 'second_aspect': 'Politics and government', 'shuffle_facts': 1, 'tag': 'round_2', 'user_dialog_rating': 5, 'user_id': 207, 'user_other_agent_rating': 5}
```

### Data Fields

* `messages`: List of messages exchanged between the user and the assistant, with their associated attributes
  * `dialog_acts`: List of actions performed in the dialogs
  * `facts`: List of facts returned by the assistant
    * `fid`: Fact ID
    * `source`: Source for the fact
    * `used`: Whether facts were used before in the same dialog
  * `liked`: List of values indicating whether each message was liked
  * `message`: List of messages between the user and the assistant
  * `message_id`: Message ID
  * `sender`: Message sender (0 = user, 1 = assistant)
* `known_entities`: Rooted facts about entities the user knows
* `focus_entity`: Entity in focus in the dialogs
* `dialog_id`: Dialog ID
* `inferred_steps`: Number of inferred steps
* `created_time`: Time of creation of the dialog
* `aspects`: List of two aspects which the dialog is about
* `first_aspect`: First aspect
* `second_aspect`: Second aspect
* `shuffle_facts`: Whether facts were shuffled
* `related_entities`: List of fifteen entities related to the focus entity
* `tag`: Conversation tag
* `user_id`: User ID
* `assistant_id`: Assistant ID
* `is_annotated`: 0 or 1 (More Information Needed)
* `user_dialog_rating`: 1 - 5 (More Information Needed)
* `user_other_agent_rating`: 1 - 5 (More
Information Needed)
* `assistant_dialog_rating`: 1 - 5 (More Information Needed)
* `assistant_other_agent_rating`: 1 - 5 (More Information Needed)
* `reported`: Whether the dialog was reported inappropriate
* `annotated`: 0 or 1 (More Information Needed)

### Data Splits

The data is split into a training, validation, test and test_zero set as per the original dataset split.

| | train | validation | test | test_zero |
|-----------------------|------:|-----------:|-----:|----------:|
| Input dialog examples | 10287 | 1287 | 1287 | 1187 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/legalcode)

### Citation Information

```
@inproceedings{rodriguez2020curiosity, title = {Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity}, author = {Pedro Rodriguez and Paul Crook and Seungwhan Moon and Zhiguang Wang}, year = 2020, booktitle = {Empirical Methods in Natural Language Processing} }
```

### Contributions

Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset.
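The `messages` field is stored as parallel lists (one entry per message), as in the data instance above. The sketch below is one way to turn it back into per-turn records; the sender encoding (0 = user, 1 = assistant) follows the features declared in the card, and the sample messages are shortened stand-ins:

```python
# Sketch: rebuild per-turn records from the columnar `messages` field
# (parallel lists, one entry per message, as in the instance above).
# Sender encoding follows the declared features: 0 == user, 1 == assistant.

def iter_turns(messages):
    for sender, text, liked in zip(
        messages["sender"], messages["message"], messages["liked"]
    ):
        yield {
            "speaker": "assistant" if sender == 1 else "user",
            "text": text,
            "liked": bool(liked),
        }

# Shortened stand-in with the same parallel-list shape as the real field.
msgs = {
    "sender": [0, 1],
    "message": [
        "Hi. I want information about Namibia.",
        "Namibia is a country in southern Africa.",
    ],
    "liked": [0, 1],
}
turns = list(iter_turns(msgs))
```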
fake_news_english
2023-05-30T04:42:32.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
null
Fake news has become a major societal issue and a technical challenge for social media companies to identify. This content is difficult to identify because the term "fake news" covers intentionally false, deceptive stories as well as factual errors, satire, and sometimes, stories that a person just does not like. Addressing the problem requires clear definitions and examples. In this work, we present a dataset of fake news and satire stories that are hand coded, verified, and, in the case of fake news, include rebutting stories. We also include a thematic content analysis of the articles, identifying major themes that include hyperbolic support or condemnation of a figure, conspiracy theories, racist themes, and discrediting of reliable sources. In addition to releasing this dataset for research use, we analyze it and show results based on language that are promising for classification purposes. Overall, our contribution of a dataset and initial analysis are designed to support future work by fake news researchers.
@inproceedings{inproceedings, author = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul}, year = {2018}, month = {05}, pages = {17-21}, title = {Fake News vs Satire: A Dataset and Analysis}, doi = {10.1145/3201064.3201100} }
null
0
7
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - unknown multilinguality: - monolingual size_categories: - n<1K source_datasets: - original task_categories: - text-classification task_ids: [] pretty_name: Fake News English dataset_info: features: - name: article_number dtype: int32 - name: url_of_article dtype: string - name: fake_or_satire dtype: class_label: names: '0': Satire '1': Fake - name: url_of_rebutting_article dtype: string splits: - name: train num_bytes: 78078 num_examples: 492 download_size: 3002233 dataset_size: 78078 --- # Dataset Card for Fake News English ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://dl.acm.org/doi/10.1145/3201064.3201100** - **Repository:** https://github.com/jgolbeck/fakenews/ - **Paper:** https://doi.org/10.1145/3201064.3201100 - **Leaderboard:** - **Point of Contact:** Jennifer Golbeck (http://www.jengolbeck.com) ### Dataset Summary This dataset contains URLs of news 
articles classified as either fake or satire. The articles classified as fake also have the URL of a rebutting article.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English

## Dataset Structure

### Data Instances

```
{
  "article_number": 102,
  "url_of_article": "https://newslo.com/roger-stone-blames-obama-possibility-trump-alzheimers-attacks-president-caused-severe-stress/",
  "fake_or_satire": 1,  # Fake
  "url_of_rebutting_article": "https://www.snopes.com/fact-check/donald-trumps-intelligence-quotient/"
}
```

### Data Fields

- article_number: An integer used as an index for each row
- url_of_article: A string containing the URL of an article to be assessed and classified as either Fake or Satire
- fake_or_satire: A ClassLabel for the above article, which can take two values: Satire (0) and Fake (1)
- url_of_rebutting_article: A string containing the URL of the article used to refute the article in question (present in url_of_article)

### Data Splits

This dataset is not split; only the train split is available.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Golbeck, Jennifer; Everett, Jennine; Falak, Waleed; Gieringer, Carl; Graney, Jack; Hoffman, Kelly; Huth, Lindsay; Ma, Zhenya; Jha, Mayanka; Khan, Misbah; Kori, Varsha; Mauriello, Matthew; Lewis, Elo; Mirano, George; IV, William; Mussenden, Sean; Nelson, Tammie; Mcwillie, Sean; Pant, Akshat; Cheakalos, Paul

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{inproceedings, author = {Golbeck, Jennifer and Everett, Jennine and Falak, Waleed and Gieringer, Carl and Graney, Jack and Hoffman, Kelly and Huth, Lindsay and Ma, Zhenya and Jha, Mayanka and Khan, Misbah and Kori, Varsha and Mauriello, Matthew and Lewis, Elo and Mirano, George and IV, William and Mussenden, Sean and Nelson, Tammie and Mcwillie, Sean and Pant, Akshat and Cheakalos, Paul}, year = {2018}, month = {05}, pages = {17-21}, title = {Fake News vs Satire: A Dataset and Analysis}, doi = {10.1145/3201064.3201100} }
```

### Contributions

Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
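As a small, self-contained illustration of the fields above, the sketch below decodes the `fake_or_satire` ClassLabel (0 = Satire, 1 = Fake, per the features) and keeps the rebutting URL only for Fake rows. The URLs used here are placeholder stand-ins, not rows from the dataset:

```python
# Sketch: decode the `fake_or_satire` ClassLabel (0 == Satire, 1 == Fake, per
# the features above); only Fake rows carry a rebutting-article URL.
# The URLs below are placeholder stand-ins, not real dataset rows.

LABELS = ["Satire", "Fake"]

def decode(row):
    """Return the row with its label decoded; rebuttal kept only for Fake."""
    label = LABELS[row["fake_or_satire"]]
    rebuttal = row["url_of_rebutting_article"] if label == "Fake" else None
    return {"article": row["url_of_article"], "label": label, "rebuttal": rebuttal}

row = {
    "article_number": 1,
    "url_of_article": "https://example.com/some-article",
    "fake_or_satire": 1,  # Fake
    "url_of_rebutting_article": "https://example.com/rebuttal",
}
decoded = decode(row)
```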
hansards
2023-04-05T10:07:00.000Z
[ "region:us" ]
null
This release contains 1.3 million pairs of aligned text chunks (sentences or smaller fragments) from the official records (Hansards) of the 36th Canadian Parliament. The complete Hansards of the debates in the House and Senate of the 36th Canadian Parliament, as far as available, were aligned. The corpus was then split into 5 sets of sentence pairs: training (80% of the sentence pairs), two sets of sentence pairs for testing (5% each), and two sets of sentence pairs for final evaluation (5% each). The current release consists of the training and testing sets. The evaluation sets are reserved for future MT evaluation purposes and currently not available. Caveats 1. This release contains only sentence pairs. Even though the order of the sentences is the same as in the original, there may be gaps resulting from many-to-one, many-to-many, or one-to-many alignments that were filtered out. Therefore, this release may not be suitable for discourse-related research. 2. Neither the sentence splitting nor the alignments are perfect. In particular, watch out for pairs that differ considerably in length. You may want to filter these out before you do any statistical training. The alignment of the Hansards was performed as part of the ReWrite project under funding from the DARPA TIDES program.
null
0
7
--- paperswithcode_id: null pretty_name: hansards dataset_info: - config_name: senate features: - name: fr dtype: string - name: en dtype: string splits: - name: test num_bytes: 5711686 num_examples: 25553 - name: train num_bytes: 40324278 num_examples: 182135 download_size: 15247360 dataset_size: 46035964 - config_name: house features: - name: fr dtype: string - name: en dtype: string splits: - name: test num_bytes: 22906629 num_examples: 122290 - name: train num_bytes: 191459584 num_examples: 947969 download_size: 67584000 dataset_size: 214366213 --- # Dataset Card for "hansards" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.isi.edu/natural-language/download/hansard/](https://www.isi.edu/natural-language/download/hansard/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 82.83 MB - **Size of the generated dataset:** 260.40 MB - **Total amount of disk used:** 343.23 MB ### Dataset Summary This release contains 1.3 million pairs of aligned text chunks (sentences or smaller fragments) from the official records (Hansards) of the 36th Canadian Parliament. The complete Hansards of the debates in the House and Senate of the 36th Canadian Parliament, as far as available, were aligned. The corpus was then split into 5 sets of sentence pairs: training (80% of the sentence pairs), two sets of sentence pairs for testing (5% each), and two sets of sentence pairs for final evaluation (5% each). The current release consists of the training and testing sets. The evaluation sets are reserved for future MT evaluation purposes and currently not available. Caveats 1. This release contains only sentence pairs. Even though the order of the sentences is the same as in the original, there may be gaps resulting from many-to-one, many-to-many, or one-to-many alignments that were filtered out. Therefore, this release may not be suitable for discourse-related research. 2. Neither the sentence splitting nor the alignments are perfect. In particular, watch out for pairs that differ considerably in length. You may want to filter these out before you do any statistical training. The alignment of the Hansards was performed as part of the ReWrite project under funding from the DARPA TIDES program. 
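The second caveat above recommends filtering out sentence pairs that differ considerably in length before statistical training. A minimal sketch of such a filter on the `fr`/`en` fields of this dataset (the whitespace token count and the 2.0 ratio threshold are illustrative choices, not part of the release):

```python
def length_ratio_ok(pair, max_ratio=2.0):
    """Keep a pair only if neither side is more than max_ratio times longer
    (in whitespace tokens) than the other."""
    fr_len = len(pair["fr"].split())
    en_len = len(pair["en"].split())
    if min(fr_len, en_len) == 0:
        return False
    return max(fr_len, en_len) / min(fr_len, en_len) <= max_ratio

# Sample pairs: the first is the card's example instance, the second is a
# made-up misaligned pair of the kind the caveat warns about.
pairs = [
    {"fr": "M. Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):",
     "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):"},
    {"fr": "Bravo !",
     "en": "Some hon. members: Hear, hear, and further applause from the gallery."},
]
filtered = [p for p in pairs if length_ratio_ok(p)]  # keeps only the first pair
```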
### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### house - **Size of downloaded dataset files:** 67.58 MB - **Size of the generated dataset:** 214.37 MB - **Total amount of disk used:** 281.95 MB An example of 'train' looks as follows. ``` { "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):", "fr": "M. Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):" } ``` #### senate - **Size of downloaded dataset files:** 15.25 MB - **Size of the generated dataset:** 46.03 MB - **Total amount of disk used:** 61.28 MB An example of 'train' looks as follows. ``` { "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):", "fr": "M. Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):" } ``` ### Data Fields The data fields are the same among all splits. #### house - `fr`: a `string` feature. - `en`: a `string` feature. #### senate - `fr`: a `string` feature. - `en`: a `string` feature. ### Data Splits | name |train | test | |------|-----:|-----:| |house |947969|122290| |senate|182135| 25553| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
kor_sae
2023-01-25T14:34:03.000Z
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ko", "license:cc-by-sa-4.0", "arxiv:1912.00342"...
null
This new dataset is designed to extract intent from non-canonical directives which will help dialog managers extract intent from user dialog that may have no clear objective or are paraphrased forms of utterances.
@article{cho2019machines, title={Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives}, author={Cho, Won Ik and Moon, Young Ki and Moon, Sangwhan and Kim, Seok Min and Kim, Nam Soo}, journal={arXiv preprint arXiv:1912.00342}, year={2019} }
null
2
7
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - ko license: - cc-by-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - intent-classification pretty_name: Structured Argument Extraction for Korean dataset_info: features: - name: intent_pair1 dtype: string - name: intent_pair2 dtype: string - name: label dtype: class_label: names: '0': yes/no '1': alternative '2': wh- questions '3': prohibitions '4': requirements '5': strong requirements splits: - name: train num_bytes: 2885167 num_examples: 30837 download_size: 2545926 dataset_size: 2885167 --- # Dataset Card for Structured Argument Extraction for Korean ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Structured Argument Extraction for Korean](https://github.com/warnikchow/sae4k) - **Repository:** [Structured Argument Extraction for Korean](https://github.com/warnikchow/sae4k) - 
**Paper:** [Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives](https://arxiv.org/abs/1912.00342) - **Point of Contact:** [Won Ik Cho](wicho@hi.snu.ac.kr) ### Dataset Summary The Structured Argument Extraction for Korean dataset is a set of question-argument and command-argument pairs with their respective question type label and negativeness label. Oftentimes, agents like Alexa or Siri encounter conversations without a clear objective from the user. The goal of this dataset is to extract the intent argument of a given utterance pair without a clear directive. This may yield a more robust agent capable of parsing more non-canonical forms of speech. ### Supported Tasks and Leaderboards * `intent_classification`: A Transformer like [BERT](https://huggingface.co/bert-base-uncased) can be trained on the dataset to classify the intent argument of a question/command pair in Korean, and its performance can be measured by BERTScore. ### Languages The text in the dataset is in Korean and the associated BCP-47 code is `ko-KR`. ## Dataset Structure ### Data Instances An example data instance contains a question or command pair and its label: ``` { "intent_pair1": "내일 오후 다섯시 조별과제 일정 추가해줘", "intent_pair2": "내일 오후 다섯시 조별과제 일정 추가하기", "label": 4 } ``` ### Data Fields * `intent_pair1`: a question/command pair * `intent_pair2`: a corresponding question/command pair * `label`: determines the intent argument of the pair and can be one of `yes/no` (0), `alternative` (1), `wh- questions` (2), `prohibitions` (3), `requirements` (4) and `strong requirements` (5) ### Data Splits The corpus contains 30,837 examples. ## Dataset Creation ### Curation Rationale The Structured Argument Extraction for Korean dataset was curated to help train models to extract intent arguments from utterances without a clear objective or when the user uses non-canonical forms of speech. 
This is especially helpful for Korean because, unlike in English, where the who, what, where, when and why usually come at the beginning of an utterance, this is not necessarily the case in Korean. For low-resource languages, this lack of data can be a bottleneck for comprehension performance. ### Source Data #### Initial Data Collection and Normalization The corpus was taken from the one constructed by [Cho et al.](https://arxiv.org/abs/1811.04231), a Korean single utterance corpus for identifying directives/non-directives that contains a wide variety of non-canonical directives. #### Who are the source language producers? Korean speakers are the source language producers. ### Annotations #### Annotation process Utterances were categorized as question or command arguments and then further classified according to their intent argument. #### Who are the annotators? The annotation was done by three Korean natives with a background in computational linguistics. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset is curated by Won Ik Cho, Young Ki Moon, Sangwhan Moon, Seok Min Kim and Nam Soo Kim. ### Licensing Information The dataset is licensed under the CC BY-SA-4.0. ### Citation Information ``` @article{cho2019machines, title={Machines Getting with the Program: Understanding Intent Arguments of Non-Canonical Directives}, author={Cho, Won Ik and Moon, Young Ki and Moon, Sangwhan and Kim, Seok Min and Kim, Nam Soo}, journal={arXiv preprint arXiv:1912.00342}, year={2019} } ``` ### Contributions Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
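The integer labels above can be mapped back to their class names with a small sketch. The list below mirrors the class names declared in this card's `dataset_info` metadata, and the sample record is the instance shown in the card:

```python
# Class names in index order, as declared in the card's dataset_info metadata.
LABEL_NAMES = [
    "yes/no", "alternative", "wh- questions",
    "prohibitions", "requirements", "strong requirements",
]

def label_name(label_id):
    """Translate an integer intent label into its human-readable class name."""
    return LABEL_NAMES[label_id]

example = {
    "intent_pair1": "내일 오후 다섯시 조별과제 일정 추가해줘",
    "intent_pair2": "내일 오후 다섯시 조별과제 일정 추가하기",
    "label": 4,
}
print(label_name(example["label"]))  # requirements
```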
opus_finlex
2022-11-03T16:08:11.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:fi", "language:sv", "license:unknown", "region:us" ]
null
The Finlex Data Base is a comprehensive collection of legislative and other judicial information of Finland, which is available in Finnish, Swedish and partially in English. This corpus is taken from the Semantic Finlex service that provides the Finnish and Swedish data as linked open data and also raw XML files.
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
null
1
7
--- annotations_creators: - found language_creators: - found language: - fi - sv license: - unknown multilinguality: - translation size_categories: - 1M<n<10M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusFinlex dataset_info: features: - name: translation dtype: translation: languages: - fi - sv config_name: fi-sv splits: - name: train num_bytes: 610550215 num_examples: 3114141 download_size: 153886554 dataset_size: 610550215 --- # Dataset Card for [opus_finlex] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:**[Finlex](http://opus.nlpl.eu/Finlex.php) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Finlex Data Base is a comprehensive collection of legislative and other judicial information of Finland, which is available in Finnish, Swedish and partially in English. 
This corpus is taken from the Semantic Finlex service that provides the Finnish and Swedish data as linked open data and also raw XML files. ### Supported Tasks and Leaderboards The underlying task is machine translation for the Finnish-Swedish language pair. ### Languages Swedish and Finnish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) ### Contributions Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
opus_paracrawl
2023-06-01T14:59:53.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:bg", "language:ca", "language:cs", "language:da",...
null
Parallel corpora from Web Crawls collected in the ParaCrawl project. 42 languages, 43 bitexts total number of files: 59,996 total number of tokens: 56.11G total number of sentence fragments: 3.13G
null
null
5
7
--- annotations_creators: - found language_creators: - found language: - bg - ca - cs - da - de - el - en - es - et - eu - fi - fr - ga - gl - hr - hu - is - it - km - ko - lt - lv - mt - my - nb - ne - nl - nn - pl - pt - ro - ru - si - sk - sl - so - sv - sw - tl - uk - zh license: - cc0-1.0 multilinguality: - multilingual size_categories: - 100K<n<1M - 10K<n<100K - 1M<n<10M source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: OpusParaCrawl dataset_info: - config_name: el-en features: - name: id dtype: string - name: translation dtype: translation: languages: - el - en splits: - name: train num_bytes: 6760375061 num_examples: 21402471 download_size: 2317102846 dataset_size: 6760375061 - config_name: en-ha features: - name: id dtype: string - name: translation dtype: translation: languages: - en - ha splits: - name: train num_bytes: 4618460 num_examples: 19694 download_size: 1757433 dataset_size: 4618460 - config_name: en-ig features: - name: id dtype: string - name: translation dtype: translation: languages: - en - ig splits: - name: train num_bytes: 6709030 num_examples: 28829 download_size: 2691716 dataset_size: 6709030 - config_name: en-km features: - name: id dtype: string - name: translation dtype: translation: languages: - en - km splits: - name: train num_bytes: 31964493 num_examples: 65115 download_size: 9907279 dataset_size: 31964493 - config_name: en-so features: - name: id dtype: string - name: translation dtype: translation: languages: - en - so splits: - name: train num_bytes: 5791003 num_examples: 14880 download_size: 2227727 dataset_size: 5791003 - config_name: de-pl features: - name: id dtype: string - name: translation dtype: translation: languages: - de - pl splits: - name: train num_bytes: 298637031 num_examples: 916643 download_size: 106891602 dataset_size: 298637031 - config_name: fr-nl features: - name: id dtype: string - name: translation dtype: translation: languages: - fr - nl 
splits: - name: train num_bytes: 862303220 num_examples: 2687673 download_size: 319804705 dataset_size: 862303220 - config_name: en-sw features: - name: id dtype: string - name: translation dtype: translation: languages: - en - sw splits: - name: train num_bytes: 44264442 num_examples: 132520 download_size: 18611087 dataset_size: 44264442 - config_name: en-tl features: - name: id dtype: string - name: translation dtype: translation: languages: - en - tl splits: - name: train num_bytes: 82502798 num_examples: 248689 download_size: 32933118 dataset_size: 82502798 - config_name: es-gl features: - name: id dtype: string - name: translation dtype: translation: languages: - es - gl splits: - name: train num_bytes: 582660901 num_examples: 1879689 download_size: 236696353 dataset_size: 582660901 config_names: - de-pl - el-en - en-ha - en-ig - en-km - en-so - en-sw - en-tl - es-gl - fr-nl --- # Dataset Card for OpusParaCrawl ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
http://opus.nlpl.eu/ParaCrawl.php - **Repository:** None - **Paper:** [ParaCrawl: Web-Scale Acquisition of Parallel Corpora](https://aclanthology.org/2020.acl-main.417/) - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary Parallel corpora from Web Crawls collected in the ParaCrawl project. The dataset contains: - 42 languages, 43 bitexts - total number of files: 59,996 - total number of tokens: 56.11G - total number of sentence fragments: 3.13G To load a language pair that isn't part of the preset configurations, specify the two language codes, e.g. ```python from datasets import load_dataset dataset = load_dataset("opus_paracrawl", lang1="en", lang2="so") ``` You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/ParaCrawl.php ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The languages in the dataset are: - bg - ca - cs - da - de - el - en - es - et - eu - fi - fr - ga - gl - hr - hu - is - it - km - ko - lt - lv - mt - my - nb - ne - nl - nn - pl - pt - ro - ru - si - sk - sl - so - sv - sw - tl - uk - zh ## Dataset Structure ### Data Instances ``` { 'id': '0', 'translation': { "el": "Συνεχίστε ευθεία 300 μέτρα μέχρι να καταλήξουμε σε μια σωστή οδός (ul. Gagarina)? Περπατήστε περίπου 300 μέτρα μέχρι να φτάσετε το πρώτο ορθή οδός (ul Khotsa Namsaraeva)?", "en": "Go straight 300 meters until you come to a proper street (ul. Gagarina); Walk approximately 300 meters until you reach the first proper street (ul Khotsa Namsaraeva);" } } ``` ### Data Fields - `id` (`str`): Unique identifier of the parallel sentence for the pair of languages. - `translation` (`dict`): Parallel sentences for the pair of languages. ### Data Splits The dataset contains a single `train` split. 
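Each row stores its parallel sentences in the `translation` dict keyed by language code. A minimal sketch of pulling a source-target pair out of the sample instance above (the `to_pair` helper is illustrative, not part of the `datasets` API):

```python
def to_pair(example, src, tgt):
    """Extract a (source, target) sentence tuple from an OPUS-style translation record."""
    t = example["translation"]
    return t[src], t[tgt]

# The sample instance from the card above.
record = {
    "id": "0",
    "translation": {
        "el": "Συνεχίστε ευθεία 300 μέτρα μέχρι να καταλήξουμε σε μια σωστή οδός (ul. Gagarina)? Περπατήστε περίπου 300 μέτρα μέχρι να φτάσετε το πρώτο ορθή οδός (ul Khotsa Namsaraeva)?",
        "en": "Go straight 300 meters until you come to a proper street (ul. Gagarina); Walk approximately 300 meters until you reach the first proper street (ul Khotsa Namsaraeva);",
    },
}
src_text, tgt_text = to_pair(record, "el", "en")
```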
## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information - Creative commons CC0 (no rights reserved) ### Citation Information ```bibtex @inproceedings{banon-etal-2020-paracrawl, title = "{P}ara{C}rawl: Web-Scale Acquisition of Parallel Corpora", author = "Ba{\~n}{\'o}n, Marta and Chen, Pinzhen and Haddow, Barry and Heafield, Kenneth and Hoang, Hieu and Espl{\`a}-Gomis, Miquel and Forcada, Mikel L. 
and Kamran, Amir and Kirefu, Faheem and Koehn, Philipp and Ortiz Rojas, Sergio and Pla Sempere, Leopoldo and Ram{\'\i}rez-S{\'a}nchez, Gema and Sarr{\'\i}as, Elsa and Strelec, Marek and Thompson, Brian and Waites, William and Wiggins, Dion and Zaragoza, Jaume", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.acl-main.417", doi = "10.18653/v1/2020.acl-main.417", pages = "4555--4567", } ``` ```bibtex @InProceedings{TIEDEMANN12.463, author = {Jörg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Uğur Doğan and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} } ``` ### Contributions Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
etalab-ia/piaf
2022-11-03T16:31:15.000Z
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fr", "license:mit", "region:us" ]
etalab-ia
Piaf is a reading comprehension dataset. This version, published in February 2020, contains 3835 questions on French Wikipedia.
@InProceedings{keraron-EtAl:2020:LREC, author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo}, title = {Project PIAF: Building a Native French Question-Answering Dataset}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {5483--5492}, abstract = {Motivated by the lack of data for non-English languages, in particular for the evaluation of downstream tasks such as Question Answering, we present a participatory effort to collect a native French Question Answering Dataset. Furthermore, we describe and publicly release the annotation tool developed for our collection effort, along with the data obtained and preliminary baselines.}, url = {https://www.aclweb.org/anthology/2020.lrec-1.673} }
null
6
7
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - fr language_bcp47: - fr-FR license: - mit multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering task_ids: - extractive-qa - open-domain-qa paperswithcode_id: null pretty_name: Piaf dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 config_name: plain_text splits: - name: train num_bytes: 3332905 num_examples: 3835 download_size: 1370384 dataset_size: 3332905 --- # Dataset Card for Piaf ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://piaf.etalab.studio](https://piaf.etalab.studio) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.31 MB - **Size of the generated dataset:** 3.18 MB - **Total amount of disk used:** 4.49 MB ### Dataset Summary Piaf is a reading comprehension dataset. This version, published in February 2020, contains 3835 questions on French Wikipedia. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 1.31 MB - **Size of the generated dataset:** 3.18 MB - **Total amount of disk used:** 4.49 MB An example of 'train' looks as follows. ``` { "answers": { "answer_start": [0], "text": ["Voici"] }, "context": "Voici le contexte du premier paragraphe du deuxième article.", "id": "p140295460356960", "question": "Suis-je la troisième question ?", "title": "Jakob Böhme" } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: an `int32` feature. 
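As in SQuAD-style datasets, `answer_start` is a character offset into `context`. A minimal sketch verifying that each answer string matches the span it points to, using the sample instance above (the helper name is illustrative):

```python
def answer_matches_span(example):
    """Check that each answer string equals the context substring at its recorded offset."""
    ctx = example["context"]
    for text, start in zip(example["answers"]["text"],
                           example["answers"]["answer_start"]):
        if ctx[start:start + len(text)] != text:
            return False
    return True

# The sample 'train' instance from the card above.
example = {
    "answers": {"answer_start": [0], "text": ["Voici"]},
    "context": "Voici le contexte du premier paragraphe du deuxième article.",
    "id": "p140295460356960",
    "question": "Suis-je la troisième question ?",
    "title": "Jakob Böhme",
}
print(answer_matches_span(example))  # True
```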
### Data Splits | name | train | |------------|------:| | plain_text | 3835 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{keraron-EtAl:2020:LREC, author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo}, title = {Project PIAF: Building a Native French Question-Answering Dataset}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {5483--5492}, abstract = {Motivated by the lack of data for non-English languages, in particular for the evaluation of downstream tasks such as Question Answering, we present a participatory effort to collect a native French Question Answering Dataset. Furthermore, we describe and publicly release the annotation tool developed for our collection effort, along with the data obtained and preliminary baselines.}, url = {https://www.aclweb.org/anthology/2020.lrec-1.673} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@RachelKer](https://github.com/RachelKer) for adding this dataset.
proto_qa
2022-11-03T16:31:01.000Z
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", ...
null
This dataset is for studying computational models trained to reason about prototypical situations. A sample was drawn from a larger set of all transcriptions using deterministic filtering. It contains 9789 instances, where each instance represents a survey question from the Family Feud game. Each instance is exactly a question, a set of answers, and a count associated with each answer. Each line is a json dictionary, in which: 1. question - contains the question (in original and a normalized form) 2. answerstrings - contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight. 3. answer-clusters - lists clusters, with the count of each cluster and the strings included in that cluster. Each cluster is given a unique ID that can be linked to in the assessment files.
@InProceedings{huggingface:dataset, title = {ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning}, authors={Michael Boratko, Xiang Lorraine Li, Tim O’Gorman, Rajarshi Das, Dan Le, Andrew McCallum}, year={2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished={\\url{https://github.com/iesl/protoqa-data}}, }
null
1
7
--- annotations_creators: - crowdsourced language_creators: - crowdsourced - other language: - en license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - question-answering task_ids: - multiple-choice-qa - open-domain-qa paperswithcode_id: protoqa pretty_name: ProtoQA dataset_info: - config_name: proto_qa features: - name: normalized-question dtype: string - name: question dtype: string - name: answer-clusters sequence: - name: count dtype: int32 - name: clusterid dtype: string - name: answers sequence: string - name: answerstrings sequence: string - name: totalcount dtype: int32 - name: id dtype: string - name: source dtype: string splits: - name: train num_bytes: 3943484 num_examples: 8782 - name: validation num_bytes: 472121 num_examples: 980 download_size: 7352932 dataset_size: 4415605 - config_name: proto_qa_cs features: - name: normalized-question dtype: string - name: question dtype: string - name: answers-cleaned sequence: - name: count dtype: int32 - name: clusterid dtype: string - name: answers sequence: string - name: answerstrings sequence: string - name: totalcount dtype: int32 - name: id dtype: string - name: source dtype: string splits: - name: validation num_bytes: 84466 num_examples: 52 download_size: 115704 dataset_size: 84466 - config_name: proto_qa_cs_assessments features: - name: question dtype: string - name: assessments sequence: string splits: - name: validation num_bytes: 12473 num_examples: 52 download_size: 24755 dataset_size: 12473 --- # Dataset Card for ProtoQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation 
Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Interactive Demo:** [Interactive demo](http://protoqa.com) - **Repository:** [proto_qa repository](https://github.com/iesl/protoqa-data) - **Paper:** [proto_qa paper](https://arxiv.org/pdf/2005.00771.pdf) - **Point of Contact:** [Michael Boratko](mailto:mboratko@cs.umass.edu) [Xiang Lorraine Li](mailto:xiangl@cs.umass.edu) [Tim O’Gorman](mailto:togorman@cs.umass.edu) [Rajarshi Das](mailto:rajarshi@cs.umass.edu) [Dan Le](mailto:dhle@cs.umass.edu) [Andrew McCallum](mailto:mccallum@cs.umass.edu) ### Dataset Summary This dataset is for studying computational models trained to reason about prototypical situations. It is not anticipated to be used directly in a downstream task, but rather as a way of studying the knowledge (and biases) about prototypical situations already contained in pre-trained models. It is partially based on Family Feud data. A sample was drawn from a larger set of all transcriptions using deterministic filtering. 
Scraped data was acquired through fan transcriptions at [family feud](https://www.familyfeudinfo.com) and [family feud friends](http://familyfeudfriends.arjdesigns.com/); crowdsourced data was acquired with FigureEight (now Appen) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English ## Dataset Structure ### Data Instances **What do the instances that comprise the dataset represent?**<br> Each instance represents a survey question from the Family Feud game and its reported answer clusters **How many instances are there in total?**<br> 9789 instances **What data does each instance consist of?**<br> Each instance is a question, a set of answers, and a count associated with each answer. ### Data Fields **Data Files**<br> Each line is a json dictionary, in which:<br> **question** contains the question (in original and a normalized form)<br> **answerstrings** contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight.<br> **answer-clusters** lists clusters, with the count of each cluster and the strings included in that cluster. Each cluster is given a unique ID that can be linked to in the assessment files. 
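To make the per-line format concrete, here is a minimal sketch that parses one such json line and ranks the answer clusters by weight. The line below is a hypothetical illustration shaped like the fields described above, not a real instance from the dataset:

```python
import json

# Hypothetical jsonl line shaped like the ProtoQA format described above.
line = json.dumps({
    "question": {"original": "Name something people do in the morning.",
                 "normalized": "name something people do in the morning."},
    "answerstrings": {"coffee": 30, "shower": 21, "brush teeth": 0},
    "answer-clusters": [
        {"clusterid": "c1", "count": 30, "answers": ["coffee", "drink coffee"]},
        {"clusterid": "c2", "count": 21, "answers": ["shower"]},
    ],
})

record = json.loads(line)

# Rank clusters by their reported counts, largest first.
clusters = sorted(record["answer-clusters"], key=lambda c: c["count"], reverse=True)
total = sum(c["count"] for c in clusters)

print([c["clusterid"] for c in clusters])  # → ['c1', 'c2']
print(total)                               # → 51
```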
The simplified configuration includes: - `question`: contains the original question - `normalized-question`: contains the question in normalized form - `totalcount`: the total count of answers collected for the question - `id`: unique identifier of the question - `source`: the source the question was drawn from - `answerstrings`: the original answer strings provided by survey respondents - `answer-clusters | answers-cleaned`: lists of clusters, each with: * `clusterid`: Each cluster is given a unique ID that can be linked to in the assessment files * `count`: the count of each cluster * `answers`: the strings included in that cluster In addition to the above, there is a crowdsourced assessments file. The config "proto_qa_cs_assessments" provides mappings from additional human and model answers to clusters, to evaluate different assessment methods. **Assessment files**<br> The file **data/dev/crowdsource_dev.assessments.jsonl** contains mappings from additional human and model answers to clusters, to evaluate different assessment methods. Each line contains:<br> * `question`: contains the ID of the question * `assessments`: maps individual strings to one of three options: either the answer cluster id, "invalid" if the answer is judged to be bad, or "valid_new_cluster" if the answer is valid but does not match any existing clusters. ### Data Splits * proto_qa `Train`: 8781 instances for training or fine-tuning, scraped from Family Feud fan sites (see paper). Scraped data has answer clusters with sizes, but only a single string per cluster (corresponding to the original cluster name). * proto_qa `Validation`: 979 instances sampled from the same Family Feud data, for use in model validation and development. 
* proto_qa_cs `Validation`: 51 questions collected with exhaustive answer collection and manual clustering, matching the details of the eval test set (roughly 100 human answers per question) **data/dev/crowdsource_dev.assessments.jsonl**: assessment file (format described above) for the study of assessment methods. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization **How was the data associated with each instance acquired?**<br> Scraped data was acquired through fan transcriptions at https://www.familyfeudinfo.com and http://familyfeudfriends.arjdesigns.com/ ; crowdsourced data was acquired with FigureEight (now Appen) **If the dataset is a sample from a larger set, what was the sampling strategy?**<br> Deterministic filtering was used (noted elsewhere), but no probabilistic sampling was used. **Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated?**<br> Crowdworkers were used for the evaluation dataset. Time per task was calculated and the per-task cost was set to attempt to provide a living wage. **Over what timeframe was the data collected?**<br> Crowdsource answers were collected between Fall of 2018 and Spring of 2019. Scraped data covers question-answer pairs collected since the origin of the show in 1976 #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process **Was any preprocessing/cleaning/labeling of the data done?**<br> Obvious typos in the crowdsourced answer set were corrected #### Who are the annotators? The original question-answer pairs were generated by surveys of US English speakers in a period from 1976 to the present day. Crowd-sourced evaluation was constrained geographically to US English speakers but not otherwise constrained. Additional demographic data was not collected. 
### Personal and Sensitive Information **Does the dataset contain data that might be considered sensitive in any way?**<br> As the questions address prototypical/stereotypical activities, models trained on more offensive material (such as large language models) may provide offensive answers to such questions. While we found a few questions that we worried would actually encourage models to provide offensive answers, we cannot guarantee that the data is clean of such questions. Even a perfectly innocent version of this dataset would be encouraging models to express generalizations about situations, and therefore may provoke offensive material that is contained in language models. **Does the dataset contain data that might be considered confidential?**<br> The data does not concern individuals and thus does not contain any information to identify persons. Crowdsourced answers do not provide any user identifiers. ## Considerations for Using the Data ### Social Impact of Dataset **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**<br> Not egregiously so (questions are all designed to be shown on television or replications thereof). ### Discussion of Biases **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** <br>All original questions were written with US television audiences in mind, and therefore characterize prototypical situations with a specific lens. Any usages which deploy this to actually model prototypical situations globally will carry that bias. **Are there tasks for which the dataset should not be used?** <br>We caution against free-form use of this dataset for interactive "commonsense question answering" purposes without more study of the biases and stereotypes learned by such models. 
### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The listed authors are maintaining/supporting the dataset. They pledge to help support issues, but cannot guarantee long-term support ### Licensing Information The Proto_qa dataset is licensed under the [Creative Commons Attribution 4.0 International](https://github.com/iesl/protoqa-data/blob/master/LICENSE) ### Citation Information ``` @InProceedings{ huggingface:dataset, title = {ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning}, authors = {Michael Boratko, Xiang Lorraine Li, Tim O’Gorman, Rajarshi Das, Dan Le, Andrew McCallum}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/iesl/protoqa-data}, } ``` ### Contributions Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
qanta
2023-04-05T13:37:09.000Z
[ "task_categories:question-answering", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "quizbowl", "arxiv:1904.04792", "region:us" ]
null
The Qanta dataset is a question answering dataset based on the academic trivia game Quizbowl.
@article{Rodriguez2019QuizbowlTC, title={Quizbowl: The Case for Incremental Question Answering}, author={Pedro Rodriguez and Shi Feng and Mohit Iyyer and He He and Jordan L. Boyd-Graber}, journal={ArXiv}, year={2019}, volume={abs/1904.04792} }
null
3
7
--- annotations_creators: - machine-generated language: - en language_creators: - found license: - unknown multilinguality: - monolingual pretty_name: Quizbowl size_categories: - 100K<n<1M source_datasets: - original task_categories: - question-answering task_ids: [] paperswithcode_id: quizbowl tags: - quizbowl dataset_info: features: - name: id dtype: string - name: qanta_id dtype: int32 - name: proto_id dtype: string - name: qdb_id dtype: int32 - name: dataset dtype: string - name: text dtype: string - name: full_question dtype: string - name: first_sentence dtype: string - name: char_idx dtype: int32 - name: sentence_idx dtype: int32 - name: tokenizations sequence: sequence: int32 length: 2 - name: answer dtype: string - name: page dtype: string - name: raw_answer dtype: string - name: fold dtype: string - name: gameplay dtype: bool - name: category dtype: string - name: subcategory dtype: string - name: tournament dtype: string - name: difficulty dtype: string - name: year dtype: int32 config_name: mode=first,char_skip=25 splits: - name: adversarial num_bytes: 1258844 num_examples: 1145 - name: buzzdev num_bytes: 1553636 num_examples: 1161 - name: buzztest num_bytes: 2653425 num_examples: 1953 - name: buzztrain num_bytes: 19699736 num_examples: 16706 - name: guessdev num_bytes: 1414882 num_examples: 1055 - name: guesstest num_bytes: 2997123 num_examples: 2151 - name: guesstrain num_bytes: 117599750 num_examples: 96221 download_size: 170754918 dataset_size: 147177396 --- # Dataset Card for "qanta" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - 
[Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://www.qanta.org/](http://www.qanta.org/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) - **Point of Contact:** [Jordan Boyd-Graber](mailto:jbg@umiacs.umd.edu) - **Size of downloaded dataset files:** 170.75 MB - **Size of the generated dataset:** 147.18 MB - **Total amount of disk used:** 317.93 MB ### Dataset Summary The Qanta dataset is a question answering dataset based on the academic trivia game Quizbowl. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### mode=first,char_skip=25 - **Size of downloaded dataset files:** 170.75 MB - **Size of the generated dataset:** 147.18 MB - **Total amount of disk used:** 317.93 MB An example of 'guessdev' looks as follows. 
``` This example was too long and was cropped: { "answer": "Apollo_program", "category": "History", "char_idx": -1, "dataset": "quizdb.org", "difficulty": "easy_college", "first_sentence": "As part of this program, William Anders took a photo that Galen Rowell called \"the most influential environmental photograph ever taken.\"", "fold": "guessdev", "full_question": "\"As part of this program, William Anders took a photo that Galen Rowell called \\\"the most influential environmental photograph e...", "gameplay": false, "id": "127028-first", "page": "Apollo_program", "proto_id": "", "qanta_id": 127028, "qdb_id": 126689, "raw_answer": "Apollo program [or Project Apollo; accept Apollo 8; accept Apollo 1; accept Apollo 11; prompt on landing on the moon]", "sentence_idx": -1, "subcategory": "American", "text": "As part of this program, William Anders took a photo that Galen Rowell called \"the most influential environmental photograph ever taken.\"", "tokenizations": [[0, 137], [138, 281], [282, 412], [413, 592], [593, 675]], "tournament": "ACF Fall", "year": 2016 } ``` ### Data Fields The data fields are the same among all splits. #### mode=first,char_skip=25 - `id`: a `string` feature. - `qanta_id`: a `int32` feature. - `proto_id`: a `string` feature. - `qdb_id`: a `int32` feature. - `dataset`: a `string` feature. - `text`: a `string` feature. - `full_question`: a `string` feature. - `first_sentence`: a `string` feature. - `char_idx`: a `int32` feature. - `sentence_idx`: a `int32` feature. - `tokenizations`: a dictionary feature containing: - `feature`: a `int32` feature. - `answer`: a `string` feature. - `page`: a `string` feature. - `raw_answer`: a `string` feature. - `fold`: a `string` feature. - `gameplay`: a `bool` feature. - `category`: a `string` feature. - `subcategory`: a `string` feature. - `tournament`: a `string` feature. - `difficulty`: a `string` feature. - `year`: a `int32` feature. 
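Since `tokenizations` stores character-offset sentence spans into the full question, the individual sentences can be recovered by slicing. A minimal sketch (the instance here is a hypothetical short question, since the real example above is cropped):

```python
# Sketch: recover per-sentence text from a qanta-style instance using the
# character spans stored in `tokenizations`. The instance is hypothetical.
instance = {
    "full_question": "This program sent Apollo 8 around the Moon. Name this NASA program.",
    "tokenizations": [[0, 43], [44, 67]],
}

sentences = [
    instance["full_question"][start:end]
    for start, end in instance["tokenizations"]
]

print(sentences[0])    # → This program sent Apollo 8 around the Moon.
print(len(sentences))  # → 2
```

In the `mode=first` configuration above, `text` holds only the first of these spans, which is why it matches `first_sentence`.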
### Data Splits | name |adversarial|buzzdev|buzztrain|guessdev|guesstrain|buzztest|guesstest| |-----------------------|----------:|------:|--------:|-------:|---------:|-------:|--------:| |mode=first,char_skip=25| 1145| 1161| 16706| 1055| 96221| 1953| 2151| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{Rodriguez2019QuizbowlTC, title={Quizbowl: The Case for Incremental Question Answering}, author={Pedro Rodriguez and Shi Feng and Mohit Iyyer and He He and Jordan L. Boyd-Graber}, journal={ArXiv}, year={2019}, volume={abs/1904.04792} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
tamilmixsentiment
2023-06-16T13:07:45.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:ta", "license:unknown", "regio...
null
The first gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. Train: 11,335; Validation: 1,260; Test: 3,149. This makes it the largest general-domain sentiment dataset for this relatively low-resource language with the code-mixing phenomenon. The dataset contains all three types of code-mixed sentences - Inter-Sentential switch, Intra-Sentential switch and Tag switching. Most comments were written in Roman script with either Tamil grammar with English lexicon or English grammar with Tamil lexicon. Some comments were written in Tamil script with English expressions in between.
@inproceedings{chakravarthi-etal-2020-corpus, title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text", author = "Chakravarthi, Bharathi Raja and Muralidaran, Vigneshwaran and Priyadharshini, Ruba and McCrae, John Philip", booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources association", url = "https://www.aclweb.org/anthology/2020.sltu-1.28", pages = "202--210", abstract = "Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.", language = "English", ISBN = "979-10-95546-35-1", }
null
0
7
--- annotations_creators: - expert-generated language_creators: - crowdsourced language: - en - ta license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: Tamilmixsentiment dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': Positive '1': Negative '2': Mixed_feelings '3': unknown_state '4': not-Tamil splits: - name: train num_bytes: 790132 num_examples: 11335 - name: validation num_bytes: 89618 num_examples: 1260 - name: test num_bytes: 218764 num_examples: 3149 download_size: 1150792 dataset_size: 1098514 --- # Dataset Card for Tamilmixsentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Tamilmixsentiment Homepage](https://dravidian-codemix.github.io/2020/index.html) - **Repository:** [Tamilmixsentiment repository](https://dravidian-codemix.github.io/2020/datasets.html) - **Paper:** 
[Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text](https://www.aclweb.org/anthology/2020.sltu-1.28/) - **Leaderboard:** [Rank list](https://drive.google.com/file/d/1Mf8-No-63koGRwdF13RrO01NAFBlNmI0/view?usp=sharing) - **Point of Contact:** [Bharathi Raja Chakravarthi](mailto:bharathiraja.akr@gmail.com) ### Dataset Summary The first gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. This makes it the largest general-domain sentiment dataset for this relatively low-resource language with the code-mixing phenomenon. The comment/post may contain more than one sentence but the average sentence length of the corpora is 1. Each comment/post is annotated with sentiment polarity at the comment/post level. This dataset also has class imbalance problems depicting real-world scenarios. ### Supported Tasks and Leaderboards To identify sentiment polarity of the code-mixed dataset of comments/posts in Tamil-English collected from social media. ### Languages Tamil-English code-switched. The dataset contains all three types of code-mixed sentences - Inter-Sentential switch, Intra-Sentential switch and Tag switching. Most comments were written in Roman script with either Tamil grammar with English lexicon or English grammar with Tamil lexicon. Some comments were written in Tamil script with English expressions in between. ## Dataset Structure ### Data Instances An example from the Tamilmixsentiment train set looks as follows: ``` text label Trailer late ah parthavanga like podunga Positive ``` ### Data Fields - `text`: Tamil-English code-mixed comment. 
- `label`: one of the possible sentiments "Positive", "Negative", "Mixed_feelings", "unknown_state", "not-Tamil" ### Data Splits The entire dataset of 15,744 sentences was randomly shuffled and split into three parts as follows: | | train | validation | test | |------------------------------|------:|-----------:|-----:| | Tamilmixsentiment | 11335 | 1260 | 3149 | ## Dataset Creation ### Curation Rationale Sentiment analysis has become important in social media research (Yang and Eisenstein, 2017). Until recently these applications were created for high-resourced languages which analysed monolingual utterances. But social media in multilingual communities contains more code-mixed text. Code-mixing is common among speakers in a bilingual speech community. As English is seen as the language of prestige and education, the influence of lexicon, connectives and phrases from the English language is common in spoken Tamil. Tamil has little annotated data for code-mixed scenarios. An annotated corpus developed for monolingual data cannot deal with code-mixed usage and therefore fails to yield good results due to the mixture of languages at different levels of linguistic analysis. Therefore this code-mixed Tamil-English sentiment-annotated corpus was created. ### Source Data #### Initial Data Collection and Normalization The data was scraped from YouTube. In total, 184,573 Tamil sentences were collected from YouTube comments on the trailers of movies released in 2019. Many of them contained sentences that were either entirely written in English, code-mixed Tamil-English, or fully written in Tamil. We then filtered out non-code-mixed comments based on language identification at the comment level using the langdetect library. If a comment was written fully in Tamil or English, we discarded it, since monolingual resources are available for these languages. 
We also identified whether sentences were written in other languages such as Hindi, Malayalam, Urdu, Telugu, and Kannada. We preprocessed the comments by removing emoticons and applying a sentence-length filter. To create a code-mixed corpus of reasonable size with sentences that carry fairly well-defined sentiments, the filter removed sentences with fewer than five or more than 15 words after cleaning the data. In the end we obtained 15,744 Tanglish sentences. #### Who are the source language producers? YouTube users ### Annotations #### Annotation process The annotation setup consisted of three steps. First, each sentence was annotated by two people. Second, an annotation was accepted if both annotators agreed; in case of conflict, a third person annotated the sentence. Third, if all three still disagreed, two more annotators annotated the sentence. #### Who are the annotators? Eleven volunteers were involved in the process. All of them were native speakers of Tamil, with diversity in gender, educational level and medium of instruction in their school education.
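The five-to-fifteen-word length filter used during data collection can be sketched as follows (a minimal illustration only; the exact cleaning the authors applied before counting words is not reproduced here, and word counting is approximated by whitespace splitting):

```python
def length_filter(comments, min_words=5, max_words=15):
    """Keep only comments whose whitespace-token count lies in [min_words, max_words]."""
    kept = []
    for comment in comments:
        n_words = len(comment.split())
        if min_words <= n_words <= max_words:
            kept.append(comment)
    return kept
```

Applied after cleaning, a filter like this discards both very short comments (little sentiment signal) and overly long ones.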
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{chakravarthi-etal-2020-corpus, title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text", author = "Chakravarthi, Bharathi Raja and Muralidaran, Vigneshwaran and Priyadharshini, Ruba and McCrae, John Philip", booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources association", url = "https://www.aclweb.org/anthology/2020.sltu-1.28", pages = "202--210", abstract = "Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. 
We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.", language = "English", ISBN = "979-10-95546-35-1", } ``` ### Contributions Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset.
turkish_shrinked_ner
2023-01-25T14:54:44.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other-turkish_ner", "language:tr", "license:cc-by-4.0", ...
null
Shrunk version (48 entity types) of turkish_ner. Original turkish_ner dataset: an automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 25 different domains. The shrunk entity types are: academic, academic_person, aircraft, album_person, anatomy, animal, architect_person, capital, chemical, clothes, country, culture, currency, date, food, genre, government, government_person, language, location, material, measure, medical, military, military_person, nation, newspaper, organization, organization_person, person, production_art_music, production_art_music_person, quantity, religion, science, shape, ship, software, space, space_person, sport, sport_name, sport_person, structure, subject, tech, train, vehicle
\
null
1
7
--- annotations_creators: - machine-generated language_creators: - expert-generated language: - tr license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - extended|other-turkish_ner task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: TurkishShrinkedNer dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-academic '2': I-academic '3': B-academic_person '4': I-academic_person '5': B-aircraft '6': I-aircraft '7': B-album_person '8': I-album_person '9': B-anatomy '10': I-anatomy '11': B-animal '12': I-animal '13': B-architect_person '14': I-architect_person '15': B-capital '16': I-capital '17': B-chemical '18': I-chemical '19': B-clothes '20': I-clothes '21': B-country '22': I-country '23': B-culture '24': I-culture '25': B-currency '26': I-currency '27': B-date '28': I-date '29': B-food '30': I-food '31': B-genre '32': I-genre '33': B-government '34': I-government '35': B-government_person '36': I-government_person '37': B-language '38': I-language '39': B-location '40': I-location '41': B-material '42': I-material '43': B-measure '44': I-measure '45': B-medical '46': I-medical '47': B-military '48': I-military '49': B-military_person '50': I-military_person '51': B-nation '52': I-nation '53': B-newspaper '54': I-newspaper '55': B-organization '56': I-organization '57': B-organization_person '58': I-organization_person '59': B-person '60': I-person '61': B-production_art_music '62': I-production_art_music '63': B-production_art_music_person '64': I-production_art_music_person '65': B-quantity '66': I-quantity '67': B-religion '68': I-religion '69': B-science '70': I-science '71': B-shape '72': I-shape '73': B-ship '74': I-ship '75': B-software '76': I-software '77': B-space '78': I-space '79': B-space_person '80': I-space_person '81': B-sport '82': I-sport '83': B-sport_name '84': I-sport_name '85': 
B-sport_person '86': I-sport_person '87': B-structure '88': I-structure '89': B-subject '90': I-subject '91': B-tech '92': I-tech '93': B-train '94': I-train '95': B-vehicle '96': I-vehicle splits: - name: train num_bytes: 200728389 num_examples: 614515 download_size: 0 dataset_size: 200728389 --- # Dataset Card for turkish_shrinked_ner ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** https://www.kaggle.com/behcetsenturk ### Dataset Summary Shrinked processed version (48 entity type) of the turkish_ner. Original turkish_ner dataset: Automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers. 
The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 25 different domains. The shrunk entity types are: academic, academic_person, aircraft, album_person, anatomy, animal, architect_person, capital, chemical, clothes, country, culture, currency, date, food, genre, government, government_person, language, location, material, measure, medical, military, military_person, nation, newspaper, organization, organization_person, person, production_art_music, production_art_music_person, quantity, religion, science, shape, ship, software, space, space_person, sport, sport_name, sport_person, structure, subject, tech, train, vehicle ### Supported Tasks and Leaderboards [Needs More Information] ### Languages Turkish ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits There's only the training set. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators Behcet Senturk ### Licensing Information Creative Commons Attribution 4.0 International ### Citation Information [Needs More Information] ### Contributions Thanks to [@bhctsntrk](https://github.com/bhctsntrk) for adding this dataset.
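The `ner_tags` feature above uses BIO encoding: a `B-` tag opens an entity, a matching `I-` tag continues it, and `O` marks tokens outside any entity. A hedged sketch of turning BIO-tagged tokens back into entity spans (the example tokens in the usage below are made up for illustration):

```python
def bio_spans(tokens, tags):
    """Collect (entity_type, token_list) spans from BIO-tagged tokens."""
    spans = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new span, closing any open one.
            if current_type is not None:
                spans.append((current_type, current_tokens))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag of the same type extends the open span.
            current_tokens.append(token)
        else:
            # "O", or an I- tag that does not continue the open span.
            if current_type is not None:
                spans.append((current_type, current_tokens))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, current_tokens))
    return spans
```

For example, `bio_spans(["Ankara", "is", "nice"], ["B-capital", "O", "O"])` yields a single `capital` span covering `"Ankara"`.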
ARKseal/YFCC14M_subset_webdataset
2021-11-27T22:47:47.000Z
[ "region:us" ]
ARKseal
null
null
null
0
7
Entry not found
Atsushi/fungi_indexed_mycological_papers_japanese
2023-10-08T21:33:33.000Z
[ "annotations_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ja", "license:cc-by-4.0", "region:us" ]
Atsushi
null
null
null
0
7
--- annotations_creators: - other language: - ja license: - cc-by-4.0 multilinguality: - monolingual source_datasets: - original size_categories: - 1K<n<10K --- fungi_indexed_mycological_papers_japanese — Daikinrin (大菌輪) "three-line paper summaries" dataset. Last updated: 2023/10/9 (up to R3-11041) ==== ### Languages Japanese. This dataset is available in Japanese only. # Overview [Daikinrin (大菌輪)](http://mycoscouter.coolblog.jp/daikinrin/), a website run personally by Atsushi Nakajima, provides summaries and indexing of several thousand mycological taxonomy papers in the form of "three-line paper summaries". This dataset compiles, for each paper covered by the "three-line summaries" content, its three-line abstract, tags (index terms), list of reported species, and list of compared species. The "three-line summaries" are updated daily, but this dataset is planned to be updated roughly once a month. A web app visualizing this dataset is [published on Observable](https://tinyurl.com/2tvryz8u). ## Related datasets "Summaries of diagnostic characters": [Atsushi/fungi_diagnostic_chars_comparison_japanese](https://huggingface.co/datasets/Atsushi/fungi_diagnostic_chars_comparison_japanese) "Trait Circus dataset" (controlled traits): [Atsushi/fungi_trait_circus_database](https://huggingface.co/datasets/Atsushi/fungi_trait_circus_database) ## Column descriptions * R3ID … ID of the Daikinrin "three-line summary". * ja_title_provisional_translate (provisional Japanese title) … Title as translated by the author; where a paper has an original Japanese title, that title is used as-is. * original_title * published_year * journal_title * source (literature link) … URL of the source literature for each record. * daikinrin_url … URL of the Daikinrin "three-line summary". * tags … Index terms assigned independently by the author after reading the full paper, separated by comma + space. They broadly cover morphological characters, hosts/substrates, lab equipment/methods/reagents, geographic distribution, physiology/biochemistry, and more. * R3summary_1 … First line of the three-line abstract. * R3summary_2 … Second line of the three-line abstract. * R3summary_3 … Third line of the three-line abstract. * species_reported … List of species reported in the paper, separated by space + slash + space. The symbols mean: * ★ = new species (or new subspecies/form/variety) * ■ = newly recorded species * ▲ = new combination * ◆ = new scientific name * ● = new rank * (no mark) = other * species_compared … List of species compared in any way with a reported species in the paper, separated by space + slash + space. See the "summaries of diagnostic characters" dataset ([Atsushi/fungi_diagnostic_chars_comparison_japanese](https://huggingface.co/datasets/Atsushi/fungi_diagnostic_chars_comparison_japanese)) for details. * taxon_reported … Higher taxa corresponding to the reported species, separated by comma + space. Assigned based on MycoBank information, so it may not be fully up to date.
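A `species_reported` cell as described above can be split into (status, name) pairs by splitting on the " / " separator and checking the leading status symbol (a sketch; the species names used in the test below are placeholders, not values from the dataset):

```python
# Leading symbols mark nomenclatural status, per the column description above.
STATUS = {
    "★": "new species",
    "■": "newly recorded",
    "▲": "new combination",
    "◆": "new name",
    "●": "new rank",
}

def parse_species_reported(cell):
    """Split a species_reported cell into (status, name) pairs."""
    entries = []
    for item in cell.split(" / "):
        if item and item[0] in STATUS:
            entries.append((STATUS[item[0]], item[1:].strip()))
        else:
            # No leading symbol means "other" in this column's convention.
            entries.append(("other", item.strip()))
    return entries
```

The same " / " split applies to `species_compared`, which carries no status symbols.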
KBLab/sucx3_ner
2022-10-25T06:13:36.000Z
[ "task_categories:other", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:sv", "license:cc-by-4.0", "structure-predic...
KBLab
The dataset is a conversion of the venerable SUC 3.0 dataset into the huggingface ecosystem. The original dataset does not contain an official train-dev-test split, which is introduced here; the tag distribution for the NER tags between the three splits is mostly the same. The dataset has three different types of tagsets: manually annotated POS, manually annotated NER, and automatically annotated NER. For the automatically annotated NER tags, only sentences were chosen, where the automatic and manual annotations would match (with their respective categories). Additionally we provide remixes of the same data with some or all sentences being lowercased.
@article{gustafson2006documentation, title={Documentation of the Stockholm-Ume{\aa} Corpus}, author={Gustafson-Capkov{\'a}, Sofia and Hartmann, Britt}, journal={Stockholm University: Department of Linguistics}, year={2006} }
null
5
7
--- annotations_creators: - expert-generated language_creators: - other language: - sv license: - cc-by-4.0 multilinguality: - monolingual size_categories: - unknown source_datasets: - original task_categories: - other task_ids: - named-entity-recognition - part-of-speech pretty_name: sucx3_ner tags: - structure-prediction --- # Dataset Card for _SUCX 3.0 - NER_ ## Dataset Description - **Homepage:** [https://spraakbanken.gu.se/en/resources/suc3](https://spraakbanken.gu.se/en/resources/suc3) - **Repository:** [https://github.com/kb-labb/sucx3_ner](https://github.com/kb-labb/sucx3_ner) - **Paper:** [SUC 2.0 manual](http://spraakbanken.gu.se/parole/Docs/SUC2.0-manual.pdf) - **Point of Contact:** ### Dataset Summary The dataset is a conversion of the venerable SUC 3.0 dataset into the huggingface ecosystem. The original dataset does not contain an official train-dev-test split, which is introduced here; the tag distribution for the NER tags between the three splits is mostly the same. The dataset has three different types of tagsets: manually annotated POS, manually annotated NER, and automatically annotated NER. For the automatically annotated NER tags, only sentences were chosen, where the automatic and manual annotations would match (with their respective categories). Additionally we provide remixes of the same data with some or all sentences being lowercased. 
### Supported Tasks and Leaderboards - Part-of-Speech tagging - Named-Entity-Recognition ### Languages Swedish ## Dataset Structure ### Data Remixes - `original_tags` contain the manual NER annotations - `lower` the whole dataset uncased - `lower_mix` some of the dataset uncased - `lower_both` every instance both cased and uncased - `simple_tags` contain the automatic NER annotations - `lower` the whole dataset uncased - `lower_mix` some of the dataset uncased - `lower_both` every instance both cased and uncased ### Data Instances For each instance, there is an `id`, with an optional `_lower` suffix to mark that it has been modified, a `tokens` list of strings containing tokens, a `pos_tags` list of strings containing POS-tags, and a `ner_tags` list of strings containing NER-tags. ```json {"id": "e24d782c-e2475603_lower", "tokens": ["-", "dels", "har", "vi", "inget", "index", "att", "g\u00e5", "efter", ",", "vi", "kr\u00e4ver", "allts\u00e5", "ers\u00e4ttning", "i", "40-talets", "penningv\u00e4rde", "."], "pos_tags": ["MID", "KN", "VB", "PN", "DT", "NN", "IE", "VB", "PP", "MID", "PN", "VB", "AB", "NN", "PP", "NN", "NN", "MAD"], "ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"]} ``` ### Data Fields - `id`: a string containing the sentence-id - `tokens`: a list of strings containing the sentence's tokens - `pos_tags`: a list of strings containing the tokens' POS annotations - `ner_tags`: a list of strings containing the tokens' NER annotations ### Data Splits | Dataset Split | Size Percentage of Total Dataset Size | Number of Instances for the Original Tags | | ------------- | ------------------------------------- | ----------------------------------------- | | train | 64% | 46,026 | | dev | 16% | 11,506 | | test | 20% | 14,383 | The `simple_tags` remix has fewer instances due to the requirement to match tags.
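The parallel-list instance format shown above can be consumed by zipping the three fields together (a minimal sketch using a truncated copy of the example instance; it does not fetch the actual dataset):

```python
import json

# Truncated copy of the instance format shown above: an id plus three
# parallel lists (tokens, POS tags, NER tags) of equal length.
raw = json.dumps({
    "id": "e24d782c-e2475603_lower",
    "tokens": ["-", "dels", "har", "vi"],
    "pos_tags": ["MID", "KN", "VB", "PN"],
    "ner_tags": ["O", "O", "O", "O"],
})

instance = json.loads(raw)
# Pair each token with its POS tag and NER tag for inspection.
triples = list(zip(instance["tokens"], instance["pos_tags"], instance["ner_tags"]))
```

Because the `_lower` suffix is part of `id`, stripping it recovers the shared sentence id across the cased and uncased remixes.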
## Dataset Creation See the [original webpage](https://spraakbanken.gu.se/en/resources/suc3) ## Additional Information ### Dataset Curators [Språkbanken](mailto:sb-info@svenska.gu.se) ### Licensing Information CC BY 4.0 (attribution) ### Citation Information [SUC 2.0 manual](http://spraakbanken.gu.se/parole/Docs/SUC2.0-manual.pdf) ### Contributions Thanks to [@robinqrtz](https://github.com/robinqrtz) for adding this dataset.