Schema (column name, type, and observed value range):
- id: string (length 2–115)
- lastModified: string (length 24)
- tags: list
- author: string (length 2–42)
- description: string (length 0–6.67k)
- citation: string (length 0–10.7k)
- likes: int64 (0–3.66k)
- downloads: int64 (0–8.89M)
- created: timestamp[us]
- card: string (length 11–977k)
- card_len: int64 (11–977k)
- embeddings: list
swedish_ner_corpus
2023-01-25T14:45:21.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:sv", "license:cc-by-4.0", "region:us" ]
null
Webbnyheter 2012 from Språkbanken, semi-manually annotated and adapted for CoreNLP Swedish NER. "Semi-manually" means in this case: bootstrapped from Swedish gazetteers, then manually corrected/reviewed by two independent native-speaking Swedish annotators. No inter-annotator agreement was calculated.
null
1
91
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - sv license: - cc-by-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Swedish NER Corpus dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': '0' '1': LOC '2': MISC '3': ORG '4': PER splits: - name: train num_bytes: 2032630 num_examples: 6886 - name: test num_bytes: 755234 num_examples: 2453 download_size: 1384558 dataset_size: 2787864 --- # Dataset Card for Swedish NER Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/klintan/swedish-ner-corpus]() - **Repository:** [https://github.com/klintan/swedish-ner-corpus]() - **Point of contact:** [Andreas Klintberg](ankl@kth.se) ### Dataset Summary Webbnyheter 2012 from Spraakbanken, semi-manually annotated and adapted 
for CoreNLP Swedish NER. "Semi-manually" means in this case: bootstrapped from Swedish gazetteers, then manually corrected/reviewed by two independent native-speaking Swedish annotators. No inter-annotator agreement was calculated. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Swedish ## Dataset Structure ### Data Instances A sample dataset instance is provided below: ```json {'id': '3', 'ner_tags': [4, 4, 0, 0, 0, 0, 0, 0, 3, 3, 0], 'tokens': ['Margaretha', 'Fahlgren', ',', 'professor', 'i', 'litteraturvetenskap', ',', 'vice-rektor', 'Uppsala', 'universitet', '.']} ``` ### Data Fields - `id`: id of the sentence - `tokens`: the sequence of tokens - `ner_tags`: the NER tag of each token Full fields: ```json { "id": { "feature_type": "Value", "dtype": "string" }, "tokens": { "feature_type": "Sequence", "feature": { "feature_type": "Value", "dtype": "string" } }, "ner_tags": { "feature_type": "Sequence", "dtype": "int32", "feature": { "feature_type": "ClassLabel", "dtype": "int32", "class_names": ["0", "LOC", "MISC", "ORG", "PER"] } } } ``` ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The original dataset, which consists of news from Swedish newspapers' websites, was provided by Språkbanken. 
### Licensing Information https://github.com/klintan/swedish-ner-corpus/blob/master/LICENSE ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
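The integer `ner_tags` in the card's sample instance map to the class names listed in its `dataset_info` block. A minimal sketch of decoding them (the label list is copied from the card; this helper is illustrative, not part of the dataset's own tooling):

```python
# Class labels in index order, as listed in the card ('0' is the outside tag).
NER_LABELS = ["0", "LOC", "MISC", "ORG", "PER"]

def decode_tags(tokens, ner_tags):
    """Pair each token with its human-readable NER label."""
    return [(tok, NER_LABELS[tag]) for tok, tag in zip(tokens, ner_tags)]

# The sample instance from the card above.
sample = {
    "id": "3",
    "ner_tags": [4, 4, 0, 0, 0, 0, 0, 0, 3, 3, 0],
    "tokens": ["Margaretha", "Fahlgren", ",", "professor", "i",
               "litteraturvetenskap", ",", "vice-rektor", "Uppsala",
               "universitet", "."],
}
print(decode_tags(sample["tokens"], sample["ner_tags"]))
# First pair is ('Margaretha', 'PER'); 'Uppsala' decodes to 'ORG'.
```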
4,589
[ [ -0.032318115234375, -0.0401611328125, 0.0101470947265625, 0.0118255615234375, -0.0226593017578125, 0.0005230903625488281, -0.039581298828125, -0.0213623046875, 0.04412841796875, 0.03662109375, -0.036224365234375, -0.0765380859375, -0.044189453125, 0.02029418...
AlekseyKorshuk/horror-scripts
2022-02-10T18:26:41.000Z
[ "region:us" ]
AlekseyKorshuk
This dataset is designed to generate lyrics with HuggingArtists.
@InProceedings{huggingartists:dataset, title = {Lyrics dataset}, author={Aleksey Korshuk }, year={2021} }
1
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
Nexdata/accented_english
2023-08-31T02:59:20.000Z
[ "region:us" ]
Nexdata
null
null
5
91
2022-03-02T23:29:22
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for accented-english ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://nexdata.ai/?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset contains 20,000 hours of accented English speech data. It was collected from local English speakers in more than 20 countries and regions, such as the USA, China, the UK, Germany, Japan, India, France, Spain, Russia, and Latin America, covering a variety of pronunciation habits and characteristics, accent severities, and speaker distributions. The format is 16 kHz, 16-bit, mono, uncompressed WAV. The sentence accuracy is over 95%. 
For more details, please refer to the link: https://nexdata.ai/speechRecognition?source=Huggingface ### Supported Tasks and Leaderboards automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR). ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
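The card states the audio format as 16 kHz, 16-bit, mono, uncompressed WAV. As a sketch of what that format looks like in practice, the snippet below writes a synthetic one-second tone in exactly that format and verifies the header with Python's standard `wave` module (the file name and tone are illustrative, not part of the dataset):

```python
import math
import struct
import wave

# Hypothetical file name; the card only specifies the format:
# 16 kHz sample rate, 16-bit samples, mono, uncompressed WAV.
PATH = "example_16k_mono.wav"
RATE, WIDTH, CHANNELS = 16000, 2, 1  # 16 kHz, 16-bit (2 bytes), mono

# Write one second of a 440 Hz tone in the card's stated format.
with wave.open(PATH, "wb") as w:
    w.setnchannels(CHANNELS)
    w.setsampwidth(WIDTH)
    w.setframerate(RATE)
    samples = (int(20000 * math.sin(2 * math.pi * 440 * t / RATE))
               for t in range(RATE))
    w.writeframes(b"".join(struct.pack("<h", s) for s in samples))

# Reading it back confirms the header matches the dataset's spec.
with wave.open(PATH, "rb") as r:
    assert (r.getframerate(), r.getsampwidth(),
            r.getnchannels()) == (RATE, WIDTH, CHANNELS)
```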
3,309
[ [ -0.03692626953125, -0.040771484375, -0.00806427001953125, 0.02752685546875, -0.0095367431640625, 0.006420135498046875, -0.032806396484375, -0.03253173828125, 0.05572509765625, 0.047607421875, -0.045562744140625, -0.07110595703125, -0.0391845703125, 0.0267639...
TristanBehrens/js-fakes-4bars
2022-01-03T15:53:23.000Z
[ "region:us" ]
TristanBehrens
null
null
9
91
2022-03-02T23:29:22
# JSFakes (Dr. Tristan Behrens). This is a tokenized version of the JS-Fakes dataset by Omar Peracha. The original dataset can be found here: [js-fakes.git](https://github.com/omarperacha/js-fakes.git) The representation is four tracks with four bars per track. ## Purpose. This dataset is a good starting point for music generation. You could train GPT-2 on the samples to compose music. ## Contact. Find me on [LinkedIn](https://www.linkedin.com/in/dr-tristan-behrens-734967a2/) and say hello. If you find an issue or have a feature request, please contact me. Please be so kind as to like this dataset if you find it useful.
635
[ [ -0.006420135498046875, -0.035430908203125, 0.043487548828125, 0.01580810546875, -0.006778717041015625, 0.029205322265625, 0.002117156982421875, 0.0009522438049316406, 0.061859130859375, 0.038421630859375, -0.06451416015625, -0.041961669921875, -0.026809692382812...
allegro/klej-psc
2022-10-26T09:01:54.000Z
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:5K", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-3.0", "paraphrase-classification", "region:us" ...
allegro
null
null
0
91
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 5K - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: [] pretty_name: Polish Summaries Corpus tags: - paraphrase-classification --- # klej-psc ## Description The Polish Summaries Corpus (PSC) is a dataset of summaries for 569 news articles. The human annotators created five extractive summaries for each article by choosing approximately 5% of the original text. A different annotator created each summary. A subset of 154 articles was also supplemented with five additional abstractive summaries each, i.e., summaries not created from fragments of the original article. In the Hugging Face version of this dataset, summaries of the same article are used as positive pairs, and the most similar summaries of different articles are sampled as negatives. ## Tasks (input, output, and metrics) The task is to predict whether the extract text and the summary are similar. Based on PSC, we formulate a text-similarity task. We generate the positive pairs (i.e., referring to the same article) using only those news articles with both extractive and abstractive summaries. We match each extractive summary with the two least similar abstractive ones of the same article. To create negative pairs, we follow a similar procedure. We find the two most similar abstractive summaries for each extractive summary, but from different articles. **Input** (*'extract_text'*, *'summary_text'* columns): extract text and summary text sentences **Output** (*'label'* column): 1 indicates the summary is similar, 0 means that it is not similar **Domain**: News articles **Measurements**: F1-Score **Example**: Input: `Mit o potopie jest prastary, sięga czasów, gdy topniał lodowiec. Na skutek tego wydarzenia w dziejach planety, poziom mórz i oceanów podniósł się o kilkadziesiąt metrów. 
Potop polodowcowy z całą, naukową pewnością, miał miejsce, ale najprawdopodobniej został przez ludzkość przegapiony. I oto pojawiła się w tej sprawie kolejna glosa. Jej autorami są amerykańscy geofizycy.` ; `Dwójka amerykańskich geofizyków przedstawiła swój scenariusz pochodzenia mitu o potopie. Przed 7500 laty do będącego jeszcze jeziorem Morza Czarnego wlały się wezbrane wskutek topnienia lodowców wody Morza Śródziemnego. Geofizycy twierdzą, że dzięki temu rozkwitło rolnictwo, bo ludzie musieli migrować i szerzyć rolniczy tryb życia. Środowiska naukowe twierdzą jednak, że potop był tylko jednym z czynników ekspansji rolnictwa.` Input (translated by DeepL): `The myth of the Flood is ancient, dating back to the time when the glacier melted. As a result of this event in the history of the planet, the level of the seas and oceans rose by several tens of meters. The post-glacial flood with all, scientific certainty, took place, but was most likely missed by mankind. And here is another gloss on the matter. Its authors are American geophysicists.` ; `Two American geophysicists presented their scenario of the origin of the Flood myth. 7500 years ago, the waters of the Mediterranean Sea flooded into the Black Sea, which was still a lake, due to the melting of glaciers. Geophysicists claim that this made agriculture flourish because people had to migrate and spread their agricultural lifestyle. 
However, the scientific community argues that the Flood was only one factor in the expansion of agriculture.` Output: `1` (summary is similar) ## Data splits | Subset | Cardinality | | ----------- | ----------: | | train | 4302 | | val | 0 | | test | 1078 | ## Class distribution | Class | train | validation | test | |:------------|--------:|-------------:|-------:| | not similar | 0.705 | - | 0.696 | | similar | 0.295 | - | 0.304 | ## Citation ``` @inproceedings{ogro:kop:14:lrec, title={The {P}olish {S}ummaries {C}orpus}, author={Ogrodniczuk, Maciej and Kope{'c}, Mateusz}, booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014", year = "2014", } ``` ## License ``` Creative Commons Attribution ShareAlike 3.0 licence (CC-BY-SA 3.0) ``` ## Links [HuggingFace](https://huggingface.co/datasets/allegro/klej-psc) [Source](http://zil.ipipan.waw.pl/PolishSummariesCorpus) [Paper](https://aclanthology.org/L14-1145/) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("allegro/klej-psc") pprint(dataset['train'][100]) #{'extract_text': 'Nowe prawo energetyczne jest zagrożeniem dla małych ' # 'producentów energii ze źródeł odnawialnych. Sytuacja się ' # 'pogarsza wdobie urynkowienia energii. zniosło preferencje ' # 'wprowadzone dla energetyki wodnej. UE zamierza podwoić ' # 'udział takich źródeł energetyki jak woda, wiatr, słońce do ' # '2010 r.W Polsce 1-1,5 proc. zużycia energii wytwarza się ze ' # 'źródeł odnawialnych. W krajach Unii udział ten wynosi ' # 'średnio 5,6 proc.', # 'label': 1, # 'summary_text': 'W Polsce w niewielkim stopniu wykorzystuje się elektrownie ' # 'wodne oraz inne sposoby tworzenia energii ze źródeł ' # 'odnawialnych. Podczas gdy w innych krajach europejskich jest ' # 'to średnio 5,6 % w Polsce jest to 1-1,5 %. Powodem jest ' # 'niska opłacalność posiadania tego typu elektrowni-zakład ' # 'energetyczny płaci ok. 17 gr. 
za 1kWh, podczas gdy ' # 'wybudowanie takiej elektrowni kosztuje ok. 100 tyś. zł.'} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("allegro/klej-psc") dataset = dataset.class_encode_column("label") references = dataset["test"]["label"] # generate random predictions predictions = [random.randrange(max(references) + 1) for _ in range(len(references))] acc = load_metric("accuracy") f1 = load_metric("f1") acc_score = acc.compute(predictions=predictions, references=references) f1_score = f1.compute(predictions=predictions, references=references, average="macro") pprint(acc_score) pprint(f1_score) # {'accuracy': 0.18588469184890655} # {'f1': 0.17511412402843068} ```
6,458
[ [ -0.047821044921875, -0.03515625, 0.04302978515625, 0.02191162109375, -0.0161285400390625, -0.017730712890625, -0.01100921630859375, -0.03448486328125, 0.0537109375, 0.015045166015625, -0.0297698974609375, -0.046295166015625, -0.05426025390625, 0.014602661132...
animesh/autonlp-data-peptides
2021-10-12T09:08:03.000Z
[ "region:us" ]
animesh
null
null
0
91
2022-03-02T23:29:22
[Deep learning the collisional cross sections of the peptide universe from a million experimental values](https://www.nature.com/articles/s41467-021-21352-8) [Data](http://proteomecentral.proteomexchange.org/cgi/GetDataset?ID=PXD017703) generated from [MaxQuant](http://coxdocs.org/doku.php?id=maxquant:start) output ``` wget https://ftp.pride.ebi.ac.uk/pride/data/archive/2020/12/PXD017703/HeLa_200ng_Library_MaxQuant.zip unzip HeLa_200ng_Library_MaxQuant.zip awk -F '\t' '{print $1,",",$40}' evidence.txt > pepCCS.csv wc pepCCS.csv 352111 1056333 12736697 pepCCS.csv ``` [Code](https://github.com/mannlabs/DeepCollisionalCrossSection)
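The `awk` step above keeps only the 1st and 40th tab-separated columns of `evidence.txt`. A rough Python equivalent of that column extraction, shown on a synthetic stand-in TSV (the column names here are placeholders, not MaxQuant's real headers):

```python
import csv
import io

# Tiny stand-in for MaxQuant's tab-separated evidence.txt; the real file
# has far more columns, of which the shell pipeline keeps the 1st and 40th.
evidence_tsv = "colA\tcolB\tcolC\nAAAEK\tx\t123.4\nPEPTIDER\ty\t456.7\n"

def extract_columns(tsv_text, indices):
    """Keep only the given 1-based column indices from each TSV row,
    mirroring awk's `{print $1, ",", $40}` extraction."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return [[row[i - 1] for i in indices] for row in reader]

rows = extract_columns(evidence_tsv, [1, 3])
print(rows)  # header row first, then one row per peptide
```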
642
[ [ -0.025390625, -0.030731201171875, 0.0292205810546875, -0.0254669189453125, -0.01169586181640625, 0.0193328857421875, 0.012939453125, -0.02398681640625, 0.007045745849609375, -0.001735687255859375, -0.036651611328125, -0.0350341796875, -0.0438232421875, 0.020...
anuragshas/ha_opus100_processed
2022-02-03T22:29:36.000Z
[ "region:us" ]
anuragshas
null
null
0
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.014984130859375, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.04656982421875, 0.052520751953125, 0.00506591796875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060455322265625, 0.03793334...
anuragshas/hi_opus100_processed
2022-01-30T15:14:09.000Z
[ "region:us" ]
anuragshas
null
null
0
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.014984130859375, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.04656982421875, 0.052520751953125, 0.00506591796875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060455322265625, 0.03793334...
anuragshas/mr_cc100_processed
2022-02-06T14:46:02.000Z
[ "region:us" ]
anuragshas
null
null
0
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
anuragshas/mt_opus100_processed
2022-02-02T08:14:25.000Z
[ "region:us" ]
anuragshas
null
null
0
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
anuragshas/sk_opus100_processed
2022-02-05T08:22:48.000Z
[ "region:us" ]
anuragshas
null
null
0
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
anuragshas/sl_opus100_processed
2022-02-02T13:54:18.000Z
[ "region:us" ]
anuragshas
null
null
0
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
anzorq/kbd-ru-jsonl-tmp
2022-01-14T01:19:45.000Z
[ "region:us" ]
anzorq
null
null
0
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
davanstrien/embellishments-sample
2022-02-24T15:44:29.000Z
[ "region:us" ]
davanstrien
null
null
0
91
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
DFKI-SLT/cross_ner
2023-01-19T09:17:38.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|conll2003", "language:en", "cross domain", "ai", "news", "musi...
DFKI-SLT
CrossNER is a fully-labeled collection of named entity recognition (NER) data spanning five diverse domains (Politics, Natural Science, Music, Literature, and Artificial Intelligence) with specialized entity categories for different domains. Additionally, CrossNER also includes unlabeled domain-related corpora for the corresponding five domains. For details, see the paper: [CrossNER: Evaluating Cross-Domain Named Entity Recognition](https://arxiv.org/abs/2012.04373)
@article{liu2020crossner, title={CrossNER: Evaluating Cross-Domain Named Entity Recognition}, author={Zihan Liu and Yan Xu and Tiezheng Yu and Wenliang Dai and Ziwei Ji and Samuel Cahyawijaya and Andrea Madotto and Pascale Fung}, year={2020}, eprint={2012.04373}, archivePrefix={arXiv}, primaryClass={cs.CL} }
0
91
2023-01-19T09:17:08
--- annotations_creators: - expert-generated language: - en language_creators: - found license: [] multilinguality: - monolingual pretty_name: CrossNER is a cross-domain dataset for named entity recognition size_categories: - 10K<n<100K source_datasets: - extended|conll2003 tags: - cross domain - ai - news - music - literature - politics - science task_categories: - token-classification task_ids: - named-entity-recognition dataset_info: - config_name: ai features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-academicjournal '2': I-academicjournal '3': B-album '4': I-album '5': B-algorithm '6': I-algorithm '7': B-astronomicalobject '8': I-astronomicalobject '9': B-award '10': I-award '11': B-band '12': I-band '13': B-book '14': I-book '15': B-chemicalcompound '16': I-chemicalcompound '17': B-chemicalelement '18': I-chemicalelement '19': B-conference '20': I-conference '21': B-country '22': I-country '23': B-discipline '24': I-discipline '25': B-election '26': I-election '27': B-enzyme '28': I-enzyme '29': B-event '30': I-event '31': B-field '32': I-field '33': B-literarygenre '34': I-literarygenre '35': B-location '36': I-location '37': B-magazine '38': I-magazine '39': B-metrics '40': I-metrics '41': B-misc '42': I-misc '43': B-musicalartist '44': I-musicalartist '45': B-musicalinstrument '46': I-musicalinstrument '47': B-musicgenre '48': I-musicgenre '49': B-organisation '50': I-organisation '51': B-person '52': I-person '53': B-poem '54': I-poem '55': B-politicalparty '56': I-politicalparty '57': B-politician '58': I-politician '59': B-product '60': I-product '61': B-programlang '62': I-programlang '63': B-protein '64': I-protein '65': B-researcher '66': I-researcher '67': B-scientist '68': I-scientist '69': B-song '70': I-song '71': B-task '72': I-task '73': B-theory '74': I-theory '75': B-university '76': I-university '77': B-writer '78': I-writer splits: - name: train num_bytes: 65080 
num_examples: 100 - name: validation num_bytes: 189453 num_examples: 350 - name: test num_bytes: 225691 num_examples: 431 download_size: 289173 dataset_size: 480224 - config_name: literature features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-academicjournal '2': I-academicjournal '3': B-album '4': I-album '5': B-algorithm '6': I-algorithm '7': B-astronomicalobject '8': I-astronomicalobject '9': B-award '10': I-award '11': B-band '12': I-band '13': B-book '14': I-book '15': B-chemicalcompound '16': I-chemicalcompound '17': B-chemicalelement '18': I-chemicalelement '19': B-conference '20': I-conference '21': B-country '22': I-country '23': B-discipline '24': I-discipline '25': B-election '26': I-election '27': B-enzyme '28': I-enzyme '29': B-event '30': I-event '31': B-field '32': I-field '33': B-literarygenre '34': I-literarygenre '35': B-location '36': I-location '37': B-magazine '38': I-magazine '39': B-metrics '40': I-metrics '41': B-misc '42': I-misc '43': B-musicalartist '44': I-musicalartist '45': B-musicalinstrument '46': I-musicalinstrument '47': B-musicgenre '48': I-musicgenre '49': B-organisation '50': I-organisation '51': B-person '52': I-person '53': B-poem '54': I-poem '55': B-politicalparty '56': I-politicalparty '57': B-politician '58': I-politician '59': B-product '60': I-product '61': B-programlang '62': I-programlang '63': B-protein '64': I-protein '65': B-researcher '66': I-researcher '67': B-scientist '68': I-scientist '69': B-song '70': I-song '71': B-task '72': I-task '73': B-theory '74': I-theory '75': B-university '76': I-university '77': B-writer '78': I-writer splits: - name: train num_bytes: 63181 num_examples: 100 - name: validation num_bytes: 244076 num_examples: 400 - name: test num_bytes: 270092 num_examples: 416 download_size: 334380 dataset_size: 577349 - config_name: music features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags 
sequence: class_label: names: '0': O '1': B-academicjournal '2': I-academicjournal '3': B-album '4': I-album '5': B-algorithm '6': I-algorithm '7': B-astronomicalobject '8': I-astronomicalobject '9': B-award '10': I-award '11': B-band '12': I-band '13': B-book '14': I-book '15': B-chemicalcompound '16': I-chemicalcompound '17': B-chemicalelement '18': I-chemicalelement '19': B-conference '20': I-conference '21': B-country '22': I-country '23': B-discipline '24': I-discipline '25': B-election '26': I-election '27': B-enzyme '28': I-enzyme '29': B-event '30': I-event '31': B-field '32': I-field '33': B-literarygenre '34': I-literarygenre '35': B-location '36': I-location '37': B-magazine '38': I-magazine '39': B-metrics '40': I-metrics '41': B-misc '42': I-misc '43': B-musicalartist '44': I-musicalartist '45': B-musicalinstrument '46': I-musicalinstrument '47': B-musicgenre '48': I-musicgenre '49': B-organisation '50': I-organisation '51': B-person '52': I-person '53': B-poem '54': I-poem '55': B-politicalparty '56': I-politicalparty '57': B-politician '58': I-politician '59': B-product '60': I-product '61': B-programlang '62': I-programlang '63': B-protein '64': I-protein '65': B-researcher '66': I-researcher '67': B-scientist '68': I-scientist '69': B-song '70': I-song '71': B-task '72': I-task '73': B-theory '74': I-theory '75': B-university '76': I-university '77': B-writer '78': I-writer splits: - name: train num_bytes: 65077 num_examples: 100 - name: validation num_bytes: 259702 num_examples: 380 - name: test num_bytes: 327195 num_examples: 465 download_size: 414065 dataset_size: 651974 - config_name: conll2003 features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-academicjournal '2': I-academicjournal '3': B-album '4': I-album '5': B-algorithm '6': I-algorithm '7': B-astronomicalobject '8': I-astronomicalobject '9': B-award '10': I-award '11': B-band '12': I-band '13': B-book '14': 
I-book '15': B-chemicalcompound '16': I-chemicalcompound '17': B-chemicalelement '18': I-chemicalelement '19': B-conference '20': I-conference '21': B-country '22': I-country '23': B-discipline '24': I-discipline '25': B-election '26': I-election '27': B-enzyme '28': I-enzyme '29': B-event '30': I-event '31': B-field '32': I-field '33': B-literarygenre '34': I-literarygenre '35': B-location '36': I-location '37': B-magazine '38': I-magazine '39': B-metrics '40': I-metrics '41': B-misc '42': I-misc '43': B-musicalartist '44': I-musicalartist '45': B-musicalinstrument '46': I-musicalinstrument '47': B-musicgenre '48': I-musicgenre '49': B-organisation '50': I-organisation '51': B-person '52': I-person '53': B-poem '54': I-poem '55': B-politicalparty '56': I-politicalparty '57': B-politician '58': I-politician '59': B-product '60': I-product '61': B-programlang '62': I-programlang '63': B-protein '64': I-protein '65': B-researcher '66': I-researcher '67': B-scientist '68': I-scientist '69': B-song '70': I-song '71': B-task '72': I-task '73': B-theory '74': I-theory '75': B-university '76': I-university '77': B-writer '78': I-writer splits: - name: train num_bytes: 3561081 num_examples: 14041 - name: validation num_bytes: 891431 num_examples: 3250 - name: test num_bytes: 811470 num_examples: 3453 download_size: 2694794 dataset_size: 5263982 - config_name: politics features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-academicjournal '2': I-academicjournal '3': B-album '4': I-album '5': B-algorithm '6': I-algorithm '7': B-astronomicalobject '8': I-astronomicalobject '9': B-award '10': I-award '11': B-band '12': I-band '13': B-book '14': I-book '15': B-chemicalcompound '16': I-chemicalcompound '17': B-chemicalelement '18': I-chemicalelement '19': B-conference '20': I-conference '21': B-country '22': I-country '23': B-discipline '24': I-discipline '25': B-election '26': I-election '27': B-enzyme 
'28': I-enzyme '29': B-event '30': I-event '31': B-field '32': I-field '33': B-literarygenre '34': I-literarygenre '35': B-location '36': I-location '37': B-magazine '38': I-magazine '39': B-metrics '40': I-metrics '41': B-misc '42': I-misc '43': B-musicalartist '44': I-musicalartist '45': B-musicalinstrument '46': I-musicalinstrument '47': B-musicgenre '48': I-musicgenre '49': B-organisation '50': I-organisation '51': B-person '52': I-person '53': B-poem '54': I-poem '55': B-politicalparty '56': I-politicalparty '57': B-politician '58': I-politician '59': B-product '60': I-product '61': B-programlang '62': I-programlang '63': B-protein '64': I-protein '65': B-researcher '66': I-researcher '67': B-scientist '68': I-scientist '69': B-song '70': I-song '71': B-task '72': I-task '73': B-theory '74': I-theory '75': B-university '76': I-university '77': B-writer '78': I-writer splits: - name: train num_bytes: 143507 num_examples: 200 - name: validation num_bytes: 422760 num_examples: 541 - name: test num_bytes: 472690 num_examples: 651 download_size: 724168 dataset_size: 1038957 - config_name: science features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-academicjournal '2': I-academicjournal '3': B-album '4': I-album '5': B-algorithm '6': I-algorithm '7': B-astronomicalobject '8': I-astronomicalobject '9': B-award '10': I-award '11': B-band '12': I-band '13': B-book '14': I-book '15': B-chemicalcompound '16': I-chemicalcompound '17': B-chemicalelement '18': I-chemicalelement '19': B-conference '20': I-conference '21': B-country '22': I-country '23': B-discipline '24': I-discipline '25': B-election '26': I-election '27': B-enzyme '28': I-enzyme '29': B-event '30': I-event '31': B-field '32': I-field '33': B-literarygenre '34': I-literarygenre '35': B-location '36': I-location '37': B-magazine '38': I-magazine '39': B-metrics '40': I-metrics '41': B-misc '42': I-misc '43': B-musicalartist '44': 
I-musicalartist '45': B-musicalinstrument '46': I-musicalinstrument '47': B-musicgenre '48': I-musicgenre '49': B-organisation '50': I-organisation '51': B-person '52': I-person '53': B-poem '54': I-poem '55': B-politicalparty '56': I-politicalparty '57': B-politician '58': I-politician '59': B-product '60': I-product '61': B-programlang '62': I-programlang '63': B-protein '64': I-protein '65': B-researcher '66': I-researcher '67': B-scientist '68': I-scientist '69': B-song '70': I-song '71': B-task '72': I-task '73': B-theory '74': I-theory '75': B-university '76': I-university '77': B-writer '78': I-writer splits: - name: train num_bytes: 121928 num_examples: 200 - name: validation num_bytes: 276118 num_examples: 450 - name: test num_bytes: 334181 num_examples: 543 download_size: 485191 dataset_size: 732227 --- # Dataset Card for CrossNER ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [CrossNER](https://github.com/zliucr/CrossNER)
- **Paper:** [CrossNER: Evaluating Cross-Domain Named Entity Recognition](https://arxiv.org/abs/2012.04373) ### Dataset Summary CrossNER is a fully-labeled collection of named entity recognition (NER) data spanning five diverse domains (Politics, Natural Science, Music, Literature, and Artificial Intelligence) with specialized entity categories for different domains. Additionally, CrossNER includes unlabeled domain-related corpora for the corresponding five domains. For details, see the paper: [CrossNER: Evaluating Cross-Domain Named Entity Recognition](https://arxiv.org/abs/2012.04373) ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The language data in CrossNER is in English (BCP-47 en). ## Dataset Structure ### Data Instances #### conll2003 - **Size of downloaded dataset files:** 2.69 MB - **Size of the generated dataset:** 5.26 MB An example of 'train' looks as follows: ```json { "id": "0", "tokens": ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."], "ner_tags": [49, 0, 41, 0, 0, 0, 41, 0, 0] } ``` #### politics - **Size of downloaded dataset files:** 0.72 MB - **Size of the generated dataset:** 1.04 MB An example of 'train' looks as follows: ```json { "id": "0", "tokens": ["Parties", "with", "mainly", "Eurosceptic", "views", "are", "the", "ruling", "United", "Russia", ",", "and", "opposition", "parties", "the", "Communist", "Party", "of", "the", "Russian", "Federation", "and", "Liberal", "Democratic", "Party", "of", "Russia", "."], "ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 55, 56, 0, 0, 0, 0, 0, 55, 56, 56, 56, 56, 56, 0, 55, 56, 56, 56, 56, 0] } ``` #### science - **Size of downloaded dataset files:** 0.49 MB - **Size of the generated dataset:** 0.73 MB An example of 'train' looks as follows: ```json { "id": "0", "tokens": ["They", "may", "also", "use", "Adenosine", "triphosphate", ",",
"Nitric", "oxide", ",", "and", "ROS", "for", "signaling", "in", "the", "same", "ways", "that", "animals", "do", "."], "ner_tags": [0, 0, 0, 0, 15, 16, 0, 15, 16, 0, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ``` #### music - **Size of downloaded dataset files:** 0.41 MB - **Size of the generated dataset:** 0.65 MB An example of 'train' looks as follows: ```json { "id": "0", "tokens": ["In", "2003", ",", "the", "Stade", "de", "France", "was", "the", "primary", "site", "of", "the", "2003", "World", "Championships", "in", "Athletics", "."], "ner_tags": [0, 0, 0, 0, 35, 36, 36, 0, 0, 0, 0, 0, 0, 29, 30, 30, 30, 30, 0] } ``` #### literature - **Size of downloaded dataset files:** 0.33 MB - **Size of the generated dataset:** 0.58 MB An example of 'train' looks as follows: ```json { "id": "0", "tokens": ["In", "1351", ",", "during", "the", "reign", "of", "Emperor", "Toghon", "Temür", "of", "the", "Yuan", "dynasty", ",", "93rd-generation", "descendant", "Kong", "Huan", "(", "孔浣", ")", "'", "s", "2nd", "son", "Kong", "Shao", "(", "孔昭", ")", "moved", "from", "China", "to", "Korea", "during", "the", "Goryeo", ",", "and", "was", "received", "courteously", "by", "Princess", "Noguk", "(", "the", "Mongolian-born", "wife", "of", "the", "future", "king", "Gongmin", ")", "."], "ner_tags": [0, 0, 0, 0, 0, 0, 0, 51, 52, 52, 0, 0, 21, 22, 0, 0, 0, 77, 78, 0, 77, 0, 0, 0, 0, 0, 77, 78, 0, 77, 0, 0, 0, 21, 0, 21, 0, 0, 41, 0, 0, 0, 0, 0, 0, 51, 52, 0, 0, 41, 0, 0, 0, 0, 0, 51, 0, 0] } ``` #### ai - **Size of downloaded dataset files:** 0.29 MB - **Size of the generated dataset:** 0.48 MB An example of 'train' looks as follows: ```json { "id": "0", "tokens": ["Popular", "approaches", "of", "opinion-based", "recommender", "system", "utilize", "various", "techniques", "including", "text", "mining", ",", "information", "retrieval", ",", "sentiment", "analysis", "(", "see", "also", "Multimodal", "sentiment", "analysis", ")", "and", "deep", "learning", "X.Y.", "Feng", ",", "H.", "Zhang", ",", 
"Y.J.", "Ren", ",", "P.H.", "Shang", ",", "Y.", "Zhu", ",", "Y.C.", "Liang", ",", "R.C.", "Guan", ",", "D.", "Xu", ",", "(", "2019", ")", ",", ",", "21", "(", "5", ")", ":", "e12957", "."], "ner_tags": [0, 0, 0, 59, 60, 60, 0, 0, 0, 0, 31, 32, 0, 71, 72, 0, 71, 72, 0, 0, 0, 71, 72, 72, 0, 0, 31, 32, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 65, 66, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ``` ### Data Fields The data fields are the same among all splits. - `id`: the instance id of this sentence, a `string` feature. - `tokens`: the list of tokens of this sentence, a `list` of `string` features. - `ner_tags`: the list of entity tags, a `list` of classification labels. ```json {"O": 0, "B-academicjournal": 1, "I-academicjournal": 2, "B-album": 3, "I-album": 4, "B-algorithm": 5, "I-algorithm": 6, "B-astronomicalobject": 7, "I-astronomicalobject": 8, "B-award": 9, "I-award": 10, "B-band": 11, "I-band": 12, "B-book": 13, "I-book": 14, "B-chemicalcompound": 15, "I-chemicalcompound": 16, "B-chemicalelement": 17, "I-chemicalelement": 18, "B-conference": 19, "I-conference": 20, "B-country": 21, "I-country": 22, "B-discipline": 23, "I-discipline": 24, "B-election": 25, "I-election": 26, "B-enzyme": 27, "I-enzyme": 28, "B-event": 29, "I-event": 30, "B-field": 31, "I-field": 32, "B-literarygenre": 33, "I-literarygenre": 34, "B-location": 35, "I-location": 36, "B-magazine": 37, "I-magazine": 38, "B-metrics": 39, "I-metrics": 40, "B-misc": 41, "I-misc": 42, "B-musicalartist": 43, "I-musicalartist": 44, "B-musicalinstrument": 45, "I-musicalinstrument": 46, "B-musicgenre": 47, "I-musicgenre": 48, "B-organisation": 49, "I-organisation": 50, "B-person": 51, "I-person": 52, "B-poem": 53, "I-poem": 54, "B-politicalparty": 55, "I-politicalparty": 56, "B-politician": 57, "I-politician": 58, "B-product": 59, "I-product": 60, "B-programlang": 61, "I-programlang": 62, "B-protein": 63, "I-protein": 64, "B-researcher": 65, "I-researcher": 66, "B-scientist": 
67, "I-scientist": 68, "B-song": 69, "I-song": 70, "B-task": 71, "I-task": 72, "B-theory": 73, "I-theory": 74, "B-university": 75, "I-university": 76, "B-writer": 77, "I-writer": 78} ``` ### Data Splits | | Train | Dev | Test | |--------------|--------|-------|-------| | conll2003 | 14,987 | 3,466 | 3,684 | | politics | 200 | 541 | 651 | | science | 200 | 450 | 543 | | music | 100 | 380 | 456 | | literature | 100 | 400 | 416 | | ai | 100 | 350 | 431 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{liu2020crossner, title={CrossNER: Evaluating Cross-Domain Named Entity Recognition}, author={Zihan Liu and Yan Xu and Tiezheng Yu and Wenliang Dai and Ziwei Ji and Samuel Cahyawijaya and Andrea Madotto and Pascale Fung}, year={2020}, eprint={2012.04373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
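The `ner_tags` feature stores class indices into the 79-entry label list shown in the Data Fields section. A minimal sketch of decoding them back into BIO label strings (not part of the original card; only the labels needed for the `politics` example above are listed here):

```python
# Decode integer `ner_tags` into BIO label strings.
# Subset of the full 79-entry id-to-label mapping from the Data Fields
# section; a real script would use the complete table.
id2label = {0: "O", 55: "B-politicalparty", 56: "I-politicalparty"}

# Slice of the `politics` train example shown above.
tokens = ["United", "Russia"]
ner_tags = [55, 56]

decoded = [id2label[t] for t in ner_tags]
print(list(zip(tokens, decoded)))
# → [('United', 'B-politicalparty'), ('Russia', 'I-politicalparty')]
```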
26,199
[ [ -0.050567626953125, -0.0269775390625, 0.01456451416015625, 0.0031375885009765625, -0.01031494140625, 0.01236724853515625, -0.018310546875, -0.027069091796875, 0.04815673828125, 0.024993896484375, -0.053955078125, -0.0667724609375, -0.04931640625, 0.017654418...
0-hero/OIG-small-chip2
2023-03-16T20:10:19.000Z
[ "task_categories:conversational", "task_categories:text2text-generation", "language:en", "region:us" ]
0-hero
null
null
6
91
2023-03-16T17:59:26
--- dataset_info: features: - name: user dtype: string - name: chip2 dtype: string splits: - name: train num_bytes: 82154419 num_examples: 210289 download_size: 51736759 dataset_size: 82154419 task_categories: - conversational - text2text-generation language: - en --- # Dataset Card for "OIG-small-chip2" OIG-small-chip2 dataset from https://laion.ai/blog/oig-dataset/ <br> Original Dataset - https://github.com/LAION-AI/Open-Instruction-Generalist
480
[ [ -0.0137939453125, -0.022064208984375, 0.0156707763671875, -0.0283660888671875, -0.021453857421875, -0.0139007568359375, 0.0081787109375, -0.00200653076171875, 0.039093017578125, 0.042388916015625, -0.0447998046875, -0.0321044921875, -0.01448822021484375, -0....
zhengyun21/PMC-Patients
2023-07-13T06:11:59.000Z
[ "size_categories:100K<n<1M", "language:en", "license:cc-by-nc-sa-4.0", "patient summary", "medical", "biology", "arxiv:2202.13876", "region:us" ]
zhengyun21
null
null
60
91
2023-04-25T13:20:16
--- license: cc-by-nc-sa-4.0 language: - en tags: - patient summary - medical - biology size_categories: - 100K<n<1M --- # Dataset Card for PMC-Patients ## Dataset Description - **Homepage:** https://github.com/pmc-patients/pmc-patients - **Repository:** https://github.com/pmc-patients/pmc-patients - **Paper:** https://arxiv.org/pdf/2202.13876.pdf - **Leaderboard:** https://pmc-patients.github.io/ - **Point of Contact:** zhengyun21@mails.tsinghua.edu.cn ### Dataset Summary **PMC-Patients** is a first-of-its-kind dataset consisting of 167k patient summaries extracted from case reports in PubMed Central (PMC), with 3.1M patient-article relevance and 293k patient-patient similarity annotations defined by the PubMed citation graph. ### Supported Tasks and Leaderboards **This is purely the patient summary dataset with relational annotations. For the ReCDS benchmark, refer to [this dataset](https://huggingface.co/datasets/zhengyun21/PMC-Patients-ReCDS)** Based on PMC-Patients, we define two tasks to benchmark Retrieval-based Clinical Decision Support (ReCDS) systems: Patient-to-Article Retrieval (PAR) and Patient-to-Patient Retrieval (PPR). For details, please refer to [our paper](https://arxiv.org/pdf/2202.13876.pdf) and [leaderboard](https://pmc-patients.github.io/). ### Languages English (en). ## Dataset Structure ### PMC-Patients_full.json This file contains all information about patient summaries in PMC-Patients; it is a list of dicts with the following keys: - `patient_id`: string. A continuous id of patients, starting from 0. - `patient_uid`: string. Unique ID for each patient, with format PMID-x, where PMID is the PubMed Identifier of the source article of the patient and x denotes the index of the patient in the source article. - `PMID`: string. PMID of the source article. - `file_path`: string. File path of the XML file of the source article. - `title`: string. Source article title. - `patient`: string. Patient summary. - `age`: list of tuples.
Each entry is in the format `(value, unit)`, where value is a float and unit is one of 'year', 'month', 'week', 'day', or 'hour', indicating the age unit. For example, `[[1.0, 'year'], [2.0, 'month']]` indicates the patient is a one-year- and two-month-old infant. - `gender`: 'M' or 'F'. Male or Female. - `similar_patients`: list of string. `patient_uid` of the similar patients. - `relevant_articles`: list of string. `PMID` of the relevant articles. ## Dataset Creation If you are interested in the collection of PMC-Patients and reproducing our baselines, please refer to [this repository](https://github.com/zhao-zy15/PMC-Patients). ### Citation Information If you find PMC-Patients helpful in your research, please cite our work by: ``` @misc{zhao2023pmcpatients, title={PMC-Patients: A Large-scale Dataset of Patient Summaries and Relations for Benchmarking Retrieval-based Clinical Decision Support Systems}, author={Zhengyun Zhao and Qiao Jin and Fangyuan Chen and Tuorui Peng and Sheng Yu}, year={2023}, eprint={2202.13876}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
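Since `age` is a list of `(value, unit)` pairs, combining the entries into a single number takes a small conversion step. A minimal sketch; the helper function and the unit-to-year factors are illustrative assumptions, not part of the dataset:

```python
# Convert a PMC-Patients `age` field (a list of [value, unit] pairs)
# into a single age expressed in years.
# NOTE: the conversion factors below are illustrative assumptions.
UNIT_IN_YEARS = {
    "year": 1.0,
    "month": 1 / 12,
    "week": 7 / 365,
    "day": 1 / 365,
    "hour": 1 / (365 * 24),
}

def age_in_years(age):
    return sum(value * UNIT_IN_YEARS[unit] for value, unit in age)

# The card's example: a one-year- and two-month-old infant.
print(round(age_in_years([[1.0, "year"], [2.0, "month"]]), 2))
# → 1.17
```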
3,077
[ [ -0.00714874267578125, -0.0111846923828125, 0.04669189453125, 0.02197265625, -0.026611328125, -0.01073455810546875, 0.0026702880859375, -0.004581451416015625, 0.0175933837890625, 0.0323486328125, -0.032470703125, -0.06304931640625, -0.033233642578125, 0.03280...
sukantan/nyaya-acts
2023-06-08T06:56:32.000Z
[ "region:us" ]
sukantan
null
null
0
91
2023-06-04T11:58:07
--- dataset_info: features: - name: section_href dtype: string - name: act_enactment_date dtype: timestamp[s] - name: act_short_title dtype: string - name: act_long_title dtype: string - name: act_id dtype: string - name: ministry dtype: string - name: section_number dtype: string - name: section_title dtype: string - name: section_content dtype: string - name: section_part_no dtype: string - name: section_part dtype: string splits: - name: train num_bytes: 648190303 num_examples: 53788 download_size: 35939204 dataset_size: 648190303 --- # Dataset Card for "nyaya-acts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
792
[ [ -0.0225830078125, -0.0248870849609375, 0.00005263090133666992, 0.010223388671875, -0.01448822021484375, -0.01172637939453125, 0.033294677734375, -0.0185546875, 0.08026123046875, 0.035369873046875, -0.062164306640625, -0.0496826171875, -0.043243408203125, -0....
sukantan/nyaya-ae-all-mpnet-base-v2-ftlegal-v3
2023-06-25T12:23:15.000Z
[ "region:us" ]
sukantan
null
null
0
91
2023-06-25T10:20:41
--- dataset_info: features: - name: '0' dtype: float32 - name: '1' dtype: float32 - name: '2' dtype: float32 - name: '3' dtype: float32 - name: '4' dtype: float32 - name: '5' dtype: float32 - name: '6' dtype: float32 - name: '7' dtype: float32 - name: '8' dtype: float32 - name: '9' dtype: float32 - name: '10' dtype: float32 - name: '11' dtype: float32 - name: '12' dtype: float32 - name: '13' dtype: float32 - name: '14' dtype: float32 - name: '15' dtype: float32 - name: '16' dtype: float32 - name: '17' dtype: float32 - name: '18' dtype: float32 - name: '19' dtype: float32 - name: '20' dtype: float32 - name: '21' dtype: float32 - name: '22' dtype: float32 - name: '23' dtype: float32 - name: '24' dtype: float32 - name: '25' dtype: float32 - name: '26' dtype: float32 - name: '27' dtype: float32 - name: '28' dtype: float32 - name: '29' dtype: float32 - name: '30' dtype: float32 - name: '31' dtype: float32 - name: '32' dtype: float32 - name: '33' dtype: float32 - name: '34' dtype: float32 - name: '35' dtype: float32 - name: '36' dtype: float32 - name: '37' dtype: float32 - name: '38' dtype: float32 - name: '39' dtype: float32 - name: '40' dtype: float32 - name: '41' dtype: float32 - name: '42' dtype: float32 - name: '43' dtype: float32 - name: '44' dtype: float32 - name: '45' dtype: float32 - name: '46' dtype: float32 - name: '47' dtype: float32 - name: '48' dtype: float32 - name: '49' dtype: float32 - name: '50' dtype: float32 - name: '51' dtype: float32 - name: '52' dtype: float32 - name: '53' dtype: float32 - name: '54' dtype: float32 - name: '55' dtype: float32 - name: '56' dtype: float32 - name: '57' dtype: float32 - name: '58' dtype: float32 - name: '59' dtype: float32 - name: '60' dtype: float32 - name: '61' dtype: float32 - name: '62' dtype: float32 - name: '63' dtype: float32 - name: '64' dtype: float32 - name: '65' dtype: float32 - name: '66' dtype: float32 - name: '67' dtype: float32 - name: '68' dtype: float32 - name: '69' dtype: float32 - name: '70' dtype: 
float32 - name: '71' dtype: float32 - name: '72' dtype: float32 - name: '73' dtype: float32 - name: '74' dtype: float32 - name: '75' dtype: float32 - name: '76' dtype: float32 - name: '77' dtype: float32 - name: '78' dtype: float32 - name: '79' dtype: float32 - name: '80' dtype: float32 - name: '81' dtype: float32 - name: '82' dtype: float32 - name: '83' dtype: float32 - name: '84' dtype: float32 - name: '85' dtype: float32 - name: '86' dtype: float32 - name: '87' dtype: float32 - name: '88' dtype: float32 - name: '89' dtype: float32 - name: '90' dtype: float32 - name: '91' dtype: float32 - name: '92' dtype: float32 - name: '93' dtype: float32 - name: '94' dtype: float32 - name: '95' dtype: float32 - name: '96' dtype: float32 - name: '97' dtype: float32 - name: '98' dtype: float32 - name: '99' dtype: float32 - name: '100' dtype: float32 - name: '101' dtype: float32 - name: '102' dtype: float32 - name: '103' dtype: float32 - name: '104' dtype: float32 - name: '105' dtype: float32 - name: '106' dtype: float32 - name: '107' dtype: float32 - name: '108' dtype: float32 - name: '109' dtype: float32 - name: '110' dtype: float32 - name: '111' dtype: float32 - name: '112' dtype: float32 - name: '113' dtype: float32 - name: '114' dtype: float32 - name: '115' dtype: float32 - name: '116' dtype: float32 - name: '117' dtype: float32 - name: '118' dtype: float32 - name: '119' dtype: float32 - name: '120' dtype: float32 - name: '121' dtype: float32 - name: '122' dtype: float32 - name: '123' dtype: float32 - name: '124' dtype: float32 - name: '125' dtype: float32 - name: '126' dtype: float32 - name: '127' dtype: float32 - name: '128' dtype: float32 - name: '129' dtype: float32 - name: '130' dtype: float32 - name: '131' dtype: float32 - name: '132' dtype: float32 - name: '133' dtype: float32 - name: '134' dtype: float32 - name: '135' dtype: float32 - name: '136' dtype: float32 - name: '137' dtype: float32 - name: '138' dtype: float32 - name: '139' dtype: float32 - name: '140' 
dtype: float32 - name: '141' dtype: float32 - name: '142' dtype: float32 - name: '143' dtype: float32 - name: '144' dtype: float32 - name: '145' dtype: float32 - name: '146' dtype: float32 - name: '147' dtype: float32 - name: '148' dtype: float32 - name: '149' dtype: float32 - name: '150' dtype: float32 - name: '151' dtype: float32 - name: '152' dtype: float32 - name: '153' dtype: float32 - name: '154' dtype: float32 - name: '155' dtype: float32 - name: '156' dtype: float32 - name: '157' dtype: float32 - name: '158' dtype: float32 - name: '159' dtype: float32 - name: '160' dtype: float32 - name: '161' dtype: float32 - name: '162' dtype: float32 - name: '163' dtype: float32 - name: '164' dtype: float32 - name: '165' dtype: float32 - name: '166' dtype: float32 - name: '167' dtype: float32 - name: '168' dtype: float32 - name: '169' dtype: float32 - name: '170' dtype: float32 - name: '171' dtype: float32 - name: '172' dtype: float32 - name: '173' dtype: float32 - name: '174' dtype: float32 - name: '175' dtype: float32 - name: '176' dtype: float32 - name: '177' dtype: float32 - name: '178' dtype: float32 - name: '179' dtype: float32 - name: '180' dtype: float32 - name: '181' dtype: float32 - name: '182' dtype: float32 - name: '183' dtype: float32 - name: '184' dtype: float32 - name: '185' dtype: float32 - name: '186' dtype: float32 - name: '187' dtype: float32 - name: '188' dtype: float32 - name: '189' dtype: float32 - name: '190' dtype: float32 - name: '191' dtype: float32 - name: '192' dtype: float32 - name: '193' dtype: float32 - name: '194' dtype: float32 - name: '195' dtype: float32 - name: '196' dtype: float32 - name: '197' dtype: float32 - name: '198' dtype: float32 - name: '199' dtype: float32 - name: '200' dtype: float32 - name: '201' dtype: float32 - name: '202' dtype: float32 - name: '203' dtype: float32 - name: '204' dtype: float32 - name: '205' dtype: float32 - name: '206' dtype: float32 - name: '207' dtype: float32 - name: '208' dtype: float32 - name: 
'209' dtype: float32 - name: '210' dtype: float32 - name: '211' dtype: float32 - name: '212' dtype: float32 - name: '213' dtype: float32 - name: '214' dtype: float32 - name: '215' dtype: float32 - name: '216' dtype: float32 - name: '217' dtype: float32 - name: '218' dtype: float32 - name: '219' dtype: float32 - name: '220' dtype: float32 - name: '221' dtype: float32 - name: '222' dtype: float32 - name: '223' dtype: float32 - name: '224' dtype: float32 - name: '225' dtype: float32 - name: '226' dtype: float32 - name: '227' dtype: float32 - name: '228' dtype: float32 - name: '229' dtype: float32 - name: '230' dtype: float32 - name: '231' dtype: float32 - name: '232' dtype: float32 - name: '233' dtype: float32 - name: '234' dtype: float32 - name: '235' dtype: float32 - name: '236' dtype: float32 - name: '237' dtype: float32 - name: '238' dtype: float32 - name: '239' dtype: float32 - name: '240' dtype: float32 - name: '241' dtype: float32 - name: '242' dtype: float32 - name: '243' dtype: float32 - name: '244' dtype: float32 - name: '245' dtype: float32 - name: '246' dtype: float32 - name: '247' dtype: float32 - name: '248' dtype: float32 - name: '249' dtype: float32 - name: '250' dtype: float32 - name: '251' dtype: float32 - name: '252' dtype: float32 - name: '253' dtype: float32 - name: '254' dtype: float32 - name: '255' dtype: float32 - name: '256' dtype: float32 - name: '257' dtype: float32 - name: '258' dtype: float32 - name: '259' dtype: float32 - name: '260' dtype: float32 - name: '261' dtype: float32 - name: '262' dtype: float32 - name: '263' dtype: float32 - name: '264' dtype: float32 - name: '265' dtype: float32 - name: '266' dtype: float32 - name: '267' dtype: float32 - name: '268' dtype: float32 - name: '269' dtype: float32 - name: '270' dtype: float32 - name: '271' dtype: float32 - name: '272' dtype: float32 - name: '273' dtype: float32 - name: '274' dtype: float32 - name: '275' dtype: float32 - name: '276' dtype: float32 - name: '277' dtype: float32 - 
name: '278' dtype: float32 - name: '279' dtype: float32 - name: '280' dtype: float32 - name: '281' dtype: float32 - name: '282' dtype: float32 - name: '283' dtype: float32 - name: '284' dtype: float32 - name: '285' dtype: float32 - name: '286' dtype: float32 - name: '287' dtype: float32 - name: '288' dtype: float32 - name: '289' dtype: float32 - name: '290' dtype: float32 - name: '291' dtype: float32 - name: '292' dtype: float32 - name: '293' dtype: float32 - name: '294' dtype: float32 - name: '295' dtype: float32 - name: '296' dtype: float32 - name: '297' dtype: float32 - name: '298' dtype: float32 - name: '299' dtype: float32 - name: '300' dtype: float32 - name: '301' dtype: float32 - name: '302' dtype: float32 - name: '303' dtype: float32 - name: '304' dtype: float32 - name: '305' dtype: float32 - name: '306' dtype: float32 - name: '307' dtype: float32 - name: '308' dtype: float32 - name: '309' dtype: float32 - name: '310' dtype: float32 - name: '311' dtype: float32 - name: '312' dtype: float32 - name: '313' dtype: float32 - name: '314' dtype: float32 - name: '315' dtype: float32 - name: '316' dtype: float32 - name: '317' dtype: float32 - name: '318' dtype: float32 - name: '319' dtype: float32 - name: '320' dtype: float32 - name: '321' dtype: float32 - name: '322' dtype: float32 - name: '323' dtype: float32 - name: '324' dtype: float32 - name: '325' dtype: float32 - name: '326' dtype: float32 - name: '327' dtype: float32 - name: '328' dtype: float32 - name: '329' dtype: float32 - name: '330' dtype: float32 - name: '331' dtype: float32 - name: '332' dtype: float32 - name: '333' dtype: float32 - name: '334' dtype: float32 - name: '335' dtype: float32 - name: '336' dtype: float32 - name: '337' dtype: float32 - name: '338' dtype: float32 - name: '339' dtype: float32 - name: '340' dtype: float32 - name: '341' dtype: float32 - name: '342' dtype: float32 - name: '343' dtype: float32 - name: '344' dtype: float32 - name: '345' dtype: float32 - name: '346' dtype: float32 
- name: '347' dtype: float32 - name: '348' dtype: float32 - name: '349' dtype: float32 - name: '350' dtype: float32 - name: '351' dtype: float32 - name: '352' dtype: float32 - name: '353' dtype: float32 - name: '354' dtype: float32 - name: '355' dtype: float32 - name: '356' dtype: float32 - name: '357' dtype: float32 - name: '358' dtype: float32 - name: '359' dtype: float32 - name: '360' dtype: float32 - name: '361' dtype: float32 - name: '362' dtype: float32 - name: '363' dtype: float32 - name: '364' dtype: float32 - name: '365' dtype: float32 - name: '366' dtype: float32 - name: '367' dtype: float32 - name: '368' dtype: float32 - name: '369' dtype: float32 - name: '370' dtype: float32 - name: '371' dtype: float32 - name: '372' dtype: float32 - name: '373' dtype: float32 - name: '374' dtype: float32 - name: '375' dtype: float32 - name: '376' dtype: float32 - name: '377' dtype: float32 - name: '378' dtype: float32 - name: '379' dtype: float32 - name: '380' dtype: float32 - name: '381' dtype: float32 - name: '382' dtype: float32 - name: '383' dtype: float32 - name: '384' dtype: float32 - name: '385' dtype: float32 - name: '386' dtype: float32 - name: '387' dtype: float32 - name: '388' dtype: float32 - name: '389' dtype: float32 - name: '390' dtype: float32 - name: '391' dtype: float32 - name: '392' dtype: float32 - name: '393' dtype: float32 - name: '394' dtype: float32 - name: '395' dtype: float32 - name: '396' dtype: float32 - name: '397' dtype: float32 - name: '398' dtype: float32 - name: '399' dtype: float32 - name: '400' dtype: float32 - name: '401' dtype: float32 - name: '402' dtype: float32 - name: '403' dtype: float32 - name: '404' dtype: float32 - name: '405' dtype: float32 - name: '406' dtype: float32 - name: '407' dtype: float32 - name: '408' dtype: float32 - name: '409' dtype: float32 - name: '410' dtype: float32 - name: '411' dtype: float32 - name: '412' dtype: float32 - name: '413' dtype: float32 - name: '414' dtype: float32 - name: '415' dtype: 
float32 - name: '416' dtype: float32 - name: '417' dtype: float32 - name: '418' dtype: float32 - name: '419' dtype: float32 - name: '420' dtype: float32 - name: '421' dtype: float32 - name: '422' dtype: float32 - name: '423' dtype: float32 - name: '424' dtype: float32 - name: '425' dtype: float32 - name: '426' dtype: float32 - name: '427' dtype: float32 - name: '428' dtype: float32 - name: '429' dtype: float32 - name: '430' dtype: float32 - name: '431' dtype: float32 - name: '432' dtype: float32 - name: '433' dtype: float32 - name: '434' dtype: float32 - name: '435' dtype: float32 - name: '436' dtype: float32 - name: '437' dtype: float32 - name: '438' dtype: float32 - name: '439' dtype: float32 - name: '440' dtype: float32 - name: '441' dtype: float32 - name: '442' dtype: float32 - name: '443' dtype: float32 - name: '444' dtype: float32 - name: '445' dtype: float32 - name: '446' dtype: float32 - name: '447' dtype: float32 - name: '448' dtype: float32 - name: '449' dtype: float32 - name: '450' dtype: float32 - name: '451' dtype: float32 - name: '452' dtype: float32 - name: '453' dtype: float32 - name: '454' dtype: float32 - name: '455' dtype: float32 - name: '456' dtype: float32 - name: '457' dtype: float32 - name: '458' dtype: float32 - name: '459' dtype: float32 - name: '460' dtype: float32 - name: '461' dtype: float32 - name: '462' dtype: float32 - name: '463' dtype: float32 - name: '464' dtype: float32 - name: '465' dtype: float32 - name: '466' dtype: float32 - name: '467' dtype: float32 - name: '468' dtype: float32 - name: '469' dtype: float32 - name: '470' dtype: float32 - name: '471' dtype: float32 - name: '472' dtype: float32 - name: '473' dtype: float32 - name: '474' dtype: float32 - name: '475' dtype: float32 - name: '476' dtype: float32 - name: '477' dtype: float32 - name: '478' dtype: float32 - name: '479' dtype: float32 - name: '480' dtype: float32 - name: '481' dtype: float32 - name: '482' dtype: float32 - name: '483' dtype: float32 - name: '484' 
dtype: float32 - name: '485' dtype: float32 - name: '486' dtype: float32 - name: '487' dtype: float32 - name: '488' dtype: float32 - name: '489' dtype: float32 - name: '490' dtype: float32 - name: '491' dtype: float32 - name: '492' dtype: float32 - name: '493' dtype: float32 - name: '494' dtype: float32 - name: '495' dtype: float32 - name: '496' dtype: float32 - name: '497' dtype: float32 - name: '498' dtype: float32 - name: '499' dtype: float32 - name: '500' dtype: float32 - name: '501' dtype: float32 - name: '502' dtype: float32 - name: '503' dtype: float32 - name: '504' dtype: float32 - name: '505' dtype: float32 - name: '506' dtype: float32 - name: '507' dtype: float32 - name: '508' dtype: float32 - name: '509' dtype: float32 - name: '510' dtype: float32 - name: '511' dtype: float32 - name: '512' dtype: float32 - name: '513' dtype: float32 - name: '514' dtype: float32 - name: '515' dtype: float32 - name: '516' dtype: float32 - name: '517' dtype: float32 - name: '518' dtype: float32 - name: '519' dtype: float32 - name: '520' dtype: float32 - name: '521' dtype: float32 - name: '522' dtype: float32 - name: '523' dtype: float32 - name: '524' dtype: float32 - name: '525' dtype: float32 - name: '526' dtype: float32 - name: '527' dtype: float32 - name: '528' dtype: float32 - name: '529' dtype: float32 - name: '530' dtype: float32 - name: '531' dtype: float32 - name: '532' dtype: float32 - name: '533' dtype: float32 - name: '534' dtype: float32 - name: '535' dtype: float32 - name: '536' dtype: float32 - name: '537' dtype: float32 - name: '538' dtype: float32 - name: '539' dtype: float32 - name: '540' dtype: float32 - name: '541' dtype: float32 - name: '542' dtype: float32 - name: '543' dtype: float32 - name: '544' dtype: float32 - name: '545' dtype: float32 - name: '546' dtype: float32 - name: '547' dtype: float32 - name: '548' dtype: float32 - name: '549' dtype: float32 - name: '550' dtype: float32 - name: '551' dtype: float32 - name: '552' dtype: float32 - name: 
'553' dtype: float32 - name: '554' dtype: float32 - name: '555' dtype: float32 - name: '556' dtype: float32 - name: '557' dtype: float32 - name: '558' dtype: float32 - name: '559' dtype: float32 - name: '560' dtype: float32 - name: '561' dtype: float32 - name: '562' dtype: float32 - name: '563' dtype: float32 - name: '564' dtype: float32 - name: '565' dtype: float32 - name: '566' dtype: float32 - name: '567' dtype: float32 - name: '568' dtype: float32 - name: '569' dtype: float32 - name: '570' dtype: float32 - name: '571' dtype: float32 - name: '572' dtype: float32 - name: '573' dtype: float32 - name: '574' dtype: float32 - name: '575' dtype: float32 - name: '576' dtype: float32 - name: '577' dtype: float32 - name: '578' dtype: float32 - name: '579' dtype: float32 - name: '580' dtype: float32 - name: '581' dtype: float32 - name: '582' dtype: float32 - name: '583' dtype: float32 - name: '584' dtype: float32 - name: '585' dtype: float32 - name: '586' dtype: float32 - name: '587' dtype: float32 - name: '588' dtype: float32 - name: '589' dtype: float32 - name: '590' dtype: float32 - name: '591' dtype: float32 - name: '592' dtype: float32 - name: '593' dtype: float32 - name: '594' dtype: float32 - name: '595' dtype: float32 - name: '596' dtype: float32 - name: '597' dtype: float32 - name: '598' dtype: float32 - name: '599' dtype: float32 - name: '600' dtype: float32 - name: '601' dtype: float32 - name: '602' dtype: float32 - name: '603' dtype: float32 - name: '604' dtype: float32 - name: '605' dtype: float32 - name: '606' dtype: float32 - name: '607' dtype: float32 - name: '608' dtype: float32 - name: '609' dtype: float32 - name: '610' dtype: float32 - name: '611' dtype: float32 - name: '612' dtype: float32 - name: '613' dtype: float32 - name: '614' dtype: float32 - name: '615' dtype: float32 - name: '616' dtype: float32 - name: '617' dtype: float32 - name: '618' dtype: float32 - name: '619' dtype: float32 - name: '620' dtype: float32 - name: '621' dtype: float32 - 
name: '622' dtype: float32 - name: '623' dtype: float32 - name: '624' dtype: float32 - name: '625' dtype: float32 - name: '626' dtype: float32 - name: '627' dtype: float32 - name: '628' dtype: float32 - name: '629' dtype: float32 - name: '630' dtype: float32 - name: '631' dtype: float32 - name: '632' dtype: float32 - name: '633' dtype: float32 - name: '634' dtype: float32 - name: '635' dtype: float32 - name: '636' dtype: float32 - name: '637' dtype: float32 - name: '638' dtype: float32 - name: '639' dtype: float32 - name: '640' dtype: float32 - name: '641' dtype: float32 - name: '642' dtype: float32 - name: '643' dtype: float32 - name: '644' dtype: float32 - name: '645' dtype: float32 - name: '646' dtype: float32 - name: '647' dtype: float32 - name: '648' dtype: float32 - name: '649' dtype: float32 - name: '650' dtype: float32 - name: '651' dtype: float32 - name: '652' dtype: float32 - name: '653' dtype: float32 - name: '654' dtype: float32 - name: '655' dtype: float32 - name: '656' dtype: float32 - name: '657' dtype: float32 - name: '658' dtype: float32 - name: '659' dtype: float32 - name: '660' dtype: float32 - name: '661' dtype: float32 - name: '662' dtype: float32 - name: '663' dtype: float32 - name: '664' dtype: float32 - name: '665' dtype: float32 - name: '666' dtype: float32 - name: '667' dtype: float32 - name: '668' dtype: float32 - name: '669' dtype: float32 - name: '670' dtype: float32 - name: '671' dtype: float32 - name: '672' dtype: float32 - name: '673' dtype: float32 - name: '674' dtype: float32 - name: '675' dtype: float32 - name: '676' dtype: float32 - name: '677' dtype: float32 - name: '678' dtype: float32 - name: '679' dtype: float32 - name: '680' dtype: float32 - name: '681' dtype: float32 - name: '682' dtype: float32 - name: '683' dtype: float32 - name: '684' dtype: float32 - name: '685' dtype: float32 - name: '686' dtype: float32 - name: '687' dtype: float32 - name: '688' dtype: float32 - name: '689' dtype: float32 - name: '690' dtype: float32 
- name: '691' dtype: float32 - name: '692' dtype: float32 - name: '693' dtype: float32 - name: '694' dtype: float32 - name: '695' dtype: float32 - name: '696' dtype: float32 - name: '697' dtype: float32 - name: '698' dtype: float32 - name: '699' dtype: float32 - name: '700' dtype: float32 - name: '701' dtype: float32 - name: '702' dtype: float32 - name: '703' dtype: float32 - name: '704' dtype: float32 - name: '705' dtype: float32 - name: '706' dtype: float32 - name: '707' dtype: float32 - name: '708' dtype: float32 - name: '709' dtype: float32 - name: '710' dtype: float32 - name: '711' dtype: float32 - name: '712' dtype: float32 - name: '713' dtype: float32 - name: '714' dtype: float32 - name: '715' dtype: float32 - name: '716' dtype: float32 - name: '717' dtype: float32 - name: '718' dtype: float32 - name: '719' dtype: float32 - name: '720' dtype: float32 - name: '721' dtype: float32 - name: '722' dtype: float32 - name: '723' dtype: float32 - name: '724' dtype: float32 - name: '725' dtype: float32 - name: '726' dtype: float32 - name: '727' dtype: float32 - name: '728' dtype: float32 - name: '729' dtype: float32 - name: '730' dtype: float32 - name: '731' dtype: float32 - name: '732' dtype: float32 - name: '733' dtype: float32 - name: '734' dtype: float32 - name: '735' dtype: float32 - name: '736' dtype: float32 - name: '737' dtype: float32 - name: '738' dtype: float32 - name: '739' dtype: float32 - name: '740' dtype: float32 - name: '741' dtype: float32 - name: '742' dtype: float32 - name: '743' dtype: float32 - name: '744' dtype: float32 - name: '745' dtype: float32 - name: '746' dtype: float32 - name: '747' dtype: float32 - name: '748' dtype: float32 - name: '749' dtype: float32 - name: '750' dtype: float32 - name: '751' dtype: float32 - name: '752' dtype: float32 - name: '753' dtype: float32 - name: '754' dtype: float32 - name: '755' dtype: float32 - name: '756' dtype: float32 - name: '757' dtype: float32 - name: '758' dtype: float32 - name: '759' dtype: 
float32 - name: '760' dtype: float32 - name: '761' dtype: float32 - name: '762' dtype: float32 - name: '763' dtype: float32 - name: '764' dtype: float32 - name: '765' dtype: float32 - name: '766' dtype: float32 - name: '767' dtype: float32 splits: - name: train num_bytes: 165236736 num_examples: 53788 download_size: 199560362 dataset_size: 165236736 --- # Dataset Card for "nyaya-ae-all-mpnet-base-v2-ftlegal-v3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
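The feature list above enumerates one float32 column per embedding dimension, with indices running 0 through 767 — 768 values, matching the output size of the `all-mpnet-base-v2` sentence-transformer named in the dataset id. As a minimal sketch (the `row` dict below is an illustrative stand-in, not actual data from this dataset), the flattened columns can be reassembled into an ordered vector:

```python
# Sketch: this dataset stores a 768-d embedding as one float32 column per
# dimension, named '0' .. '767'. Reassemble one row into an ordered vector.
# `row` here is a synthetic stand-in for a real dataset row.
row = {str(i): i / 768.0 for i in range(768)}

vector = [row[str(i)] for i in range(768)]
print(len(vector))  # 768
```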
27,123
[ [ -0.0255279541015625, -0.00354766845703125, 0.007221221923828125, 0.01043701171875, -0.0292816162109375, -0.005214691162109375, 0.044342041015625, -0.008758544921875, 0.06500244140625, 0.0295867919921875, -0.049407958984375, -0.04449462890625, -0.051849365234375,...
MAPS-research/GEMRec-Metadata
2023-08-07T04:42:05.000Z
[ "task_categories:text-to-image", "size_categories:10K<n<100K", "language:en", "license:openrail", "art", "stable diffusion", "diffusers", "region:us" ]
MAPS-research
null
null
0
91
2023-06-30T06:40:35
--- dataset_info: features: - name: image_id dtype: string - name: tag dtype: string - name: model_id dtype: int64 - name: modelVersion_id dtype: int64 - name: prompt_id dtype: int64 - name: size dtype: string - name: seed dtype: int64 - name: prompt dtype: string - name: negativePrompt dtype: string - name: cfgScale dtype: int64 - name: sampler dtype: string - name: note dtype: string - name: nsfw_score dtype: float64 - name: mcos_score dtype: float64 - name: clip_score dtype: float64 - name: norm_clip dtype: float64 - name: norm_mcos dtype: float64 - name: norm_nsfw dtype: float64 - name: norm_pop dtype: float64 splits: - name: train num_bytes: 7955010 num_examples: 18000 download_size: 0 dataset_size: 7955010 license: openrail task_categories: - text-to-image language: - en tags: - art - stable diffusion - diffusers size_categories: - 10K<n<100K --- # GEMRec-18k -- Metadata This is the official image metadata dataset for the paper [Towards Personalized Prompt-Model Retrieval for Generative Recommendation](https://github.com/MAPS-research/GEMRec). ## Dataset Intro `GEMRec-18K` is a prompt-model interaction dataset with 18K images generated by 200 publicly-available generative models paired with a diverse set of 90 textual prompts. We randomly sampled a subset of 197 models from the full set of models (all finetuned from Stable Diffusion) on [Civitai](https://civitai.com/) according to the popularity distribution (i.e., download counts) and added 3 original Stable Diffusion checkpoints (v1.4, v1.5, v2.1) from HuggingFace. All the model checkpoints have been converted to the [Diffusers](https://huggingface.co/docs/diffusers/index) format. 
The textual prompts were drawn from three sources: 60 prompts were sampled from [Parti Prompts](https://github.com/google-research/parti); 10 prompts were sampled from [Civitai](https://civitai.com/) by popularity; we also handcrafted 10 prompts following the prompting guide from [DreamStudio](https://beta.dreamstudio.ai/prompt-guide), and then extended them to 20 by creating a shortened and simplified version following the tips from [Midjourney](https://docs.midjourney.com/docs/prompts). The textual prompts were classified into 12 categories: abstract, animal, architecture, art, artifact, food, illustration, people, produce & plant, scenery, vehicle, and world knowledge. ## Links #### Dataset - [GEMRec-Promptbook](https://huggingface.co/datasets/MAPS-research/GEMRec-PromptBook): The full version of our GemRec-18k dataset (images & metadata). - [GEMRec-Metadata](https://huggingface.co/datasets/MAPS-research/GEMRec-Metadata): The pruned version of our GemRec-18k dataset (metadata only). - [GEMRec-Roster](https://huggingface.co/datasets/MAPS-research/GEMRec-Roster): The metadata for the 200 model checkpoints fetched from [Civitai](https://civitai.com/). #### Space - [GEMRec-Gallery](https://huggingface.co/spaces/MAPS-research/GEMRec-Gallery): Our web application for browsing and comparing the generated images. #### Github Code - [GEMRec](https://github.com/MAPS-research/GEMRec) ## Acknowledgement This work was supported through the NYU High Performance Computing resources, services, and staff expertise. ## Citation If you find our work helpful, please consider citing it as follows: ```bibtex @article{guo2023towards, title={Towards Personalized Prompt-Model Retrieval for Generative Recommendation}, author={Guo, Yuanhe and Liu, Haoming and Wen, Hongyi}, journal={arXiv preprint arXiv:2308.02205}, year={2023} } ```
3,653
[ [ -0.0635986328125, -0.041778564453125, 0.059234619140625, 0.0044708251953125, 0.0026397705078125, -0.0171356201171875, -0.005336761474609375, -0.0227508544921875, 0.002025604248046875, 0.042327880859375, -0.062164306640625, -0.07147216796875, -0.0228424072265625,...
tanvinsharma/books100
2023-09-27T10:38:26.000Z
[ "region:us" ]
tanvinsharma
null
null
0
91
2023-09-27T10:38:04
Entry not found
15
[ [ -0.0213775634765625, -0.014984130859375, 0.05718994140625, 0.0288543701171875, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005062103271484375, 0.051361083984375, 0.016998291015625, -0.0521240234375, -0.01496124267578125, -0.0604248046875, 0.037...
hippocrates/DDI2013_train
2023-10-12T19:18:48.000Z
[ "region:us" ]
hippocrates
null
null
0
91
2023-10-12T19:18:42
--- dataset_info: features: - name: id dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string - name: text dtype: string splits: - name: train num_bytes: 6078356 num_examples: 3000 - name: valid num_bytes: 6758153 num_examples: 3000 - name: test num_bytes: 6233436 num_examples: 3000 download_size: 3401816 dataset_size: 19069945 --- # Dataset Card for "DDI2013_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
619
[ [ -0.047332763671875, -0.002838134765625, 0.02471923828125, 0.033477783203125, -0.005390167236328125, -0.0098114013671875, 0.0335693359375, -0.0015239715576171875, 0.04669189453125, 0.01331329345703125, -0.0762939453125, -0.036529541015625, -0.0423583984375, -...
AmanK1202/CNNOVEng_train
2023-10-19T15:59:14.000Z
[ "region:us" ]
AmanK1202
null
null
0
91
2023-10-18T21:57:49
Entry not found
15
[ [ -0.0213775634765625, -0.014984130859375, 0.05718994140625, 0.0288543701171875, -0.0350341796875, 0.046478271484375, 0.052520751953125, 0.005062103271484375, 0.051361083984375, 0.016998291015625, -0.0521240234375, -0.01496124267578125, -0.0604248046875, 0.037...
MemGPT/example_short_stories
2023-10-19T02:04:57.000Z
[ "region:us" ]
MemGPT
null
null
1
91
2023-10-19T02:04:37
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
bswac
2022-11-03T16:15:55.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "language:bs",...
null
The Bosnian web corpus bsWaC was built by crawling the .ba top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Bosnian vs. Croatian vs. Serbian). Version 1.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 1.1 contains newer and better linguistic annotations.
@misc{11356/1062, title = {Bosnian web corpus {bsWaC} 1.1}, author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip}, url = {http://hdl.handle.net/11356/1062}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)}, year = {2016} }
0
90
2022-03-02T23:29:22
--- annotations_creators: - no-annotation language_creators: - found language: - bs license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 100M<n<1B source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling paperswithcode_id: null pretty_name: BsWac dataset_info: features: - name: sentence dtype: string config_name: bswac splits: - name: train num_bytes: 9156258478 num_examples: 354581267 download_size: 1988514951 dataset_size: 9156258478 --- # Dataset Card for BsWac ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nlp.ffzg.hr/resources/corpora/bswac/ - **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1062 - **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic14-bs.pdf - **Leaderboard:** - **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr) ### Dataset Summary The Bosnian web corpus bsWaC was built by crawling the .ba top-level 
domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Bosnian vs. Croatian vs. Serbian). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Dataset is monolingual in Bosnian language. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license. ### Citation Information ``` @misc{11356/1062, title = {Bosnian web corpus {bsWaC} 1.1}, author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip}, url = {http://hdl.handle.net/11356/1062}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)}, year = {2016} } ``` ### Contributions Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
3,965
[ [ -0.039581298828125, -0.034912109375, 0.000431060791015625, 0.037811279296875, -0.033416748046875, 0.0114898681640625, -0.025299072265625, -0.046478271484375, 0.0254669189453125, 0.026885986328125, -0.0638427734375, -0.079345703125, -0.053680419921875, 0.0299...
ehealth_kd
2023-01-25T14:29:46.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:es", "license:cc-by-nc-sa-4.0", "relation-pre...
null
Dataset of the eHealth Knowledge Discovery Challenge at IberLEF 2020. It is designed for the identification of semantic entities and relations in Spanish health documents.
@inproceedings{overview_ehealthkd2020, author = {Piad{-}Morffis, Alejandro and Guti{\'{e}}rrez, Yoan and Cañizares-Diaz, Hian and Estevez{-}Velarde, Suilan and Almeida{-}Cruz, Yudivi{\'{a}}n and Muñoz, Rafael and Montoyo, Andr{\'{e}}s}, title = {Overview of the eHealth Knowledge Discovery Challenge at IberLEF 2020}, booktitle = , year = {2020}, }
1
90
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - es license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: eHealth-KD tags: - relation-prediction dataset_info: features: - name: sentence dtype: string - name: entities list: - name: ent_id dtype: string - name: ent_text dtype: string - name: ent_label dtype: class_label: names: '0': Concept '1': Action '2': Predicate '3': Reference - name: start_character dtype: int32 - name: end_character dtype: int32 - name: relations list: - name: rel_id dtype: string - name: rel_label dtype: class_label: names: '0': is-a '1': same-as '2': has-property '3': part-of '4': causes '5': entails '6': in-time '7': in-place '8': in-context '9': subject '10': target '11': domain '12': arg - name: arg1 dtype: string - name: arg2 dtype: string config_name: ehealth_kd splits: - name: train num_bytes: 425713 num_examples: 800 - name: validation num_bytes: 108154 num_examples: 199 - name: test num_bytes: 47314 num_examples: 100 download_size: 565900 dataset_size: 581181 --- # Dataset Card for eHealth-KD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known 
Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [eHealth-KD homepage](https://knowledge-learning.github.io/ehealthkd-2020/) - **Repository:** [eHealth-KD repository](https://github.com/knowledge-learning/ehealthkd-2020) - **Paper:** [eHealth-KD overview paper](http://ceur-ws.org/Vol-2664/eHealth-KD_overview.pdf) - **Leaderboard:** [eHealth-KD Challenge 2020 official results](https://knowledge-learning.github.io/ehealthkd-2020/results) - **Point of Contact:** [Yoan Gutiérrez Vázquez](mailto:ygutierrez@dlsi.ua.es) (Organization Committee), [María Grandury](mailto:yacine@huggingface.co) (Dataset Submitter) ### Dataset Summary Dataset of the eHealth-KD Challenge at IberLEF 2020. It is designed for the identification of semantic entities and relations in Spanish health documents. ### Supported Tasks and Leaderboards The eHealth-KD challenge proposes two computational subtasks: - `named-entity-recognition`: Given a sentence of an eHealth document written in Spanish, the goal of this subtask is to identify all the entities and their types. - `relation-prediction`: The purpose of this subtask is to recognise all relevant semantic relationships between the entities recognised. For an analysis of the most successful approaches of this challenge, read the [eHealth-KD overview paper](http://ceur-ws.org/Vol-2664/eHealth-KD_overview.pdf). ### Languages The text in the dataset is in Spanish (BCP-47 code: `es`). 
## Dataset Structure ### Data Instances The first example of the eHealth-KD Corpus train set looks as follows: ``` { 'sentence': 'En la leucemia linfocítica crónica, hay demasiados linfocitos, un tipo de glóbulos blancos.', 'entities': [ { 'ent_id': 'T1', 'ent_text': 'leucemia linfocítica crónica', 'ent_label': 0, 'start_character': 6, 'end_character': 34 }, { 'ent_id': 'T2', 'ent_text': 'linfocitos', 'ent_label': 0, 'start_character': 51, 'end_character': 61 }, { 'ent_id': 'T3', 'ent_text': 'glóbulos blancos', 'ent_label': 0, 'start_character': 74, 'end_character': 90 } ], 'relations': [ { 'rel_id': 'R0', 'rel_label': 0, 'arg1': 'T2', 'arg2': 'T3' }, { 'rel_id': 'R1', 'rel_label': 5, 'arg1': 'T1', 'arg2': 'T2' } ] } ``` ### Data Fields - `sentence`: sentence of an eHealth document written in Spanish - `entities`: list of entities identified in the sentence - `ent_id`: entity identifier (`T` + a number) - `ent_text`: entity, can consist of one or more complete words (i.e., not a prefix or a suffix of a word), and will never include any surrounding punctuation symbols, parenthesis, etc. - `ent_label`: type of entity (`Concept`, `Action`, `Predicate` or `Reference`) - `start_character`: position of the first character of the entity - `end_character`: position of the last character of the entity - `relations`: list of semantic relationships between the entities recognised - `rel_id`: relation identifier (`R` + a number) - `rel_label`: type of relation, can be a general relation (`is-a`, `same-as`, `has-property`, `part-of`, `causes`, `entails`), a contextual relation (`in-time`, `in-place`, `in-context`), an action role (`subject`, `target`) or a predicate role (`domain`, `arg`). - `arg1`: ID of the first entity of the relation - `arg2`: ID of the second entity of the relation For more information about the types of entities and relations, click [here](https://knowledge-learning.github.io/ehealthkd-2020/tasks). ### Data Splits The data is split into a training, validation and test set.
The split sizes are as follows: | | Train | Val | Test | | ----- | ------ | ----- | ---- | | eHealth-KD 2020 | 800 | 199 | 100 | In the challenge there are 4 different scenarios for testing. The test data of this dataset corresponds to the third scenario. More information about the testing data is available [here](https://github.com/knowledge-learning/ehealthkd-2020/tree/master/data/testing). ## Dataset Creation ### Curation Rationale The vast amount of clinical text available online has motivated the development of automatic knowledge discovery systems that can analyse this data and discover relevant facts. The eHealth Knowledge Discovery (eHealth-KD) challenge, in its third edition, leverages a semantic model of human language that encodes the most common expressions of factual knowledge, via a set of four general-purpose entity types and thirteen semantic relations among them. The challenge proposes the design of systems that can automatically annotate entities and relations in clinical text in the Spanish language. ### Source Data #### Initial Data Collection and Normalization As in the previous edition, the corpus for eHealth-KD 2020 has been extracted from MedlinePlus sources. This platform freely provides large health textual data from which we have made a selection for constituting the eHealth-KD corpus. The selection has been made by sampling specific XML files from the collection available in the [Medline website](https://medlineplus.gov/xml.html). ``` “MedlinePlus is the National Institutes of Health’s Website for patients and their families and friends. Produced by the National Library of Medicine, the world’s largest medical library, it brings you information about diseases, conditions, and wellness issues in language you can understand.
MedlinePlus offers reliable, up-to-date health information, anytime, anywhere, for free.” ``` These files contain several entries related to health and medicine topics and have been processed to remove all XML markup to extract the textual content. Only Spanish language items were considered. Once cleaned, each individual item was converted to a plain text document, and some further post-processing was applied to remove unwanted sentences, such as headers, footers and similar elements, and to flatten HTML lists into plain sentences. #### Who are the source language producers? As in the previous edition, the corpus for eHealth-KD 2020 was extracted from [MedlinePlus](https://medlineplus.gov/xml.html) sources. ### Annotations #### Annotation process Once the MedlinePlus files were cleaned, they were manually tagged using [BRAT](http://brat.nlplab.org/) by a group of annotators. After tagging, a post-processing step was applied to BRAT’s output files (ANN format) to obtain the output files in the formats needed for the challenge. #### Who are the annotators? The data was manually tagged by a group of annotators. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset "The eHealth-KD 2020 proposes –as the previous editions– modeling the human language in a scenario in which Spanish electronic health documents could be machine-readable from a semantic point of view. With this task, we expect to encourage the development of software technologies to automatically extract a large variety of knowledge from eHealth documents written in the Spanish Language." ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information ### Dataset Curators #### Organization Committee | Name | Email | Institution | |:---------------------------------------:|:---------------------:|:-----------------------------:| | Yoan Gutiérrez Vázquez (contact person) | ygutierrez@dlsi.ua.es | University of Alicante, Spain | | Suilan Estévez Velarde | sestevez@matcom.uh.cu | University of Havana, Cuba | | Alejandro Piad Morffis | apiad@matcom.uh.cu | University of Havana, Cuba | | Yudivián Almeida Cruz | yudy@matcom.uh.cu | University of Havana, Cuba | | Andrés Montoyo Guijarro | montoyo@dlsi.ua.es | University of Alicante, Spain | | Rafael Muñoz Guillena | rafael@dlsi.ua.es | University of Alicante, Spain | #### Funding This research has been supported by a Carolina Foundation grant in agreement with University of Alicante and University of Havana. Moreover, it has also been partially funded by both aforementioned universities, IUII, Generalitat Valenciana, Spanish Government, Ministerio de Educación, Cultura y Deporte through the projects SIIA (PROMETEU/2018/089) and LIVINGLANG (RTI2018-094653-B-C22). ### Licensing Information This dataset is under the Attribution-NonCommercial-ShareAlike 4.0 International license [(CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). To accept the distribution terms, please fill in the following [form](https://forms.gle/pUJutSDq2FYLwNWQA). ### Citation Information At the following link you can find the [preliminary BibTeX entries of the systems’ working-notes](https://knowledge-learning.github.io/ehealthkd-2020/shared/eHealth-KD_2020_bibtexts.zip).
In addition, to cite the eHealth-KD challenge you can use the following preliminary BibTeX entry: ``` @inproceedings{overview_ehealthkd2020, author = {Piad{-}Morffis, Alejandro and Guti{\'{e}}rrez, Yoan and Ca{\~{n}}izares-Diaz, Hian and Estevez{-}Velarde, Suilan and Almeida{-}Cruz, Yudivi{\'{a}}n and Mu{\~{n}}oz, Rafael and Montoyo, Andr{\'{e}}s}, title = {Overview of the eHealth Knowledge Discovery Challenge at IberLEF 2020}, booktitle = , year = {2020}, } ``` ### Contributions Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset.
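The entity/relation schema this card describes can be navigated programmatically — a minimal sketch using the instance shown under Data Instances (field names are taken from the card; relation arguments are entity ids that resolve to entity texts):

```python
# Sketch: resolve relation arguments (entity ids) to entity texts, following
# the schema in this card. The instance is the one shown under "Data
# Instances" (abridged to the fields needed here).
example = {
    "sentence": "En la leucemia linfocítica crónica, hay demasiados linfocitos, "
                "un tipo de glóbulos blancos.",
    "entities": [
        {"ent_id": "T1", "ent_text": "leucemia linfocítica crónica", "ent_label": 0},
        {"ent_id": "T2", "ent_text": "linfocitos", "ent_label": 0},
        {"ent_id": "T3", "ent_text": "glóbulos blancos", "ent_label": 0},
    ],
    "relations": [
        {"rel_id": "R0", "rel_label": 0, "arg1": "T2", "arg2": "T3"},  # 0 = is-a
        {"rel_id": "R1", "rel_label": 5, "arg1": "T1", "arg2": "T2"},  # 5 = entails
    ],
}

# Map entity ids to their surface texts, then materialise each relation.
by_id = {e["ent_id"]: e["ent_text"] for e in example["entities"]}
pairs = [(by_id[r["arg1"]], by_id[r["arg2"]]) for r in example["relations"]]
print(pairs)
# [('linfocitos', 'glóbulos blancos'), ('leucemia linfocítica crónica', 'linfocitos')]
```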
12,488
[ [ -0.021728515625, -0.052886962890625, 0.046356201171875, 0.01123046875, -0.01108551025390625, 0.000010967254638671875, -0.00618743896484375, -0.04705810546875, 0.0452880859375, 0.042205810546875, -0.053314208984375, -0.07659912109375, -0.033599853515625, 0.01...
igbo_english_machine_translation
2022-11-18T20:08:55.000Z
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:ig", "license:unknown", "arxiv:2004.00648", "region:us" ]
null
Parallel Igbo-English Dataset
@misc{ezeani2020igboenglish, title={Igbo-English Machine Translation: An Evaluation Benchmark}, author={Ignatius Ezeani and Paul Rayson and Ikechukwu Onyenwe and Chinedu Uchechukwu and Mark Hepple}, year={2020}, eprint={2004.00648}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2004.00648} }
1
90
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - en - ig license: - unknown multilinguality: - multilingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: igbonlp-datasets pretty_name: IgboNLP Datasets dataset_info: features: - name: id dtype: string - name: translation dtype: translation: languages: - ig - en config_name: ig-en splits: - name: train num_bytes: 2367989 num_examples: 10000 - name: validation num_bytes: 60154 num_examples: 200 - name: test num_bytes: 298670 num_examples: 552 download_size: 2580255 dataset_size: 2726813 --- # Dataset Card for IgboNLP Datasets ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_en_mt - **Paper:** https://arxiv.org/abs/2004.00648 - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary [More Information 
Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
3,374
[ [ -0.0447998046875, -0.0284881591796875, -0.0009784698486328125, 0.0265350341796875, -0.0165252685546875, 0.0227813720703125, -0.0259246826171875, -0.0301666259765625, 0.036590576171875, 0.03717041015625, -0.07373046875, -0.0703125, -0.053802490234375, 0.00175...
ms_terms
2022-11-03T16:08:00.000Z
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:af", "language:am", "language:ar", "language:as", "la...
null
The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microsoft terminology into other terminology collections or serve as a base IT glossary for language development in the nearly 100 languages available. Terminology is provided in .tbx format, an industry standard for terminology exchange.
null
3
90
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - af - am - ar - as - az - be - bg - bn - bs - ca - chr - cs - cy - da - de - el - en - es - et - eu - fa - fi - fil - fr - ga - gd - gl - gu - guc - ha - he - hi - hr - hu - hy - id - ig - is - it - iu - ja - ka - kk - km - kn - knn - ko - ku - ky - lb - lo - lt - lv - mi - mk - ml - mn - mr - ms - mt - nb - ne - nl - nn - ory - pa - pl - prs - pst - pt - qu - quc - ro - ru - rw - sd - si - sk - sl - sq - sr - st - sv - swh - ta - te - tg - th - ti - tk - tn - tr - tt - ug - uk - ur - uz - vi - wo - xh - yo - zh - zu language_bcp47: - bn-IN - bs-Latn - es-MX - fr-CA - ms-BN - pt-BR - sr-BH - sr-Latn - zh-Hant-HK - zh-Hant-TW license: - ms-pl multilinguality: - multilingual - translation size_categories: - 10K<n<100K source_datasets: - original task_categories: - translation task_ids: [] paperswithcode_id: null pretty_name: MsTerms dataset_info: features: - name: entry_id dtype: string - name: term_source dtype: string - name: pos dtype: string - name: definition dtype: string - name: term_target dtype: string splits: - name: train num_bytes: 6995497 num_examples: 33738 download_size: 0 dataset_size: 6995497 --- # Dataset Card for [ms_terms] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - 
[Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Microsoft Terminology Collection](https://www.microsoft.com/en-us/language/terminology) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Microsoft Terminology Collection can be used to develop localized versions of applications that integrate with Microsoft products. It can also be used to integrate Microsoft terminology into other terminology collections or serve as a base IT glossary for language development in the nearly 100 languages available. Terminology is provided in .tbx format, an industry standard for terminology exchange. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Nearly 100 Languages. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@leoxzhao](https://github.com/leoxzhao), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
4,200
[ [ -0.032928466796875, -0.035400390625, -0.0012407302856445312, 0.01873779296875, -0.01287078857421875, 0.031585693359375, -0.0220947265625, -0.02423095703125, 0.044342041015625, 0.043975830078125, -0.0546875, -0.07843017578125, -0.046478271484375, 0.0109939575...
msr_genomics_kbcomp
2023-01-25T14:40:48.000Z
[ "task_categories:other", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "genomics-knowledge-base-completion", "region:us" ]
null
The database is derived from the NCI PID Pathway Interaction Database, and the textual mentions are extracted from cooccurring pairs of genes in PubMed abstracts, processed and annotated by Literome (Poon et al. 2014). This dataset was used in the paper “Compositional Learning of Embeddings for Relation Paths in Knowledge Bases and Text” (Toutanova, Lin, Yih, Poon, and Quirk, 2016).
@inproceedings{toutanova-etal-2016-compositional, title = "Compositional Learning of Embeddings for Relation Paths in Knowledge Base and Text", author = "Toutanova, Kristina and Lin, Victoria and Yih, Wen-tau and Poon, Hoifung and Quirk, Chris", booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2016", address = "Berlin, Germany", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P16-1136", doi = "10.18653/v1/P16-1136", pages = "1434--1444", }
0
90
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - en license: - other multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - other task_ids: [] pretty_name: MsrGenomicsKbcomp tags: - genomics-knowledge-base-completion dataset_info: features: - name: GENE1 dtype: string - name: relation dtype: class_label: names: '0': Positive_regulation '1': Negative_regulation '2': Family - name: GENE2 dtype: string splits: - name: train num_bytes: 256789 num_examples: 12160 - name: test num_bytes: 58116 num_examples: 2784 - name: validation num_bytes: 27457 num_examples: 1315 download_size: 0 dataset_size: 342362 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [NCI-PID-PubMed Genomics Knowledge Base Completion Dataset](https://msropendata.com/datasets/80b4f6e8-5d7c-4abc-9c79-2e51dfedd791) - **Repository:** [NCI-PID-PubMed Genomics Knowledge Base Completion 
Dataset](https://msropendata.com/datasets/80b4f6e8-5d7c-4abc-9c79-2e51dfedd791) - **Paper:** [Compositional Learning of Embeddings for Relation Paths in Knowledge Base and Text](https://www.aclweb.org/anthology/P16-1136/) - **Point of Contact:** [Kristina Toutanova](mailto:kristout@google.com) ### Dataset Summary The database is derived from the NCI PID Pathway Interaction Database, and the textual mentions are extracted from cooccurring pairs of genes in PubMed abstracts, processed and annotated by Literome (Poon et al. 2014). This dataset was used in the paper “Compositional Learning of Embeddings for Relation Paths in Knowledge Bases and Text” (Toutanova, Lin, Yih, Poon, and Quirk, 2016). More details can be found in the included README. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure NCI-PID-PubMed Genomics Knowledge Base Completion Dataset This dataset includes a database of regulation relationships among genes and corresponding textual mentions of pairs of genes in PubMed article abstracts. The database is derived from the NCI PID Pathway Interaction Database, and the textual mentions are extracted from cooccurring pairs of genes in PubMed abstracts, processed and annotated by Literome. This dataset was used in the paper "Compositional Learning of Embeddings for Relation Paths in Knowledge Bases and Text". FILE FORMAT DETAILS The files train.txt, valid.txt, and test.txt contain the training, development, and test set knowledge base (database of regulation relationships) triples used in the paper. The file text.txt contains the textual triples derived from PubMed via entity linking and processing with Literome. The textual mentions were used for knowledge base completion in the paper. The separator is a tab character; the relations are Positive_regulation, Negative_regulation, and Family (Family relationships occur only in the training set). 
The format is: | GENE1 | relation | GENE2 | Example: ABL1 Positive_regulation CDK2 ### Data Instances [More Information Needed] ### Data Fields The format is: | GENE1 | relation | GENE2 | ### Data Splits [More Information Needed] ## Dataset Creation [More Information Needed] ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data [More Information Needed] ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information [More Information Needed] ### Dataset Curators The dataset was initially created by Kristina Toutanova, Victoria Lin, Wen-tau Yih, Hoifung Poon and Chris Quirk, during work done at Microsoft Research. 
### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{toutanova-etal-2016-compositional, title = "Compositional Learning of Embeddings for Relation Paths in Knowledge Base and Text", author = "Toutanova, Kristina and Lin, Victoria and Yih, Wen-tau and Poon, Hoifung and Quirk, Chris", booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = aug, year = "2016", address = "Berlin, Germany", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P16-1136", doi = "10.18653/v1/P16-1136", pages = "1434--1444", } ``` ### Contributions Thanks to [@manandey](https://github.com/manandey) for adding this dataset.
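The tab-separated triple format described in the card above (one `GENE1<TAB>relation<TAB>GENE2` record per line, with the three relations listed under FILE FORMAT DETAILS) can be parsed with a few lines of plain Python. A minimal sketch; the `parse_triples` helper is illustrative and not part of the released dataset tooling:

```python
# Relations named in the card; Family occurs only in the training set.
RELATIONS = {"Positive_regulation", "Negative_regulation", "Family"}

def parse_triples(lines):
    """Yield (GENE1, relation, GENE2) tuples from tab-separated lines."""
    for line in lines:
        gene1, relation, gene2 = line.rstrip("\n").split("\t")
        if relation not in RELATIONS:
            raise ValueError(f"unexpected relation: {relation}")
        yield gene1, relation, gene2

# The example triple from the card:
triples = list(parse_triples(["ABL1\tPositive_regulation\tCDK2\n"]))
```

In practice each of `train.txt`, `valid.txt`, and `test.txt` would be opened and passed to `parse_triples` line by line.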
6,312
[ [ -0.0166168212890625, -0.048675537109375, 0.028900146484375, 0.01511383056640625, -0.01629638671875, -0.005588531494140625, -0.023529052734375, -0.0176849365234375, 0.0307464599609375, 0.03741455078125, -0.053436279296875, -0.0731201171875, -0.03826904296875, ...
mutual_friends
2022-11-18T21:31:53.000Z
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1...
null
Our goal is to build systems that collaborate with people by exchanging information through natural language and reasoning over a structured knowledge base. In the MutualFriends task, two agents, A and B, each have a private knowledge base, which contains a list of friends with multiple attributes (e.g., name, school, major, etc.). The agents must chat with each other to find their unique mutual friend.
@inproceedings{he-etal-2017-learning, title = "Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings", author = "He, He and Balakrishnan, Anusha and Eric, Mihail and Liang, Percy", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P17-1162", doi = "10.18653/v1/P17-1162", pages = "1766--1776", abstract = "We study a \textit{symmetric collaborative dialogue} setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models.", }
2
90
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-generation - fill-mask task_ids: - dialogue-modeling paperswithcode_id: mutualfriends pretty_name: MutualFriends dataset_info: features: - name: uuid dtype: string - name: scenario_uuid dtype: string - name: scenario_alphas sequence: float32 - name: scenario_attributes sequence: - name: unique dtype: bool_ - name: value_type dtype: string - name: name dtype: string - name: scenario_kbs sequence: sequence: sequence: sequence: string - name: agents struct: - name: '1' dtype: string - name: '0' dtype: string - name: outcome_reward dtype: int32 - name: events struct: - name: actions sequence: string - name: start_times sequence: float32 - name: data_messages sequence: string - name: data_selects sequence: - name: attributes sequence: string - name: values sequence: string - name: agents sequence: int32 - name: times sequence: float32 config_name: plain_text splits: - name: train num_bytes: 26979472 num_examples: 8967 - name: test num_bytes: 3327158 num_examples: 1107 - name: validation num_bytes: 3267881 num_examples: 1083 download_size: 41274578 dataset_size: 33574511 --- # Dataset Card for MutualFriends ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of 
Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [COCOA](https://stanfordnlp.github.io/cocoa/) - **Repository:** [Github repository](https://github.com/stanfordnlp/cocoa) - **Paper:** [Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings (ACL 2017)](https://arxiv.org/abs/1704.07130) - **Codalab**: [Codalab](https://worksheets.codalab.org/worksheets/0xc757f29f5c794e5eb7bfa8ca9c945573/) ### Dataset Summary Our goal is to build systems that collaborate with people by exchanging information through natural language and reasoning over a structured knowledge base. In the MutualFriends task, two agents, A and B, each have a private knowledge base, which contains a list of friends with multiple attributes (e.g., name, school, major, etc.). The agents must chat with each other to find their unique mutual friend. ### Supported Tasks and Leaderboards We consider two agents, each with a private knowledge base of items, who must communicate their knowledge to achieve a common goal. Specifically, we designed the MutualFriends task. Each agent has a list of friends with attributes like school, major, etc. They must chat with each other to find the unique mutual friend. ### Languages The text in the dataset is in English. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances An example looks like this. 
``` { 'uuid': 'C_423324a5fff045d78bef75a6f295a3f4' 'scenario_uuid': 'S_hvmRM4YNJd55ecT5', 'scenario_alphas': [0.30000001192092896, 1.0, 1.0], 'scenario_attributes': { 'name': ['School', 'Company', 'Location Preference'], 'unique': [False, False, False], 'value_type': ['school', 'company', 'loc_pref'] }, 'scenario_kbs': [ [ [['School', 'Company', 'Location Preference'], ['Longwood College', 'Alton Steel', 'indoor']], [['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Leonard Green & Partners', 'indoor']], [['School', 'Company', 'Location Preference'], ['New Mexico Highlands University', 'Crazy Eddie', 'indoor']], [['School', 'Company', 'Location Preference'], ['Rhodes College', "Tully's Coffee", 'indoor']], [['School', 'Company', 'Location Preference'], ['Sacred Heart University', 'AMR Corporation', 'indoor']], [['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['New Mexico Highlands University', 'The Hartford Financial Services Group', 'indoor']], [['School', 'Company', 'Location Preference'], ['Sacred Heart University', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['Babson College', 'The Hartford Financial Services Group', 'indoor']] ], [ [['School', 'Company', 'Location Preference'], ['National Technological University', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['Fairmont State College', 'Leonard Green & Partners', 'outdoor']], [['School', 'Company', 'Location Preference'], ['Johnson C. 
Smith University', 'Data Resources Inc.', 'outdoor']], [['School', 'Company', 'Location Preference'], ['Salisbury State University', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['Fairmont State College', 'Molycorp', 'outdoor']], [['School', 'Company', 'Location Preference'], ['University of South Carolina - Aiken', 'Molycorp', 'indoor']], [['School', 'Company', 'Location Preference'], ['University of South Carolina - Aiken', 'STX', 'outdoor']], [['School', 'Company', 'Location Preference'], ['National Technological University', 'STX', 'outdoor']], [['School', 'Company', 'Location Preference'], ['Johnson C. Smith University', 'Rockstar Games', 'indoor']] ] ], 'agents': { '0': 'human', '1': 'human' }, 'outcome_reward': 1, 'events': { 'actions': ['message', 'message', 'message', 'message', 'select', 'select'], 'agents': [1, 1, 0, 0, 1, 0], 'data_messages': ['Hello', 'Do you know anyone who works at Molycorp?', 'Hi. All of my friends like the indoors.', 'Ihave two friends that work at Molycorp. They went to Salisbury and Sacred Heart.', '', ''], 'data_selects': { 'attributes': [ [], [], [], [], ['School', 'Company', 'Location Preference'], ['School', 'Company', 'Location Preference'] ], 'values': [ [], [], [], [], ['Salisbury State University', 'Molycorp', 'indoor'], ['Salisbury State University', 'Molycorp', 'indoor'] ] }, 'start_times': [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0], 'times': [1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0, 1480737280.0] }, } ``` ### Data Fields - `uuid`: example id. - `scenario_uuid`: scenario id. - `scenario_alphas`: scenario alphas. - `scenario_attributes`: all the attributes considered in the scenario. The dictionaries are liniearized: to reconstruct the dictionary of attribute i-th, one should extract the i-th elements of `unique`, `value_type` and `name`. - `unique`: bool. - `value_type`: code/type of the attribute. - `name`: name of the attribute. 
- `scenario_kbs`: descriptions of the persons present in the two users' databases. List of two (one for each user in the dialogue). `scenario_kbs[i]` is a list of persons. Each person is represented as two lists (one for attribute names and the other for attribute values). The j-th element of attribute names corresponds to the j-th element of attribute values (linearized dictionary). - `agents`: the two users engaged in the dialogue. - `outcome_reward`: reward of the present dialogue. - `events`: dictionary describing the dialogue. The j-th element of each sub-element of the dictionary describes the turn along the axis of the sub-element. - `actions`: type of turn (either `message` or `select`). - `agents`: who is talking? Agent 1 or 0? - `data_messages`: the string exchanged if `action==message`. Otherwise, empty string. - `data_selects`: selection of the user if `action==select`. Otherwise, empty selection/dictionary. - `start_times`: always -1 in these data. - `times`: sending time. ### Data Splits There are 8967 dialogues for training, 1083 for validation and 1107 for testing. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{he-etal-2017-learning, title = "Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings", author = "He, He and Balakrishnan, Anusha and Eric, Mihail and Liang, Percy", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P17-1162", doi = "10.18653/v1/P17-1162", pages = "1766--1776", abstract = "We study a \textit{symmetric collaborative dialogue} setting in which two agents, each with private knowledge, must strategically communicate to achieve a common goal. The open-ended dialogue state in this setting poses new challenges for existing dialogue systems. We collected a dataset of 11K human-human dialogues, which exhibits interesting lexical, semantic, and strategic elements. To model both structured knowledge and unstructured language, we propose a neural model with dynamic knowledge graph embeddings that evolve as the dialogue progresses. Automatic and human evaluations show that our model is both more effective at achieving the goal and more human-like than baseline neural and rule-based models.", } ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
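The linearized dictionaries described under "Data Fields" in the card above (parallel lists for `scenario_attributes`, and paired attribute-name/attribute-value lists for each person in `scenario_kbs`) can be rebuilt with plain `zip`. A minimal sketch; the helper names are illustrative and not part of the `datasets` loader:

```python
def unlinearize_attributes(attrs):
    """Rebuild one dict per attribute from the parallel
    name/unique/value_type lists of `scenario_attributes`."""
    return [
        {"name": n, "unique": u, "value_type": v}
        for n, u, v in zip(attrs["name"], attrs["unique"], attrs["value_type"])
    ]

def unlinearize_person(names, values):
    """Rebuild one person's attribute dict from the paired
    attribute-name / attribute-value lists in `scenario_kbs`."""
    return dict(zip(names, values))

# Values taken from the card's example instance:
attrs = unlinearize_attributes({
    "name": ["School", "Company", "Location Preference"],
    "unique": [False, False, False],
    "value_type": ["school", "company", "loc_pref"],
})
person = unlinearize_person(
    ["School", "Company", "Location Preference"],
    ["Longwood College", "Alton Steel", "indoor"],
)
```

The same pattern applies to the `events` dictionary: index every sub-list at position `j` to recover the j-th turn.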
11,255
[ [ -0.03564453125, -0.049285888671875, 0.0352783203125, 0.0167083740234375, 0.0158538818359375, 0.0052032470703125, 0.0002028942108154297, -0.0175018310546875, 0.045013427734375, 0.026275634765625, -0.054229736328125, -0.0684814453125, -0.036041259765625, 0.002...
nkjp-ner
2023-01-25T14:41:28.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:gpl-3.0", "region:us" ]
null
The NKJP-NER is based on a human-annotated part of the National Corpus of Polish (NKJP). We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.
@book{przepiorkowski2012narodowy, title={Narodowy korpus jezyka polskiego}, author={Przepi{\'o}rkowski, Adam}, year={2012}, publisher={Naukowe PWN} }
1
90
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - gpl-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: NKJP NER dataset_info: features: - name: sentence dtype: string - name: target dtype: class_label: names: '0': geogName '1': noEntity '2': orgName '3': persName '4': placeName '5': time splits: - name: train num_bytes: 1612125 num_examples: 15794 - name: test num_bytes: 221092 num_examples: 2058 - name: validation num_bytes: 196652 num_examples: 1941 download_size: 821629 dataset_size: 2029869 --- # Dataset Card for NKJP NER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nkjp.pl/index.php?page=0&lang=1 - **Repository:** - **Paper:** @book{przepiorkowski2012narodowy, title={Narodowy korpus j{\k{e}}zyka polskiego}, author={Przepi{\'o}rkowski, Adam}, year={2012}, publisher={Naukowe PWN}} - 
**Leaderboard:** - **Point of Contact:** adamp@ipipan.waw.pl ### Dataset Summary A linguistic corpus is a collection of texts where one can find the typical use of a single word or a phrase, as well as their meaning and grammatical function. Nowadays, without access to a language corpus, it has become impossible to do linguistic research, to write dictionaries, grammars and language teaching books, to create search engines sensitive to Polish inflection, machine translation engines and software of advanced language technology. Language corpora have become an essential tool for linguists, but they are also helpful for software engineers, scholars of literature and culture, historians, librarians and other specialists of art and computer sciences. This dataset is the manually annotated 1-million-word subcorpus of the NKJP, available under GNU GPL v.3. ### Supported Tasks and Leaderboards Named entity recognition [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances Two tsv files (train, dev) with two columns (sentence, target) and one (test) with just one (sentence). ### Data Fields - sentence - target ### Data Splits Data is split into train/dev/test. ## Dataset Creation ### Curation Rationale This dataset is one of nine evaluation tasks to improve Polish language processing. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information GNU GPL v.3 ### Citation Information @book{przepiorkowski2012narodowy, title={Narodowy korpus j{\k{e}}zyka polskiego}, author={Przepi{\'o}rkowski, Adam}, year={2012}, publisher={Naukowe PWN} } ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
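The two-column tsv layout described under "Data Instances" in the card above can be read with the standard `csv` module. A minimal sketch, assuming header-less files with `sentence` and `target` columns (check the actual files before relying on this); the `read_split` helper and the sample sentence are illustrative:

```python
import csv
import io

def read_split(fileobj):
    """Read a two-column (sentence, target) tab-separated split."""
    return [
        {"sentence": row[0], "target": row[1]}
        for row in csv.reader(fileobj, delimiter="\t")
    ]

# A made-up line using one of the card's class labels (placeName):
sample = io.StringIO("Mieszkam w Warszawie .\tplaceName\n")
rows = read_split(sample)
```

The test split would need a one-column variant, since the card notes it carries only the sentence.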
4,446
[ [ -0.040740966796875, -0.052276611328125, 0.00583648681640625, 0.0196685791015625, -0.020721435546875, -0.00559234619140625, -0.044769287109375, -0.0316162109375, 0.04669189453125, 0.051361083984375, -0.053985595703125, -0.06134033203125, -0.0457763671875, 0.0...
wmt20_mlqe_task3
2023-01-25T15:02:49.000Z
[ "task_categories:translation", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:extended|amazon_us_reviews", "language:en", "language:fr", "license:unknown", "...
null
This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task. The goal of this task 3 is to predict document-level quality scores as well as fine-grained annotations.
Not available.
0
90
2022-03-02T23:29:22
--- annotations_creators: - expert-generated - machine-generated language_creators: - found language: - en - fr license: - unknown multilinguality: - translation size_categories: - 1K<n<10K source_datasets: - extended|amazon_us_reviews task_categories: - translation task_ids: [] pretty_name: WMT20 - MultiLingual Quality Estimation (MLQE) Task3 dataset_info: features: - name: document_id dtype: string - name: source_segments sequence: string - name: source_tokenized sequence: string - name: mt_segments sequence: string - name: mt_tokenized sequence: string - name: annotations sequence: - name: segment_id sequence: int32 - name: annotation_start sequence: int32 - name: annotation_length sequence: int32 - name: severity dtype: class_label: names: '0': minor '1': major '2': critical - name: severity_weight dtype: float32 - name: category dtype: class_label: names: '0': Addition '1': Agreement '2': Ambiguous Translation '3': Capitalization '4': Character Encoding '5': Company Terminology '6': Date/Time '7': Diacritics '8': Duplication '9': False Friend '10': Grammatical Register '11': Hyphenation '12': Inconsistency '13': Lexical Register '14': Lexical Selection '15': Named Entity '16': Number '17': Omitted Auxiliary Verb '18': Omitted Conjunction '19': Omitted Determiner '20': Omitted Preposition '21': Omitted Pronoun '22': Orthography '23': Other POS Omitted '24': Over-translation '25': Overly Literal '26': POS '27': Punctuation '28': Shouldn't Have Been Translated '29': Shouldn't have been translated '30': Spelling '31': Tense/Mood/Aspect '32': Under-translation '33': Unidiomatic '34': Unintelligible '35': Unit Conversion '36': Untranslated '37': Whitespace '38': Word Order '39': Wrong Auxiliary Verb '40': Wrong Conjunction '41': Wrong Determiner '42': Wrong Language Variety '43': Wrong Preposition '44': Wrong Pronoun - name: token_annotations sequence: - name: segment_id sequence: int32 - name: first_token sequence: int32 - name: last_token sequence: int32 - name: 
token_after_gap sequence: int32 - name: severity dtype: class_label: names: '0': minor '1': major '2': critical - name: category dtype: class_label: names: '0': Addition '1': Agreement '2': Ambiguous Translation '3': Capitalization '4': Character Encoding '5': Company Terminology '6': Date/Time '7': Diacritics '8': Duplication '9': False Friend '10': Grammatical Register '11': Hyphenation '12': Inconsistency '13': Lexical Register '14': Lexical Selection '15': Named Entity '16': Number '17': Omitted Auxiliary Verb '18': Omitted Conjunction '19': Omitted Determiner '20': Omitted Preposition '21': Omitted Pronoun '22': Orthography '23': Other POS Omitted '24': Over-translation '25': Overly Literal '26': POS '27': Punctuation '28': Shouldn't Have Been Translated '29': Shouldn't have been translated '30': Spelling '31': Tense/Mood/Aspect '32': Under-translation '33': Unidiomatic '34': Unintelligible '35': Unit Conversion '36': Untranslated '37': Whitespace '38': Word Order '39': Wrong Auxiliary Verb '40': Wrong Conjunction '41': Wrong Determiner '42': Wrong Language Variety '43': Wrong Preposition '44': Wrong Pronoun - name: token_index sequence: sequence: sequence: int32 - name: total_words dtype: int32 config_name: plain_text splits: - name: train num_bytes: 10762355 num_examples: 1448 - name: test num_bytes: 745260 num_examples: 180 - name: validation num_bytes: 1646596 num_examples: 200 download_size: 3534634 dataset_size: 13154211 --- # Dataset Card for WMT20 - MultiLingual Quality Estimation (MLQE) Task3 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - 
[Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [WMT20 Quality Estimation Shared Task](http://www.statmt.org/wmt20/quality-estimation-task.html) - **Repository**: [Github repository](https://github.com/deep-spin/deep-spin.github.io/tree/master/docs/data/wmt2020_qe) - **Paper:** *Not available* ### Dataset Summary From the homepage: *This shared task (part of WMT20) will build on its previous editions to further examine automatic methods for estimating the quality of neural machine translation output at run-time, without relying on reference translations. As in previous years, we cover estimation at various levels. Important elements introduced this year include: a new task where sentences are annotated with Direct Assessment (DA) scores instead of labels based on post-editing; a new multilingual sentence-level dataset mainly from Wikipedia articles, where the source articles can be retrieved for document-wide context; the availability of NMT models to explore system-internal information for the task.* *The goal of this task 3 is to predict document-level quality scores as well as fine-grained annotations.* *Each document has a product title and its description, and is annotated for translation errors according to the MQM framework. 
Each error annotation has:* - ***Word span(s).*** *Errors may consist of one or more words, not necessarily contiguous.* - ***Severity.*** *An error can be minor (if it doesn't lead to a loss of meaning and it doesn't confuse or mislead the user), major (if it changes the meaning) or critical (if it changes the meaning and carries any type of implication, or could be seen as offensive).* - ***Type.*** *A label specifying the error type, such as wrong word order, missing words, agreement, etc. They may provide additional information, but systems don't need to predict them.* ### Supported Tasks and Leaderboards From the homepage: *Submissions will be evaluated as in Task 1, in terms of Pearson's correlation between the true and predicted MQM document-level scores. Additionally, the predicted annotations will be evaluated in terms of their F1 scores with respect to the gold annotations. The [official evaluation scripts](https://github.com/sheffieldnlp/qe-eval-scripts) are available.* ### Languages There is a single language pair in the dataset: English (`en`) - French (`fr`). 
## Dataset Structure ### Data Instances An example looks like this: ``` { 'document_id': 'B0000568SY', 'source_segments': ['Razor Scooter Replacement Wheels Set with Bearings', 'Scooter Wheels w/Bearings-Blue'], 'source_tokenized': ['Razor Scooter Replacement Wheels Set with Bearings', 'Scooter Wheels w / Bearings-Blue'], 'mt_segments': ['Roues de rechange Razor Scooter sertie de roulements', 'Roues de scooter w/roulements-bleu'], 'mt_tokenized': ['Roues de rechange Razor Scooter sertie de roulements', 'Roues de scooter w / roulements-bleu'], 'annotations': { 'segment_id': [[0], [1], [1], [0, 0], [0], [1], [1]], 'annotation_start': [[42], [19], [9], [0, 32], [9], [17], [30]], 'annotation_length': [[10], [10], [7], [5, 6], [8], [1], [4]], 'severity': [0, 0, 0, 0, 0, 1, 0], 'severity_weight': [1.0, 1.0, 1.0, 1.0, 1.0, 5.0, 1.0], 'category': [3, 3, 3, 1, 3, 36, 3], }, 'token_annotations': { 'category': [3, 3, 3, 1, 3, 36, 3], 'first_token': [[7], [5], [2], [0, 5], [2], [3], [5]], 'last_token': [[7], [5], [2], [0, 5], [2], [3], [5]], 'segment_id': [[0], [1], [1], [0, 0], [0], [1], [1]], 'severity': [0, 0, 0, 0, 0, 1, 0], 'token_after_gap': [[-1], [-1], [-1], [-1, -1], [-1], [-1], [-1]] }, 'token_index': [[[0, 5], [6, 2], [9, 8], [18, 5], [24, 7], [32, 6], [39, 2], [42, 10]], [[0, 5], [6, 2], [9, 7], [17, 1], [18, 1], [19, 15]]], 'total_words': 16 } ``` ### Data Fields - `document_id`: the document id (name of the folder). - `source_segments`: the original source text, one sentence per line (i.e. per element of the list). - `source_tokenized`: a tokenized version of `source_segments`. - `mt_segments`: the original machine-translated text, one sentence per line (i.e. per element of the list). - `mt_tokenized`: a tokenized version of `mt_segments`. Default value is `[]` when this information is not available (it happens 3 times in the train set: `B0001BW0PQ`, `B0001GS19U` and `B000A6SMJ0`). - `annotations`: error annotations for the document. 
Each item of the list corresponds to an error annotation, which in turn may contain one or more error spans. Error fields are encoded in a dictionary. In the case of a multi-span error, multiple starting positions and lengths are encoded in the list. Note that these positions point to `mt_segments`, not `mt_tokenized`. - `segment_id`: List of list of integers. Id of each error. - `annotation_start`: List of list of integers. Start of each error. - `annotation_length`: List of list of integers. Length of each error. - `severity`: List of class labels. Severity category of each error. - `severity_weight`: List of floats. Severity weight of each error. - `category`: List of class labels. Category of each error. See the 45 categories in `_ANNOTATION_CATEGORIES_MAPPING`. - `token_annotations`: tokenized version of `annotations`. Each error span that contains one or more tokens has a "first token" and "last token". Again, multi-span errors have their first and last tokens encoded in a list. When a span is over a gap between two tokens, the "first" and "last" positions are `-1` (encoded as `-` in the original data), and instead the `token_after_gap` column points to the token immediately after the gap. In case of a gap occurring at the end of the sentence, this value will be equal to the number of tokens. - `segment_id`: List of list of integers. Id of each error. - `first_token`: List of list of integers. Start of each error. - `last_token`: List of list of integers. End of each error. - `token_after_gap`: List of list of integers. Token after gap of each error. - `severity`: List of class labels. Severity category of each error. - `category`: List of class labels. Category of each error. See the 45 categories in `_ANNOTATION_CATEGORIES_MAPPING`. - `token_index`: a mapping of tokens to their start and ending positions in `mt_segments`. For each token, a start and end value are encoded in a list of length 2, and all tokens represent one item in the list. 
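To make the span encoding concrete, the following snippet recovers the raw text of each error span by slicing `mt_segments` with the character offsets from `annotations`, using values from the example instance above. It is purely illustrative and not part of any official loader; the helper name `error_spans` is our own.

```python
# Offsets in `annotations` index into `mt_segments` (the untokenized MT
# output), not `mt_tokenized` -- see the field description above.
mt_segments = [
    "Roues de rechange Razor Scooter sertie de roulements",
    "Roues de scooter w/roulements-bleu",
]
annotations = {
    "segment_id": [[0], [1]],
    "annotation_start": [[42], [19]],
    "annotation_length": [[10], [10]],
}

def error_spans(mt_segments, annotations):
    spans = []
    for seg_ids, starts, lengths in zip(
        annotations["segment_id"],
        annotations["annotation_start"],
        annotations["annotation_length"],
    ):
        # One error may cover several (possibly non-contiguous) spans.
        spans.append(
            [mt_segments[s][a : a + l] for s, a, l in zip(seg_ids, starts, lengths)]
        )
    return spans

print(error_spans(mt_segments, annotations))
# -> [['roulements'], ['roulements']]
```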
- `total_words`: total number of words in the document ``` _ANNOTATION_CATEGORIES_MAPPING = { 0: 'Addition', 1: 'Agreement', 2: 'Ambiguous Translation', 3: 'Capitalization', 4: 'Character Encoding', 5: 'Company Terminology', 6: 'Date/Time', 7: 'Diacritics', 8: 'Duplication', 9: 'False Friend', 10: 'Grammatical Register', 11: 'Hyphenation', 12: 'Inconsistency', 13: 'Lexical Register', 14: 'Lexical Selection', 15: 'Named Entity', 16: 'Number', 17: 'Omitted Auxiliary Verb', 18: 'Omitted Conjunction', 19: 'Omitted Determiner', 20: 'Omitted Preposition', 21: 'Omitted Pronoun', 22: 'Orthography', 23: 'Other POS Omitted', 24: 'Over-translation', 25: 'Overly Literal', 26: 'POS', 27: 'Punctuation', 28: "Shouldn't Have Been Translated", 29: "Shouldn't have been translated", 30: 'Spelling', 31: 'Tense/Mood/Aspect', 32: 'Under-translation', 33: 'Unidiomatic', 34: 'Unintelligible', 35: 'Unit Conversion', 36: 'Untranslated', 37: 'Whitespace', 38: 'Word Order', 39: 'Wrong Auxiliary Verb', 40: 'Wrong Conjunction', 41: 'Wrong Determiner', 42: 'Wrong Language Variety', 43: 'Wrong Preposition', 44: 'Wrong Pronoun' } ``` ### Data Splits The dataset contains 1,448 documents for training, 200 documents for validation and 180 for (blind) test (all English-French). ## Dataset Creation ### Curation Rationale The data is derived from the [Amazon Product Reviews dataset](http://jmcauley.ucsd.edu/data/amazon/). ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown ### Citation Information ``` Not available. ``` ### Contributions Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
14,790
[ [ -0.034912109375, -0.04119873046875, 0.0200042724609375, 0.031524658203125, -0.018890380859375, -0.0029850006103515625, -0.01355743408203125, -0.0228118896484375, 0.030242919921875, 0.032989501953125, -0.059783935546875, -0.06524658203125, -0.054168701171875, ...
GEM/cochrane-simplification
2022-10-24T15:30:10.000Z
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
GEM
This dataset measures the ability of a model to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon.
@inproceedings{devaraj-etal-2021-paragraph, title = "Paragraph-level Simplification of Medical Texts", author = "Devaraj, Ashwin and Marshall, Iain and Wallace, Byron and Li, Junyi Jessy", booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.naacl-main.395", doi = "10.18653/v1/2021.naacl-main.395", pages = "4972--4984", }
3
90
2022-03-02T23:29:22
--- annotations_creators: - none language_creators: - unknown language: - en license: - cc-by-4.0 multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - text2text-generation task_ids: - text-simplification pretty_name: cochrane-simplification --- # Dataset Card for GEM/cochrane-simplification ## Dataset Description - **Homepage:** https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts - **Repository:** https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts - **Paper:** https://aclanthology.org/2021.naacl-main.395/ - **Leaderboard:** N/A - **Point of Contact:** Ashwin Devaraj ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/cochrane-simplification). ### Dataset Summary Cochrane is an English dataset for paragraph-level simplification of medical texts. Cochrane is a database of systematic reviews of clinical questions, many of which have summaries in plain English targeting readers without a university education. The dataset comprises about 4,500 such pairs. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/cochrane-simplification') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/cochrane-simplification). #### website [Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts) #### paper [Link](https://aclanthology.org/2021.naacl-main.395/) #### authors Ashwin Devaraj (The University of Texas at Austin), Iain J. Marshall (King's College London), Byron C. Wallace (Northeastern University), Junyi Jessy Li (The University of Texas at Austin) ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? 
--> <!-- scope: telescope --> [Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts) #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Link](https://github.com/AshOlogn/Paragraph-level-Simplification-of-Medical-Texts) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [Link](https://aclanthology.org/2021.naacl-main.395/) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{devaraj-etal-2021-paragraph, title = "Paragraph-level Simplification of Medical Texts", author = "Devaraj, Ashwin and Marshall, Iain and Wallace, Byron and Li, Junyi Jessy", booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.naacl-main.395", doi = "10.18653/v1/2021.naacl-main.395", pages = "4972--4984", } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Ashwin Devaraj #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> ashwin.devaraj@utexas.edu #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? 
--> <!-- scope: telescope --> no #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English` #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> cc-by-4.0: Creative Commons Attribution 4.0 International #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> The intended use of this dataset is to train models that simplify medical text at the paragraph level so that it may be more accessible to the lay reader. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Simplification #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> A model trained on this dataset can be used to simplify medical texts to make them more accessible to readers without medical expertise. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> The University of Texas at Austin, King's College London, Northeastern University #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Ashwin Devaraj (The University of Texas at Austin), Iain J. Marshall (King's College London), Byron C. Wallace (Northeastern University), Junyi Jessy Li (The University of Texas at Austin) #### Funding <!-- info: Who funded the data creation? 
--> <!-- scope: microscope --> National Institutes of Health (NIH) grant R01-LM012086, National Science Foundation (NSF) grant IIS-1850153, Texas Advanced Computing Center (TACC) computational resources #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Ashwin Devaraj (The University of Texas at Austin) ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> - `gem_id`: string, a unique identifier for the example - `doi`: string, DOI identifier for the Cochrane review from which the example was generated - `source`: string, an excerpt from an abstract of a Cochrane review - `target`: string, an excerpt from the plain-language summary of a Cochrane review that roughly aligns with the source text #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { "gem_id": "gem-cochrane-simplification-train-766", "doi": "10.1002/14651858.CD002173.pub2", "source": "Of 3500 titles retrieved from the literature, 24 papers reporting on 23 studies could be included in the review. The studies were published between 1970 and 1997 and together included 1026 participants. Most were cross-over studies. Few studies provided sufficient information to judge the concealment of allocation. Four studies provided results for the percentage of symptom-free days. Pooling the results did not reveal a statistically significant difference between sodium cromoglycate and placebo. For the other pooled outcomes, most of the symptom-related outcomes and bronchodilator use showed statistically significant results, but treatment effects were small. 
Considering the confidence intervals of the outcome measures, a clinically relevant effect of sodium cromoglycate cannot be excluded. The funnel plot showed an under-representation of small studies with negative results, suggesting publication bias. There is insufficient evidence to be sure about the efficacy of sodium cromoglycate over placebo. Publication bias is likely to have overestimated the beneficial effects of sodium cromoglycate as maintenance therapy in childhood asthma.", "target": "In this review we aimed to determine whether there is evidence for the effectiveness of inhaled sodium cromoglycate as maintenance treatment in children with chronic asthma. Most of the studies were carried out in small groups of patients. Furthermore, we suspect that not all studies undertaken have been published. The results show that there is insufficient evidence to be sure about the beneficial effect of sodium cromoglycate compared to placebo. However, for several outcome measures the results favoured sodium cromoglycate." } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> - `train`: 3568 examples - `validation`: 411 examples - `test`: 480 examples ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> This dataset is the first paragraph-level simplification dataset published (as prior work had primarily focused on simplifying individual sentences). Furthermore, this dataset is in the medical domain, which is an especially useful domain for text simplification. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> no #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? 
--> <!-- scope: periscope --> This dataset measures the ability of a model to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon. ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> no #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> no ### Getting Started with the Task ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> This dataset measures the ability of a model to simplify paragraphs of medical text through the omission of non-salient information and the simplification of medical jargon. #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `Other: Other Metrics`, `BLEU` #### Other Metrics <!-- info: Definitions of other metrics --> <!-- scope: periscope --> SARI measures the quality of text simplification #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Relevant Previous Results <!-- info: What are the most relevant previous results for this task/dataset? --> <!-- scope: microscope --> The paper which introduced this dataset trained BART models (pretrained on XSum) with unlikelihood training to produce simplification models achieving maximum SARI and BLEU scores of 40 and 43 respectively. ## Dataset Curation ### Original Curation #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> no ### Language Data #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> not validated #### Was Data Filtered? 
<!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> not filtered ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> yes/very likely #### Any PII Identification? <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? --> <!-- scope: periscope --> no identification ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> yes #### Details on how Dataset Addresses the Needs <!-- info: Describe how this dataset addresses the needs of underserved communities. 
--> <!-- scope: microscope --> This dataset can be used to simplify medical texts that may otherwise be inaccessible to those without medical training. ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> unsure #### Are the Language Producers Representative of the Language? <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? --> <!-- scope: periscope --> The dataset was generated from abstracts and plain-language summaries of medical literature reviews that were written by medical professionals and thus was not generated by people representative of the entire English-speaking population. ## Considerations for Using the Data ### PII Risks and Liability ### Licenses ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. --> <!-- scope: microscope --> The main limitation of this dataset is that the information alignment between the abstract and plain-language summary is often rough, so the plain-language summary may contain information that isn't found in the abstract. Furthermore, the plain-language targets often contain formulaic statements like "this evidence is current to [month][year]" not found in the abstracts. Another limitation is that some plain-language summaries do not simplify the technical abstracts very much and still contain medical jargon. 
#### Unsuited Applications <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. --> <!-- scope: microscope --> The main pitfall to look out for is errors in factuality. Simplification work so far has not placed a strong emphasis on the logical fidelity of model generations with the input text, and the paper introducing this dataset does not explore modeling techniques to combat this. These kinds of errors are especially pernicious in the medical domain, and the models introduced in the paper do occasionally alter entities like disease and medication names.
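As a rough, purely illustrative sanity check on the `source`/`target` pairs described in this card: one can compare token counts to see how much a plain-language summary condenses its abstract. This ratio is an assumption of ours, not one of the dataset's metrics (the paper reports SARI and BLEU), and the example strings below are abbreviated fragments, not real instances.

```python
def compression_ratio(source: str, target: str) -> float:
    """Ratio of target length to source length in whitespace tokens.

    A crude, illustrative proxy only -- not an official metric for
    this dataset.
    """
    return len(target.split()) / max(len(source.split()), 1)

source = "Pooling the results did not reveal a statistically significant difference ."
target = "The results show no clear difference ."
print(compression_ratio(source, target))
```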
16,372
GEM/wiki_auto_asset_turk
2022-10-24T15:31:10.000Z
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:crowd-sourced", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:other", "arxiv:1910.02677", "arxiv:2005.00352", "...
GEM
WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the manual config in this version of the dataset), then trained a neural CRF system to predict these alignments. The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the auto and auto_acl configs here).
@inproceedings{jiang-etal-2020-neural, title = "Neural {CRF} Model for Sentence Alignment in Text Simplification", author = "Jiang, Chao and Maddela, Mounica and Lan, Wuwei and Zhong, Yang and Xu, Wei", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.709", doi = "10.18653/v1/2020.acl-main.709", pages = "7943--7960", }
3
90
2022-03-02T23:29:22
--- annotations_creators: - crowd-sourced language_creators: - unknown language: - en license: - other multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - text2text-generation task_ids: - text-simplification pretty_name: wiki_auto_asset_turk --- # Dataset Card for GEM/wiki_auto_asset_turk ## Dataset Description - **Homepage:** n/a - **Repository:** https://github.com/chaojiang06/wiki-auto, [ASSET repository](https://github.com/facebookresearch/asset), [TURKCorpus](https://github.com/cocoxu/simplification) - **Paper:** https://aclanthology.org/2020.acl-main.709/, [ASSET](https://aclanthology.org/2020.acl-main.424/), [TURK](https://aclanthology.org/Q16-1029/) - **Leaderboard:** N/A - **Point of Contact:** WikiAuto: Chao Jiang; ASSET: Fernando Alva-Manchego and Louis Martin; TURK: Wei Xu ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/wiki_auto_asset_turk). ### Dataset Summary WikiAuto is an English simplification dataset that we paired with ASSET and TURK, two very high-quality evaluation datasets, as test sets. The input is an English sentence taken from Wikipedia and the target is a simplified sentence. ASSET and TURK contain the same test examples but have references that are simplified in different ways (splitting sentences vs. rewriting and splitting). You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/wiki_auto_asset_turk') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk). 
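Beyond loading, it can help to normalize records into plain (source, reference) pairs before scoring. The following is a minimal sketch (not part of the official loader), assuming the `source`/`target`/`references` fields described under Data Fields below; the example record is invented:

```python
# Sketch: flatten GEM-style records into (source, reference) pairs so that
# multi-reference test sets (ASSET/TURK) can be scored one pair at a time.
# Field names follow this card's Data Fields section; the record is made up.

def expand_references(record):
    """Return one (source, reference) pair per available reference,
    falling back to the single `target` when `references` is empty."""
    refs = record.get("references") or [record["target"]]
    return [(record["source"], ref) for ref in refs]

record = {
    "source": "In early work, Rutherford discovered the concept of radioactive half-life.",
    "target": "Rutherford discovered the radioactive half-life.",
    "references": [
        "Rutherford discovered the radioactive half-life.",
        "Rutherford found the half-life of radioactive elements.",
    ],
}

pairs = expand_references(record)  # one pair per reference
```

Because WikiAuto training records carry only a single `target`, the fallback keeps the same code path usable for train, validation, and test data.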
#### website n/a #### paper [WikiAuto](https://aclanthology.org/2020.acl-main.709/), [ASSET](https://aclanthology.org/2020.acl-main.424/), [TURK](https://aclanthology.org/Q16-1029/) #### authors WikiAuto: Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wei Xu; ASSET: Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia; TURK: Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch ## Dataset Overview ### Where to find the Data and its Documentation #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Wiki-Auto repository](https://github.com/chaojiang06/wiki-auto), [ASSET repository](https://github.com/facebookresearch/asset), [TURKCorpus](https://github.com/cocoxu/simplification) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [WikiAuto](https://aclanthology.org/2020.acl-main.709/), [ASSET](https://aclanthology.org/2020.acl-main.424/), [TURK](https://aclanthology.org/Q16-1029/) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of a Google Scholar-created BibTeX. 
--> <!-- scope: microscope --> WikiAuto: ``` @inproceedings{jiang-etal-2020-neural, title = "Neural {CRF} Model for Sentence Alignment in Text Simplification", author = "Jiang, Chao and Maddela, Mounica and Lan, Wuwei and Zhong, Yang and Xu, Wei", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.709", doi = "10.18653/v1/2020.acl-main.709", pages = "7943--7960", } ``` ASSET: ``` @inproceedings{alva-manchego-etal-2020-asset, title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations", author = "Alva-Manchego, Fernando and Martin, Louis and Bordes, Antoine and Scarton, Carolina and Sagot, Beno{\^\i}t and Specia, Lucia", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.424", pages = "4668--4679", } ``` TURK: ``` @article{Xu-EtAl:2016:TACL, author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch}, title = {Optimizing Statistical Machine Translation for Text Simplification}, journal = {Transactions of the Association for Computational Linguistics}, volume = {4}, year = {2016}, url = {https://cocoxu.github.io/publications/tacl2016-smt-simplification.pdf}, pages = {401--415} } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. 
--> <!-- scope: periscope --> WikiAuto: Chao Jiang; ASSET: Fernando Alva-Manchego and Louis Martin; TURK: Wei Xu #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> jiang.1530@osu.edu, f.alva@sheffield.ac.uk, louismartincs@gmail.com, wei.xu@cc.gatech.edu #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `English` #### Whose Language? <!-- info: Whose language is in the dataset? --> <!-- scope: periscope --> Wiki-Auto contains English text only (BCP-47: `en`). It is presented as a translation task where Wikipedia Simple English is treated as its own idiom. For a statement of what is intended (but not always observed) to constitute Simple English on this platform, see [Simple English in Wikipedia](https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_English). Both ASSET and TURK use crowdsourcing to change references, and their language is thus a combination of the WikiAuto data and the language of the Mechanical Turk worker demographic. #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> other: Other license #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems. 
The authors first crowd-sourced a set of manual alignments between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wikipedia (this corresponds to the `manual` config in this version of the dataset), then trained a neural CRF system to predict these alignments. The trained alignment prediction model was then applied to the other articles in Simple English Wikipedia with an English counterpart to create a larger corpus of aligned sentences (corresponding to the `auto` and `auto_acl` configs here). [ASSET](https://github.com/facebookresearch/asset) [(Alva-Manchego et al., 2020)](https://www.aclweb.org/anthology/2020.acl-main.424.pdf) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from [TurkCorpus](https://github.com/cocoxu/simplification/) [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence splitting in [HSplit](https://www.aclweb.org/anthology/D18-1081.pdf)), the simplifications in ASSET encompass a variety of rewriting transformations. TURKCorpus is a high-quality simplification dataset where each source (not simple) sentence is associated with 8 human-written simplifications that focus on lexical paraphrasing. It is one of the two evaluation datasets for the text simplification task in GEM. It acts as the validation and test set for paraphrasing-based simplification that does not involve sentence splitting and deletion. #### Add. License Info <!-- info: What is the 'other' license of the dataset? --> <!-- scope: periscope --> WikiAuto: `CC BY-NC 3.0`, ASSET: `CC BY-NC 4.0`, TURK: `GNU General Public License v3.0` #### Primary Task <!-- info: What primary task does the dataset support? 
--> <!-- scope: telescope --> Simplification #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> The goal is to communicate the main ideas of the source sentence in a way that is easier for non-native speakers of English to understand. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic`, `industry` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> Ohio State University, University of Sheffield, Inria, Facebook AI Research, Imperial College London, University of Pennsylvania, Johns Hopkins University #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> WikiAuto: Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, Wei Xu; ASSET: Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia; TURK: Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> WikiAuto: NSF, ODNI, IARPA, Figure Eight AI, and Criteo. ASSET: PRAIRIE Institute, ANR. TURK: NSF #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> GEM v1 had separate data cards for WikiAuto, ASSET, and TURK. They were contributed by Dhruv Kumar and Mounica Maddela. The initial data loader was written by Yacine Jernite. Sebastian Gehrmann merged and extended the data cards and migrated the loader to the v2 infrastructure. 
### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> - `source`: A source sentence from one of the datasets - `target`: A single simplified sentence corresponding to `source` - `references`: In the case of ASSET/TURK, `references` is a list of strings corresponding to the different references. #### Reason for Structure <!-- info: How was the dataset structure determined? --> <!-- scope: microscope --> The underlying datasets have extensive secondary annotations that can be used in conjunction with the GEM version. We omit those annotations to simplify the format into one that can be used by seq2seq models. #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { 'source': 'In early work, Rutherford discovered the concept of radioactive half-life , the radioactive element radon, and differentiated and named alpha and beta radiation .', 'target': 'Rutherford discovered the radioactive half-life, and the three parts of radiation which he named Alpha, Beta, and Gamma.' } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> In WikiAuto, which is used as training and validation set, the following splits are provided: | | Train | Dev | Test | | ----- | ------ | ----- | ---- | | Total sentence pairs | 373801 | 73249 | 118074 | | Aligned sentence pairs | 1889 | 346 | 677 | ASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training. For GEM, [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) will be used for training the model. Each input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below. 
| | Dev | Test | Total | | ----- | ------ | ---- | ----- | | Input Sentences | 2000 | 359 | 2359 | | Reference Simplifications | 20000 | 3590 | 23590 | The test and validation sets are the same as those of [TurkCorpus](https://github.com/cocoxu/simplification/). The split was random. There are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting. TURKCorpus does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) or [Wiki-Auto](https://github.com/chaojiang06/wiki-auto) (Jiang et al., 2020) for training. Each input sentence has 8 associated reference simplified sentences. 2,359 input sentences are randomly split into 2,000 validation and 359 test sentences. | | Dev | Test | Total | | ----- | ------ | ---- | ----- | | Input Sentences | 2000 | 359 | 2359 | | Reference Simplifications | 16000 | 2872 | 18872 | There are 21.29 tokens per reference on average. #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. --> <!-- scope: microscope --> In our setup, we use WikiAuto as training/validation corpus and ASSET and TURK as test corpora. ASSET and TURK have the same inputs but differ in their reference style. Researchers can thus conduct targeted evaluations based on the strategies that a model should learn. ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> WikiAuto is the largest open text simplification dataset currently available. 
ASSET and TURK are high-quality test sets that are compatible with WikiAuto. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> yes #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> no #### Difference from other GEM datasets <!-- info: What else sets this dataset apart from other similar datasets in GEM? --> <!-- scope: microscope --> Its unique setup with multiple test sets makes the task interesting since it allows for evaluation of multiple generations and systems that simplify in different ways. #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> simplification ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> yes #### GEM Modifications <!-- info: What changes have been made to the original dataset? --> <!-- scope: periscope --> `other` #### Modification Details <!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification --> <!-- scope: microscope --> We removed secondary annotations and focus on the simple `input->output` format, but combine the different sub-datasets. #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> yes #### Split Information <!-- info: Describe how the new splits were created --> <!-- scope: periscope --> We split the original test set according to syntactic complexity of the source sentences. To characterize sentence syntactic complexity, we use the 8-level developmental level (d-level) scale proposed by [Covington et al. 
(2006)](https://www.researchgate.net/publication/254033869_How_complex_is_that_sentence_A_proposed_revision_of_the_Rosenberg_and_Abbeduto_D-Level_Scale) and the implementation of [Lu, Xiaofei (2010)](https://www.jbe-platform.com/content/journals/10.1075/ijcl.15.4.02lu). We thus split the original test set into 8 subsets corresponding to the 8 d-levels assigned to source sentences. We obtain the following number of instances per level and average d-level of the dataset: | Total nb. sentences | L0 | L1 | L2 | L3 | L4 | L5 | L6 | L7 | Mean Level | |-------------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ---------- | | 359 | 166 | 0 | 58 | 32 | 5 | 28 | 7 | 63 | 2.38 | #### Split Motivation <!-- info: What aspects of the model's generation capacities were the splits created to test? --> <!-- scope: periscope --> The goal was to assess performance when simplifying source sentences with different syntactic structure and complexity. ### Getting Started with the Task #### Pointers to Resources <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. --> <!-- scope: microscope --> There are recent supervised ([Martin et al., 2019](https://arxiv.org/abs/1910.02677), [Kriz et al., 2019](https://www.aclweb.org/anthology/N19-1317/), [Dong et al., 2019](https://www.aclweb.org/anthology/P19-1331/), [Zhang and Lapata, 2017](https://www.aclweb.org/anthology/D17-1062/)) and unsupervised ([Martin et al., 2020](https://arxiv.org/abs/2005.00352v1), [Kumar et al., 2020](https://www.aclweb.org/anthology/2020.acl-main.707/), [Surya et al., 2019](https://www.aclweb.org/anthology/P19-1198/)) text simplification models that can be used as baselines. 
#### Technical Terms <!-- info: Technical terms used in this card and the dataset and their definitions --> <!-- scope: microscope --> The common metric used for automatic evaluation is SARI [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029/). ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Simplification #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `Other: Other Metrics`, `BLEU` #### Other Metrics <!-- info: Definitions of other metrics --> <!-- scope: periscope --> SARI: A simplification metric that considers both input and references to measure the "goodness" of words that are added, deleted, and kept. #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> The original authors of WikiAuto and ASSET used human evaluation to assess the fluency, adequacy, and simplicity (details provided in the paper). For TURK, the authors measured grammaticality, meaning-preservation, and simplicity gain (details in the paper). #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> no ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> Wiki-Auto provides a new version of the Wikipedia corpus that is larger, contains 75% fewer defective pairs, and has more complex rewrites than the previous WIKILARGE dataset. ASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the [TurkCorpus](https://github.com/cocoxu/simplification/) dataset from [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). 
The 2,359 input sentences of TurkCorpus are a sample of "standard" (not simple) sentences from the [Parallel Wikipedia Simplification (PWKP)](https://www.informatik.tu-darmstadt.de/ukp/research_6/data/sentence_simplification/simple_complex_sentence_pairs/index.en.jsp) dataset [(Zhu et al., 2010)](https://www.aclweb.org/anthology/C10-1152.pdf), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). No further information is provided on the sampling strategy. The TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit [(Sulem et al., 2018)](https://www.aclweb.org/anthology/D18-1081.pdf), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence. An example illustrating the differences between TurkCorpus, HSplit and ASSET is given below: > **Original:** He settled in London, devoting himself chiefly to practical teaching. > > **TurkCorpus:** He rooted in London, devoting himself mainly to practical teaching. > > **HSplit:** He settled in London. He devoted himself chiefly to practical teaching. > > **ASSET:** He lived in London. He was a teacher. 
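The rewriting strategies contrasted above are exactly what reference-based simplification metrics such as SARI (see Technical Terms) try to score. As a rough illustration only, here is a unigram-only toy version of the SARI idea; the real metric uses 1- to 4-grams, fractional counts, and precision rather than F1 for deletions, and tokenization here is a naive whitespace split:

```python
def toy_sari(source, prediction, references):
    """Unigram-only illustration of SARI: reward words correctly added,
    kept, and deleted relative to the source, as judged against the
    references. Returns a score in [0, 1]. NOT the official metric."""
    src = set(source.lower().split())
    pred = set(prediction.lower().split())
    ref_union = set()
    for r in references:
        ref_union |= set(r.lower().split())

    def f1(good, proposed, wanted):
        if not proposed and not wanted:
            return 1.0  # nothing should change in this category and nothing did
        p = len(good) / len(proposed) if proposed else 0.0
        r = len(good) / len(wanted) if wanted else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    # ADD: new words in the prediction that the references also introduce
    add_f1 = f1((pred - src) & ref_union, pred - src, ref_union - src)
    # KEEP: source words the prediction retains that the references retain too
    keep_f1 = f1((pred & src) & ref_union, pred & src, src & ref_union)
    # DELETE: source words the prediction drops that the references drop too
    del_f1 = f1((src - pred) - ref_union, src - pred, src - ref_union)
    return (add_f1 + keep_f1 + del_f1) / 3

score = toy_sari(
    "He settled in London, devoting himself chiefly to practical teaching.",
    "He settled in London. He devoted himself chiefly to practical teaching.",
    ["He lived in London. He was a teacher."],
)
```

Scoring the HSplit-style rewrite above against the ASSET reference shows how a splitting-only output earns partial credit for kept words while missing the reference's paraphrases.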
#### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> The goal is to communicate the same information as the source sentence using simpler words and grammar. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> yes #### Source Details <!-- info: List the sources (one per line) --> <!-- scope: periscope --> Wikipedia ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Single website` #### Language Producers <!-- info: What further information do we have on the language producers? --> <!-- scope: microscope --> The dataset uses language from Wikipedia: some demographic information is provided [here](https://en.wikipedia.org/wiki/Wikipedia:Who_writes_Wikipedia%3F). #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> not validated #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> algorithmically #### Filter Criteria <!-- info: What were the selection criteria? --> <!-- scope: microscope --> The authors mention that they "extracted 138,095 article pairs from the 2019/09 Wikipedia dump using an improved version of the [WikiExtractor](https://github.com/attardi/wikiextractor) library". The [SpaCy](https://spacy.io/) library is used for sentence splitting. ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> crowd-sourced #### Number of Raters <!-- info: What is the number of raters --> <!-- scope: telescope --> 11<n<50 #### Rater Qualifications <!-- info: Describe the qualifications required of an annotator. 
--> <!-- scope: periscope --> WikiAuto (Figure Eight): No information provided. ASSET (MTurk): - Having a HIT approval rate over 95%, and over 1000 HITs approved. No other demographic or compensation information is provided. - Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test. - Being a resident of the United States, United Kingdom or Canada. TURK (MTurk): - Reference sentences were written by workers with HIT approval rate over 95%. No other demographic or compensation information is provided. #### Raters per Training Example <!-- info: How many annotators saw each training example? --> <!-- scope: periscope --> 1 #### Raters per Test Example <!-- info: How many annotators saw each test example? --> <!-- scope: periscope --> >5 #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> yes #### Which Annotation Service <!-- info: Which annotation services were used? --> <!-- scope: periscope --> `Amazon Mechanical Turk`, `Appen` #### Annotation Values <!-- info: Purpose and values for each annotation --> <!-- scope: microscope --> WikiAuto: Sentence alignment labels were crowdsourced for 500 randomly sampled document pairs (10,123 sentence pairs total). The authors pre-selected several alignment candidates from English Wikipedia for each Simple Wikipedia sentence based on various similarity metrics, then asked the crowd-workers to annotate these pairs. Finally, they trained their alignment model on this manually annotated dataset to obtain automatically aligned sentences (138,095 document pairs, 488,332 sentence pairs). No demographic annotation is provided for the crowd workers. The [Figure Eight](https://www.figure-eight.com/) platform (now part of Appen) was used for the annotation process. ASSET: The instructions given to the annotators are available [here](https://github.com/facebookresearch/asset/blob/master/crowdsourcing/AMT_AnnotationInstructions.pdf). 
TURK: The references are crowdsourced from Amazon Mechanical Turk. The annotators were asked to provide simplifications without losing any information or splitting the input sentence. No other demographic or compensation information is provided in the TURKCorpus paper. The instructions given to the annotators are available in the paper. #### Any Quality Control? <!-- info: Quality control measures? --> <!-- scope: telescope --> none ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> yes #### Consent Policy Details <!-- info: What was the consent policy? --> <!-- scope: microscope --> Both Figure Eight and Amazon Mechanical Turk raters forfeit the right to their data as part of their agreements. ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> no PII #### Justification for no PII <!-- info: Provide a justification for selecting `no PII` above. --> <!-- scope: periscope --> Since the dataset is created from Wikipedia/Simple Wikipedia, all the information contained in the dataset is already in the public domain. ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? 
Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> yes #### Links and Summaries of Analysis Work <!-- info: Provide links to and summaries of works analyzing these biases. --> <!-- scope: microscope --> The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases [(Schmahl et al., 2020)](https://research.tudelft.nl/en/publications/is-wikipedia-succeeding-in-reducing-gender-bias-assessing-changes) and racial biases [(Adams et al., 2019)](https://journals.sagepub.com/doi/pdf/10.1177/2378023118823946). ## Considerations for Using the Data ### PII Risks and Liability #### Potential PII Risk <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy to the data subjects and creators risks when using the dataset. --> <!-- scope: microscope --> All the data is in the public domain. ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `open license - commercial use allowed` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? 
--> <!-- scope: periscope --> `open license - commercial use allowed` ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. --> <!-- scope: microscope --> The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases [(Schmahl et al., 2020)](https://research.tudelft.nl/en/publications/is-wikipedia-succeeding-in-reducing-gender-bias-assessing-changes) and racial biases [(Adams et al., 2019)](https://journals.sagepub.com/doi/pdf/10.1177/2378023118823946). #### Unsuited Applications <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. --> <!-- scope: microscope --> Since the test datasets contain only 2,359 sentences derived from Wikipedia, they are limited to a small subset of the topics present on Wikipedia.
31,669
[ [ -0.026702880859375, -0.05279541015625, 0.0220184326171875, 0.00647735595703125, -0.02423095703125, -0.023162841796875, -0.0333251953125, -0.02386474609375, 0.0292205810546875, 0.0222930908203125, -0.06402587890625, -0.049041748046875, -0.032501220703125, 0.0...
Llamacha/monolingual-quechua-iic
2022-10-02T14:40:00.000Z
[ "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<5M", "source_datasets:original", "language:qu", "license:apache-2.0", "regi...
Llamacha
null
null
0
90
2022-03-02T23:29:22
--- annotations_creators: - no-annotation language_creators: - crowdsourced language: - qu license: - apache-2.0 multilinguality: - monolingual size_categories: - 1M<n<5M source_datasets: - original task_categories: - fill-mask task_ids: - language-modeling - masked-language-modeling --- # Dataset Card for Monolingual-Quechua-IIC ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://llamacha.pe](https://llamacha.pe) - **Paper:** [Introducing QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua](https://aclanthology.org/2022.deeplo-1.1.pdf) - **Point of Contact:** [Rodolfo Zevallos](mailto:rodolfojoel.zevallos@upf.edu) - **Size of downloaded dataset files:** 373.28 MB ### Dataset Summary We present Monolingual-Quechua-IIC, a monolingual corpus of Southern Quechua, which can be used to build language models using Transformers models. This corpus also includes the Wiki and OSCAR corpora. 
We used this corpus to build Llama-RoBERTa-Quechua, the first language model for Southern Quechua using Transformers. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages Southern Quechua ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Apache-2.0 ### Citation Information ``` @inproceedings{zevallos2022introducing, title={Introducing QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua}, author={Zevallos, Rodolfo and Ortega, John and Chen, William and Castro, Richard and Bel, Nuria and Toshio, Cesar and Venturas, Renzo and Aradiel, Hilario and Melgarejo, Nelsi}, booktitle={Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing}, pages={1--13}, year={2022} } ``` ### Contributions Thanks to [@rjzevallos](https://github.com/rjzevallos) for adding this dataset.
4,676
[ [ -0.0313720703125, -0.02398681640625, -0.00437164306640625, 0.038970947265625, -0.01471710205078125, 0.017059326171875, -0.0400390625, -0.0276336669921875, 0.03271484375, 0.036590576171875, -0.042877197265625, -0.07879638671875, -0.02777099609375, 0.007499694...
caltonji/harrypotter_squad_v2
2021-12-31T19:11:29.000Z
[ "region:us" ]
caltonji
null
null
0
90
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
caltonji/harrypotter_squad_v2_2
2021-12-31T20:01:23.000Z
[ "region:us" ]
caltonji
null
null
0
90
2022-03-02T23:29:22
## Dataset Summary Contains 15 Harry Potter trivia questions in SQuAD v2 format, 3 of which are unanswerable. ## Model Performance [Test Notebook](https://colab.research.google.com/drive/1VFUJKV7eun68XgQDAHSHsbvoM_CGHzWA?usp=sharing) | Model | exact | f1 | | ----------- | ----------- | ----------- | | Albert Base ([twmkn9/albert-base-v2-squad2](https://huggingface.co/twmkn9/albert-base-v2-squad2)) | 46.6667 | 46.6667 | | Albert XXLarge ([ahotrod/albert_xxlargev1_squad2_512](https://huggingface.co/ahotrod/albert_xxlargev1_squad2_512)) | 66.6667 | 66.6667 |
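The `exact` and `f1` columns above follow the standard SQuAD scoring convention: answers are normalized (lower-cased, punctuation and articles stripped), then compared by exact string match and by token-overlap F1. A minimal sketch of those two metrics, using hypothetical answer strings rather than examples from this dataset:

```python
import collections
import re
import string

def normalize(text):
    """Lower-case, strip punctuation and articles, collapse whitespace (SQuAD convention)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    """Harmonic mean of token-level precision and recall after normalization."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    if not pred_tokens or not gold_tokens:
        # For unanswerable questions both sides are empty; only then is this 1.
        return float(pred_tokens == gold_tokens)
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Elder Wand", "elder wand"))  # 1
print(round(token_f1("the elder wand of Dumbledore", "elder wand"), 3))
```

With only 15 questions, each one is worth about 6.7 points, which is why the scores in the table land on multiples of 6.6667.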
580
[ [ -0.02447509765625, -0.036468505859375, 0.0014181137084960938, 0.055023193359375, 0.00011348724365234375, 0.00920867919921875, 0.0025806427001953125, -0.01526641845703125, 0.018798828125, 0.027984619140625, -0.06658935546875, -0.0247802734375, -0.0167236328125, ...
cointegrated/ru-paraphrase-NMT-Leipzig
2022-10-23T12:23:15.000Z
[ "task_categories:text-generation", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:extended|other", "language:ru", "license:cc-by-4.0", "conditional-text-generation", "paraphrase-generation", ...
cointegrated
null
null
4
90
2022-03-02T23:29:22
--- annotations_creators: - no-annotation language_creators: - machine-generated language: - ru license: - cc-by-4.0 multilinguality: - translation size_categories: - 100K<n<1M source_datasets: - extended|other task_categories: - text-generation pretty_name: ru-paraphrase-NMT-Leipzig tags: - conditional-text-generation - paraphrase-generation - paraphrase --- # Dataset Card for **cointegrated/ru-paraphrase-NMT-Leipzig** ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Paper:** https://habr.com/ru/post/564916/ - **Point of Contact:** [@cointegrated](https://huggingface.co/cointegrated) ### Dataset Summary The dataset contains 1 million Russian sentences and their automatically generated paraphrases. It was created by David Dale ([@cointegrated](https://huggingface.co/cointegrated)) by translating the `rus-ru_web-public_2019_1M` corpus from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) into English and back into Russian. 
A fraction of the resulting paraphrases are invalid and should be filtered out. The blogpost ["Перефразирование русских текстов: корпуса, модели, метрики"](https://habr.com/ru/post/564916/) provides a detailed description of the dataset and its properties. The dataset can be loaded with the following code: ```Python import datasets data = datasets.load_dataset( 'cointegrated/ru-paraphrase-NMT-Leipzig', data_files={"train": "train.csv","val": "val.csv","test": "test.csv"}, ) ``` Its output should look like ``` DatasetDict({ train: Dataset({ features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'], num_rows: 980000 }) val: Dataset({ features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'], num_rows: 10000 }) test: Dataset({ features: ['idx', 'original', 'en', 'ru', 'chrf_sim', 'labse_sim'], num_rows: 10000 }) }) ``` ### Supported Tasks and Leaderboards The dataset can be used to train and validate models for paraphrase generation or (if negative sampling is used) for paraphrase detection. ### Languages Russian (main), English (auxiliary). ## Dataset Structure ### Data Instances Data instances look like ``` { "labse_sim": 0.93502015, "chrf_sim": 0.4946451012684782, "idx": 646422, "ru": "О перспективах развития новых медиа-технологий в РФ расскажут на медиафоруме Енисея.", "original": "Перспективы развития новых медиатехнологий в Российской Федерации обсудят участники медиафорума «Енисей.", "en": "Prospects for the development of new media technologies in the Russian Federation will be discussed at the Yenisey Media Forum." } ``` Where `original` is the original sentence, and `ru` is its machine-generated paraphrase. ### Data Fields - `idx`: id of the instance in the original corpus - `original`: the original sentence - `en`: automatic translation of `original` to English - `ru`: automatic translation of `en` back to Russian, i.e. 
a paraphrase of `original` - `chrf_sim`: [ChrF++](https://huggingface.co/metrics/chrf) similarity of `original` and `ru` - `labse_sim`: cosine similarity of [LaBSE](https://huggingface.co/cointegrated/LaBSE-en-ru) embeddings of `original` and `ru` - `forward_entailment`: predicted probability that `original` entails `ru` - `backward_entailment`: predicted probability that `ru` entails `original` - `p_good`: predicted probability that `ru` and `original` have equivalent meaning ### Data Splits Train – 980K, validation – 10K, test – 10K. The splits were generated randomly. ## Dataset Creation ### Curation Rationale There are other Russian paraphrase corpora, but they have major drawbacks: - The best known [corpus from paraphraser.ru 2016 contest](http://paraphraser.ru/download/) is rather small and covers only the News domain. - [Opusparcus](https://huggingface.co/datasets/GEM/opusparcus), [ParaPhraserPlus](http://paraphraser.ru/download/), and [corpora of Tamara Zhordanija](https://github.com/tamriq/paraphrase) are noisy, i.e. a large proportion of sentence pairs in them have substantial difference in meaning. - The Russian part of [TaPaCo](https://huggingface.co/datasets/tapaco) has very high lexical overlap in the sentence pairs; in other words, their paraphrases are not diverse enough. The current corpus is generated with a dual objective: the paraphrases should be semantically as close as possible to the original sentences, while being lexically different from them. Back-translation with restricted vocabulary seems to achieve this goal often enough. ### Source Data #### Initial Data Collection and Normalization The `rus-ru_web-public_2019_1M` corpus from [the Leipzig collection](https://wortschatz.uni-leipzig.de/en/download) as is. The process of its creation is described [in this paper](http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf): D. Goldhahn, T. Eckart & U. 
Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages. In: *Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012*. #### Automatic paraphrasing The paraphrasing was carried out by translating the original sentence to English and then back to Russian. The models [facebook/wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en) and [facebook/wmt19-en-ru](https://huggingface.co/facebook/wmt19-en-ru) were used for translation. To ensure that the back-translated texts are not identical to the original texts, the final decoder was prohibited from using the token n-grams from the original texts. The code below implements the paraphrasing function. ```python import torch from transformers import FSMTModel, FSMTTokenizer, FSMTForConditionalGeneration tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-ru") model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-ru") inverse_tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-ru-en") inverse_model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-ru-en") model.cuda(); inverse_model.cuda(); def paraphrase(text, gram=4, num_beams=5, **kwargs): """ Generate a paraphrase using back translation. Parameter `gram` denotes size of token n-grams of the original sentence that cannot appear in the paraphrase. 
""" input_ids = inverse_tokenizer.encode(text, return_tensors="pt") with torch.no_grad(): outputs = inverse_model.generate(input_ids.to(inverse_model.device), num_beams=num_beams, **kwargs) other_lang = inverse_tokenizer.decode(outputs[0], skip_special_tokens=True) # print(other_lang) input_ids = input_ids[0, :-1].tolist() bad_word_ids = [input_ids[i:(i+gram)] for i in range(len(input_ids)-gram)] input_ids = tokenizer.encode(other_lang, return_tensors="pt") with torch.no_grad(): outputs = model.generate(input_ids.to(model.device), num_beams=num_beams, bad_words_ids=bad_word_ids, **kwargs) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) return decoded ``` The corpus was created by running the above `paraphrase` function on the original sentences with parameters `gram=3, num_beams=5, repetition_penalty=3.14, no_repeat_ngram_size=6`. ### Annotations #### Annotation process The dataset was annotated by several automatic metrics: - [ChrF++](https://huggingface.co/metrics/chrf) between `original` and `ru` sentences; - cosine similarity between [LaBSE](https://huggingface.co/cointegrated/LaBSE-en-ru) embeddings of these sentences; - forward and backward entailment probabilites predictd by the [rubert-base-cased-nli-twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) model; - `p_good`, a metric aggregating the four metrics above into a single number. It is obtained with a logistic regression trained on 100 randomly chosen from the train set and manually labelled sentence pairs. #### Who are the annotators? Human annotation was involved only for a small subset used to train the model for `p_good`. It was conduced by the dataset author, @cointegrated. ### Personal and Sensitive Information The dataset is not known to contain any personal or sensitive information. The sources and processes of original data collection are described at https://wortschatz.uni-leipzig.de/en/download. 
## Considerations for Using the Data ### Social Impact of Dataset The dataset may enable the creation of paraphrasing systems that can be used both for "good" purposes (such as assisting writers or augmenting text datasets), and for "bad" purposes (such as disguising plagiarism). The authors are not responsible for any uses of the dataset. ### Discussion of Biases The dataset may inherit some of the biases of [the underlying Leipzig web corpus](https://wortschatz.uni-leipzig.de/en/download) or the neural machine translation models ([1](https://huggingface.co/facebook/wmt19-ru-en), [2](https://huggingface.co/facebook/wmt19-en-ru)) with which it was generated. ### Other Known Limitations Most of the paraphrases in the dataset are valid (by a rough estimate, at least 80%). However, in some sentence pairs there are faults: - Named entities are often spelled in different ways (e.g. `"Джейкоб" -> "Яков"`) or even replaced with other entities (e.g. `"Оймякон" -> "Оймянск"` or `"Верхоянск" -> "Тольятти"`). - Sometimes the meaning of words or phrases changes significantly, e.g. `"полустанок" -> "полумашина"`, or `"были по колено в грязи" -> "лежали на коленях в иле"`. - Sometimes the syntax is changed in a meaning-altering way, e.g. `"Интеллектуальное преимущество Вавилова и его соратников над демагогами из рядов сторонников новой агробиологии разительно очевидно." -> "Интеллектуал Вавилов и его приспешники в новой аграрной биологии явно превзошли демогогов."`. - Grammatical properties that are present in Russian morphology but absent in English, such as gender, are often lost, e.g. `"Я не хотела тебя пугать" -> "Я не хотел пугать вас"`. The field `labse_sim` reflects semantic similarity between the sentences, and it can be used to filter out at least some poor paraphrases. ## Additional Information ### Dataset Curators The dataset was created by [David Dale](https://daviddale.ru/en), a.k.a. [@cointegrated](https://huggingface.co/cointegrated). 
### Licensing Information This corpus, as well as the original Leipzig corpora, is licensed under [CC BY](http://creativecommons.org/licenses/by/4.0/). ### Citation Information [This blog post](https://habr.com/ru/post/564916/) can be cited: ``` @misc{dale_paraphrasing_2021, author = "Dale, David", title = "Перефразирование русских текстов: корпуса, модели, метрики", editor = "habr.com", url = "https://habr.com/ru/post/564916/", month = {June}, year = {2021}, note = {[Online; posted 28-June-2021]}, } ``` ### Contributions Thanks to [@avidale](https://github.com/avidale) for adding this dataset.
11,995
[ [ -0.01519775390625, -0.059112548828125, 0.034332275390625, 0.0248260498046875, -0.041259765625, -0.0147552490234375, -0.019256591796875, 0.00485992431640625, 0.0268707275390625, 0.0440673828125, -0.022491455078125, -0.05291748046875, -0.039215087890625, 0.024...
TheBritishLibrary/BookGenreSnorkelAnnotated
2021-12-17T11:37:52.000Z
[ "region:us" ]
TheBritishLibrary
null
null
0
90
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
enelpol/czywiesz
2022-10-25T09:07:45.000Z
[ "task_categories:question-answering", "task_ids:open-domain-qa", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:unknown", "region:us" ]
enelpol
null
null
2
90
2022-03-02T23:29:22
--- language: - pl license: - unknown multilinguality: - monolingual pretty_name: Czywiesz size_categories: - 10K<n<100K source_datasets: - original task_categories: - question-answering task_ids: - open-domain-qa --- This is an extract of the original [Czywiesz](https://clarin-pl.eu/dspace/handle/11321/39) dataset. It contains the questions and the relevant Wikipedia passages in a format compatible with the DPR training objective. It may be used to train a passage retriever.
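For reference, DPR-style training data is conventionally a list of JSON records pairing each question with positive and negative passages. The field names below follow the convention of the original DPR repository, and the Polish text is invented for illustration — it is not a record from this dataset:

```python
import json

# A hypothetical record in the DPR training format: each question comes with
# positive passages (answer-bearing) and negative passages (distractors).
record = json.loads("""
{
  "question": "Kto napisał \\"Pana Tadeusza\\"?",
  "positive_ctxs": [{"title": "Adam Mickiewicz",
                     "text": "Adam Mickiewicz napisał \\"Pana Tadeusza\\" w 1834 roku."}],
  "negative_ctxs": [{"title": "Juliusz Słowacki",
                     "text": "Juliusz Słowacki był poetą epoki romantyzmu."}]
}
""")
print(record["question"])
print(len(record["positive_ctxs"]), len(record["negative_ctxs"]))
```

A biencoder retriever is then trained to score each question higher against its `positive_ctxs` than against its negatives.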
475
[ [ -0.035919189453125, -0.044647216796875, 0.01806640625, 0.005771636962890625, -0.0263519287109375, -0.026763916015625, -0.01454925537109375, -0.0291748046875, 0.039459228515625, 0.055999755859375, -0.0843505859375, -0.0188751220703125, -0.0231170654296875, 0....
fuliucansheng/minicoco
2021-08-03T07:19:48.000Z
[ "region:us" ]
fuliucansheng
MINICOCO2017
MINICOCO2017
0
90
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.014984130859375, 0.057220458984375, 0.0288238525390625, -0.03509521484375, 0.04656982421875, 0.052520751953125, 0.00506591796875, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060455322265625, 0.03793334...
bigbio/bionlp_st_2013_pc
2022-12-22T15:44:03.000Z
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
bigbio
The Pathway Curation (PC) task is a main event extraction task of the BioNLP shared task (ST) 2013. The PC task concerns the automatic extraction of biomolecular reactions from text. The task setting, representation and semantics are defined with respect to pathway model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance to specific model reactions. Two BioNLP ST 2013 participants successfully completed the PC task. The highest achieved F-score, 52.8%, indicates that event extraction is a promising approach to supporting pathway curation efforts.
@inproceedings{ohta-etal-2013-overview, title = "Overview of the Pathway Curation ({PC}) task of {B}io{NLP} Shared Task 2013", author = "Ohta, Tomoko and Pyysalo, Sampo and Rak, Rafal and Rowley, Andrew and Chun, Hong-Woo and Jung, Sung-Jae and Choi, Sung-Pil and Ananiadou, Sophia and Tsujii, Jun{'}ichi", booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop", month = aug, year = "2013", address = "Sofia, Bulgaria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W13-2009", pages = "67--75", }
0
90
2022-11-13T22:07:14
--- language: - en bigbio_language: - English license: other multilinguality: monolingual bigbio_license_shortname: GENIA_PROJECT_LICENSE pretty_name: BioNLP 2013 PC homepage: https://github.com/openbiocorpora/bionlp-st-2013-pc bigbio_pubmed: True bigbio_public: True bigbio_tasks: - EVENT_EXTRACTION - NAMED_ENTITY_RECOGNITION - COREFERENCE_RESOLUTION --- # Dataset Card for BioNLP 2013 PC ## Dataset Description - **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-pc - **Pubmed:** True - **Public:** True - **Tasks:** EE,NER,COREF The Pathway Curation (PC) task is a main event extraction task of the BioNLP shared task (ST) 2013. The PC task concerns the automatic extraction of biomolecular reactions from text. The task setting, representation and semantics are defined with respect to pathway model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance to specific model reactions. Two BioNLP ST 2013 participants successfully completed the PC task. The highest achieved F-score, 52.8%, indicates that event extraction is a promising approach to supporting pathway curation efforts. ## Citation Information ``` @inproceedings{ohta-etal-2013-overview, title = "Overview of the Pathway Curation ({PC}) task of {B}io{NLP} Shared Task 2013", author = "Ohta, Tomoko and Pyysalo, Sampo and Rak, Rafal and Rowley, Andrew and Chun, Hong-Woo and Jung, Sung-Jae and Choi, Sung-Pil and Ananiadou, Sophia and Tsujii, Jun{'}ichi", booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop", month = aug, year = "2013", address = "Sofia, Bulgaria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W13-2009", pages = "67--75", } ```
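Corpora in the BioNLP ST series are distributed in brat-style standoff format: `T` lines are text-bound annotations (entities and event triggers, with character offsets into the source text) and `E` lines are events linking a trigger to role-labelled arguments. A minimal illustrative parser over a made-up fragment — the labels and offsets below are invented, not taken from the PC corpus:

```python
# Each standoff line is ID<TAB>annotation; "T" lines carry a label plus
# character offsets plus the covered text, "E" lines carry Type:Trigger
# followed by Role:Arg pairs. This is a sketch, not a full brat parser.
sample = """T1\tGene_expression 0 10\texpression
T2\tProtein 14 18\tMEK1
E1\tGene_expression:T1 Theme:T2"""

entities, events = {}, {}
for line in sample.splitlines():
    fields = line.split("\t")
    if fields[0].startswith("T"):            # text-bound annotation
        label, start, end = fields[1].split()
        entities[fields[0]] = (label, int(start), int(end), fields[2])
    elif fields[0].startswith("E"):          # event: trigger + role:argument pairs
        trigger, *args = fields[1].split()
        events[fields[0]] = (trigger, dict(a.split(":") for a in args))

print(entities["T2"])
print(events["E1"])
```

Event arguments can themselves be other events, which is what makes the PC task's nested biomolecular reactions harder than flat NER.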
1,823
[ [ -0.004405975341796875, -0.049102783203125, 0.031829833984375, 0.00991058349609375, -0.0255126953125, -0.01052093505859375, -0.0251312255859375, -0.03460693359375, 0.039764404296875, 0.032745361328125, -0.06280517578125, -0.0548095703125, -0.048187255859375, ...
HiTZ/alpaca_mt
2023-04-07T15:15:55.000Z
[ "task_categories:text-generation", "task_ids:dialogue-modeling", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:tatsu-lab/alpaca", "language:en", "language:pt...
HiTZ
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better. This dataset also includes machine-translated data for 6 Iberian languages: Portuguese, Spanish, Catalan, Basque, Galician and Asturian.
@misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {https://github.com/tatsu-lab/stanford_alpaca}, }
7
90
2023-03-22T15:27:30
--- annotations_creators: - no-annotation language: - en - pt - es - ca - eu - gl - at language_creators: - machine-generated license: cc-by-nc-4.0 multilinguality: - multilingual - translation pretty_name: Alpaca MT size_categories: - 10K<n<100K source_datasets: - tatsu-lab/alpaca tags: - instruction-finetuning task_categories: - text-generation task_ids: - dialogue-modeling dataset_info: - config_name: en features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 32088854 num_examples: 51942 download_size: 22764890 dataset_size: 32088854 - config_name: pt features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 33600380 num_examples: 51942 download_size: 23513483 dataset_size: 33600380 - config_name: es features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 35893136 num_examples: 51942 download_size: 24483751 dataset_size: 35893136 - config_name: ca features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 33938638 num_examples: 51942 download_size: 23096222 dataset_size: 33938638 - config_name: eu features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 29977672 num_examples: 51942 download_size: 20469814 dataset_size: 29977672 - config_name: gl features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 32736710 num_examples: 51942 download_size: 22356802 dataset_size: 32736710 - config_name: at features: - name: instruction dtype: string - 
name: input dtype: string - name: output dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 31487842 num_examples: 51942 download_size: 20688305 dataset_size: 31487842 --- # Dataset Card for Alpaca MT ## Dataset Description - **Homepage:** https://crfm.stanford.edu/2023/03/13/alpaca.html - **Repository:** https://github.com/juletx/alpaca-lora-mt - **Paper:** - **Leaderboard:** - **Point of Contact:** Rohan Taori ### Dataset Summary Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better. This dataset also includes machine-translated data for 6 Iberian languages: Portuguese, Spanish, Catalan, Basque, Galician and Asturian. Translation was done using the NLLB-200 3.3B model. The authors built on the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications: - The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`. - A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`. - Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation. - The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions. - Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct. This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500). 
In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl). ### Supported Tasks and Leaderboards The Alpaca dataset is designed for instruction-tuning pretrained language models. ### Languages The original data in Alpaca is in English (BCP-47 en). We also provide machine-translated data for 6 Iberian languages: Portuguese (BCP-47 pt), Spanish (BCP-47 es), Catalan (BCP-47 ca), Basque (BCP-47 eu), Galician (BCP-47 gl) and Asturian (BCP-47 at). ## Dataset Structure ### Data Instances An example of "train" looks as follows: ```json { "instruction": "Create a classification task by clustering the given list of items.", "input": "Apples, oranges, bananas, strawberries, pineapples", "output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples", "text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples", } ``` ### Data Fields The data fields are as follows: * `instruction`: describes the task the model should perform. Each of the 52K instructions is unique. * `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input. * `output`: the answer to the instruction as generated by `text-davinci-003`. * `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models. 
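The `text` field is simply the template above filled in with the instruction, input, and output. A small sketch of assembling the prompt half of it; the shorter no-input variant follows the upstream Stanford Alpaca repository and is assumed here rather than quoted from this card:

```python
def build_prompt(instruction, input_text=""):
    """Assemble an Alpaca-style prompt. Examples without an `input` use a
    shorter template variant (assumption based on the upstream Alpaca repo)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(build_prompt("Name three primary colors."))
```

During fine-tuning, the `output` is appended after the `### Response:` marker and the loss is typically computed only on that continuation.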
### Data Splits | | train | |---------------|------:| | en | 52002 | | pt | 52002 | | es | 52002 | | ca | 52002 | | eu | 52002 | | gl | 52002 | | at | 52002 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset Excerpt from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset: > We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. 
At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models. ### Discussion of Biases [More Information Needed] ### Other Known Limitations The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). 
### Citation Information ``` @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ``` ### Contributions [More Information Needed]
10,371
[ [ -0.032012939453125, -0.059906005859375, 0.01013946533203125, 0.00670623779296875, -0.0199432373046875, -0.02520751953125, -0.013031005859375, -0.03753662109375, 0.01561737060546875, 0.0489501953125, -0.050323486328125, -0.05859375, -0.05718994140625, -0.0018...
Francesco/construction-safety-gsnvb
2023-03-30T09:11:51.000Z
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc", "rf100", "region:us" ]
Francesco
null
null
2
90
2023-03-30T09:11:17
--- dataset_info: features: - name: image_id dtype: int64 - name: image dtype: image - name: width dtype: int32 - name: height dtype: int32 - name: objects sequence: - name: id dtype: int64 - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: category dtype: class_label: names: '0': construction-safety '1': helmet '2': no-helmet '3': no-vest '4': person '5': vest annotations_creators: - crowdsourced language_creators: - found language: - en license: - cc multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - object-detection task_ids: [] pretty_name: construction-safety-gsnvb tags: - rf100 --- # Dataset Card for construction-safety-gsnvb **The original COCO dataset is stored at `dataset.tar.gz`.** ## Dataset Description - **Homepage:** https://universe.roboflow.com/object-detection/construction-safety-gsnvb - **Point of Contact:** francesco.zuppichini@gmail.com ### Dataset Summary construction-safety-gsnvb ### Supported Tasks and Leaderboards - `object-detection`: The dataset can be used to train a model for Object Detection. ### Languages English ## Dataset Structure ### Data Instances A data point comprises an image and its object annotations. ``` { 'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>, 'width': 964043, 'height': 640, 'objects': { 'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [ [302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0] ], 'category': [4, 4, 0, 0] } } ``` ### Data Fields - `image_id`: the image id - `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. 
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` - `width`: the image width - `height`: the image height - `objects`: a dictionary containing bounding box metadata for the objects present on the image - `id`: the annotation id - `area`: the area of the bounding box - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) - `category`: the object's category. #### Who are the annotators? Annotators are Roboflow users ## Additional Information ### Licensing Information See the original homepage https://universe.roboflow.com/object-detection/construction-safety-gsnvb ### Citation Information ``` @misc{ construction-safety-gsnvb, title = { construction safety gsnvb Dataset }, type = { Open Source Dataset }, author = { Roboflow 100 }, howpublished = { \url{ https://universe.roboflow.com/object-detection/construction-safety-gsnvb } }, url = { https://universe.roboflow.com/object-detection/construction-safety-gsnvb }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-03-29 }, } ``` ### Contributions Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
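The `bbox` values in this card are in the COCO `[x_min, y_min, width, height]` convention; converting them to corner coordinates (as some augmentation libraries expect) is a small helper. A minimal sketch:

```python
def coco_to_corners(bbox):
    """Convert a COCO-format [x_min, y_min, width, height] box
    to [x_min, y_min, x_max, y_max] corner coordinates."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]
```

For example, the first box in the sample above, `[302.0, 109.0, 73.0, 52.0]`, maps to corners `[302.0, 109.0, 375.0, 161.0]`.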
3,561
[ [ -0.04144287109375, -0.04461669921875, 0.00988006591796875, -0.0006308555603027344, -0.040313720703125, -0.01346588134765625, 0.00821685791015625, -0.03857421875, 0.0112457275390625, 0.0276641845703125, -0.042083740234375, -0.08026123046875, -0.04779052734375, ...
BelleGroup/multiturn_chat_0.8M
2023-04-02T09:15:32.000Z
[ "task_categories:text2text-generation", "size_categories:100K<n<1M", "language:zh", "license:gpl-3.0", "region:us" ]
BelleGroup
null
null
77
90
2023-04-02T08:55:44
--- license: gpl-3.0 task_categories: - text2text-generation language: - zh size_categories: - 100K<n<1M --- # Multiturn Chat 0.8M ## 内容 包含约80万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的*用户*与*助手*的多轮对话。 注意:此数据集是由ChatGPT产生的,未经过严格校验,内容可能包含错误。使用过程中请注意这一点。 **instruction**中包含多轮对话的上文内容,以*Human:*和*Assistant:*区分,**output**中包含当前*助手*角色的回答。 ## 样例 ``` { "instruction": "Human:请根据题材「健身房」,写一篇800字的文章。\nAssistant:好的,以下是由我生成的800字「健身房」题材文章:\n在现代社会,人们越来越珍视健康和身材。健身房就是一个人们可以锻炼身体的场所,也是一个社交的场所。在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\n对于喜欢力量训练的人来说,选择健身房的一个原因是因为那里的器械更加完备,可以满足训练的需求。当然,不仅仅是体力上的训练,健身房同样也是锻炼毅力和耐力的地方。因为每一次锻炼的过程都需要坚持和勇气。\n而对于那些想要减肥的人来说,健身房也是非常重要的。有氧运动比如慢跑,跳操,游泳等都是非常有效的减肥方式。而且因为健身房的压力比较大,所以很容易使人们保持锻炼的热情和动力。\n一些人则会选择瑜伽馆作为健身的场所,因为它们通常比较安静,能够让人们放松。瑜伽有纾解压力,提高柔韧性,强化肌肉的好处,而且更适合那些不喜欢高强度训练的人。\n总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。无论您是力量训练爱好者,减肥追求者还是想要放松的人,您都可以在这里找到适合自己的锻炼方式。\nHuman:请从上文摘取三个句子进行打分。\nAssistant:", "input": "", "output": "好的。以下是我摘取的三个句子:\n1. \"在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\"\n2. \"而对于那些想要减肥的人来说,健身房也是非常重要的。\"\n3. \"总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。\"\n网页端评分界面还未准备好,不能在这里为你进行打分,请见谅。" } ``` ### 字段 ``` instruction: 指令 input: 输入(本数据集均为空) output: 输出 ``` ## 局限性和使用限制 我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。 由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。 本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。 *** # Multiturn Chat 0.8M ## Contents Includes approx. 0.8M Chinese multiturn dialogs between *human* and *assistant*. Note: this subset was generated by *ChatGPT* and was not strictly verified. The dialog contents might contain errors. Please take this in mind when using this subset. **instruction** contains history dialog context, distinguishable by *Human:* and *Assistant:*, **output** contains the current reply by *assistant*. 
## Sample ``` { "instruction": "Human:请根据题材「健身房」,写一篇800字的文章。\nAssistant:好的,以下是由我生成的800字「健身房」题材文章:\n在现代社会,人们越来越珍视健康和身材。健身房就是一个人们可以锻炼身体的场所,也是一个社交的场所。在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\n对于喜欢力量训练的人来说,选择健身房的一个原因是因为那里的器械更加完备,可以满足训练的需求。当然,不仅仅是体力上的训练,健身房同样也是锻炼毅力和耐力的地方。因为每一次锻炼的过程都需要坚持和勇气。\n而对于那些想要减肥的人来说,健身房也是非常重要的。有氧运动比如慢跑,跳操,游泳等都是非常有效的减肥方式。而且因为健身房的压力比较大,所以很容易使人们保持锻炼的热情和动力。\n一些人则会选择瑜伽馆作为健身的场所,因为它们通常比较安静,能够让人们放松。瑜伽有纾解压力,提高柔韧性,强化肌肉的好处,而且更适合那些不喜欢高强度训练的人。\n总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。无论您是力量训练爱好者,减肥追求者还是想要放松的人,您都可以在这里找到适合自己的锻炼方式。\nHuman:请从上文摘取三个句子进行打分。\nAssistant:", "input": "", "output": "好的。以下是我摘取的三个句子:\n1. \"在健身房,人们可以通过器械训练,有氧运动和瑜伽等方式来达到健身的目的。\"\n2. \"而对于那些想要减肥的人来说,健身房也是非常重要的。\"\n3. \"总的来说,健身房和瑜伽馆是现代社会不可或缺的健身场所。\"\n网页端评分界面还未准备好,不能在这里为你进行打分,请见谅。" } ``` ### Schema ``` instruction: the instruction input: the input (empty in this dataset) output: the response ``` ## Limitation and Usage Limits We require developers to use the open-sourced code, data, model and any other artifacts generated via this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed. Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed. This dataset does not represent anyone's position, interests, or opinions, and is not related to any claim by any group. The developers of this project do not assume any responsibility for potential harm inflicted by using this dataset and project.
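Since the dialog history is packed into a single `instruction` string, splitting it back into turns takes a small parser. A sketch, assuming every turn is introduced by a literal `Human:` or `Assistant:` marker as the card describes:

```python
import re


def split_turns(instruction: str):
    """Split a packed dialog string into (speaker, utterance) pairs.

    Assumes every turn starts with a literal "Human:" or "Assistant:"
    marker, as described in the dataset card.
    """
    pieces = re.split(r"(Human:|Assistant:)", instruction)
    turns, speaker = [], None
    for piece in pieces:
        if piece in ("Human:", "Assistant:"):
            speaker = piece.rstrip(":")
        elif speaker is not None:
            turns.append((speaker, piece.strip()))
    return turns
```

A trailing `Assistant:` with no text (as in the sample above) yields a final turn with an empty utterance, which is where the model's reply from `output` would slot in.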
3,334
[ [ -0.04180908203125, -0.05169677734375, 0.0219268798828125, 0.0343017578125, -0.03594970703125, -0.01397705078125, -0.00421142578125, -0.031768798828125, 0.03460693359375, 0.037322998046875, -0.05560302734375, -0.0386962890625, -0.04766845703125, 0.00406646728...
clarin-knext/nfcorpus-pl
2023-06-07T08:11:26.000Z
[ "language:pl", "arxiv:2305.19840", "region:us" ]
clarin-knext
null
null
0
90
2023-06-06T22:41:34
--- language: - pl --- Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**. Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf Contact: konrad.wojtasik@pwr.edu.pl
201
[ [ -0.01538848876953125, -0.0628662109375, 0.035491943359375, 0.01641845703125, -0.02215576171875, -0.0103759765625, -0.0115814208984375, -0.034515380859375, -0.0013074874877929688, 0.0286102294921875, -0.03826904296875, -0.04815673828125, -0.0289764404296875, ...
Vipitis/Shadertoys-FunctionGeneration-dev
2023-10-15T23:00:48.000Z
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "language:code", "license:cc-by-nc-sa-3.0", "shader", "region:us" ]
Vipitis
null
null
2
90
2023-08-14T21:41:28
--- language: - en - code license: cc-by-nc-sa-3.0 size_categories: - 10K<n<100K task_categories: - text-generation pretty_name: 'Shadertoys: shadereval task 2: Function Generation (development build)' configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: full_code dtype: string - name: func_range sequence: int64 - name: model_ctx dtype: string - name: author dtype: string - name: source dtype: string - name: func_id dtype: int64 splits: - name: train num_bytes: 111947580 num_examples: 8532 - name: test num_bytes: 13275768 num_examples: 1571 download_size: 9687558 dataset_size: 125223348 tags: - shader --- # Dataset Card for "Shadertoys-FunctionGeneration-dev" This is a filtered and preprocessed version of the [Shadertoys](https://huggingface.co/datasets/Vipitis/Shadertoys) (0.0.2 revision) dataset. Please take all information from the dataset card there. The evaluation task is being developed here: https://github.com/Vipitis/bigcode-evaluation-harness/tree/shadereval_task2 (somewhat public)
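Given the features listed above, the target function can be sliced out of the full shader source. A minimal sketch, assuming `func_range` holds a `[start, end)` pair of character offsets into `full_code` (this interpretation is an assumption; verify it against the upstream Shadertoys card):

```python
def extract_function(full_code: str, func_range) -> str:
    """Slice the target function out of the full shader source.

    Assumes func_range is a [start, end) pair of character offsets
    into full_code; verify this against the upstream dataset card.
    """
    start, end = func_range
    return full_code[start:end]
```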
1,172
[ [ -0.0290985107421875, -0.0180511474609375, -0.0011768341064453125, 0.04180908203125, -0.031829833984375, 0.01076507568359375, -0.003322601318359375, -0.038726806640625, 0.037353515625, 0.057098388671875, -0.07861328125, -0.041534423828125, -0.022796630859375, ...
morpheuslord/cve-llm-training
2023-08-30T06:26:48.000Z
[ "task_categories:text-generation", "language:en", "license:apache-2.0", "code", "region:us" ]
morpheuslord
null
null
6
90
2023-08-15T06:15:00
--- license: apache-2.0 task_categories: - text-generation language: - en tags: - code pretty_name: CVE-LLm-Dataset data_source: Custom data collected from the CVE database data_formats: JSONL --- # CVE-llm_dataset This dataset is intended to train an LLM for entirely CVE-focused inputs and outputs. ## Data extraction: For the data extraction, I first downloaded the CVE database from the NVD lists and then loaded it using `cve_dataset_2.py` and `cve_dataset.py`; the two scripts produce different datasets, one for LLaMA and the other for OpenAI GPT. The CVE JSON files are mapped in this format: ``` cves: | ├─1999 | ├─0xxx | | ├─CVE-1999-0001.json | | ├─.... | | └─CVE-1999-0999.json | └─1xxx | ├─CVE-1999-1000.json | ├─.... | └─CVE-1999-1598.json └─2023 ``` The programs traverse these folders, extract the data from the files, and arrange it into usable formats for the fine-tuning process. ## llama2 Model dataset: The llama2 fine-tuning dataset follows this format: ``` { "instruction": "Explain CVE-1999-0001", "input": "Explain the vulnerability: CVE-1999-0001", "output": "ip_input.c in BSD-derived TCP/IP implementations allows remote attackers to cause a denial of service (crash or hang) via crafted packets.\nAffected Products: n/a\nReferences: [{'tags': ['x_refsource_CONFIRM'], 'url': 'http://www.openbsd.org/errata23.html#tcpfix'}, {'name': '5707', 'tags': ['vdb-entry', 'x_refsource_OSVDB'], 'url': 'http://www.osvdb.org/5707'}]\nCVE State: PUBLISHED" } ``` The instruction is what we instruct the AI to do with the data provided. For example, we can command the AI to take in user input, analyze it, and return an answer based on what the user asks. This is also where we can add a `role` or a `persona` for the AI. The input is the user's main query or the data that must be processed by the AI. This is the crucial piece of information that the AI processes in order to produce an output. 
The output is the answer, in the format we define, that the AI is expected to generate for the question asked.
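Assembling one record in the llama2 format shown above can be sketched as follows; the function and its parameter names are illustrative, not the exact keys of the NVD JSON schema or of the actual extraction scripts:

```python
def make_record(cve_id: str, description: str, products: str,
                references: str, state: str) -> dict:
    """Assemble one llama2-style training record for a CVE entry."""
    return {
        "instruction": f"Explain {cve_id}",
        "input": f"Explain the vulnerability: {cve_id}",
        "output": (
            f"{description}\nAffected Products: {products}"
            f"\nReferences: {references}\nCVE State: {state}"
        ),
    }
```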
2,135
[ [ -0.0246429443359375, -0.059417724609375, 0.0209808349609375, 0.00376129150390625, -0.02435302734375, -0.0161590576171875, 0.00945281982421875, -0.01363372802734375, 0.012664794921875, 0.072509765625, -0.055908203125, -0.051055908203125, -0.036865234375, -0.0...
yongsun-yoon/open-ner-english
2023-08-19T03:41:49.000Z
[ "region:us" ]
yongsun-yoon
null
null
0
90
2023-08-19T03:40:23
--- dataset_info: features: - name: text dtype: string - name: entities list: - name: entity_mentions sequence: string - name: entity_type dtype: string splits: - name: train num_bytes: 51881074.954063065 num_examples: 36711 - name: validation num_bytes: 12970622.045936935 num_examples: 9178 download_size: 40944137 dataset_size: 64851697.0 --- # Open NER (English) This is the processed version of [Universal-NER/Pile-NER-type](https://huggingface.co/datasets/Universal-NER/Pile-NER-type).
553
[ [ -0.0157318115234375, -0.051116943359375, -0.0181732177734375, 0.01375579833984375, -0.01629638671875, -0.0115509033203125, -0.0008025169372558594, -0.0242919921875, 0.0562744140625, 0.06268310546875, -0.047027587890625, -0.04315185546875, -0.039306640625, 0....
asyafiqe/orca_mini_v1_indonesia
2023-08-27T10:54:58.000Z
[ "license:apache-2.0", "region:us" ]
asyafiqe
null
null
1
90
2023-08-27T10:53:05
--- license: apache-2.0 --- This dataset is a modified version of psmathur's [orca_mini_v1](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset) dataset, translated into Bahasa Indonesia by Google Translate.
220
[ [ -0.0176544189453125, -0.047882080078125, -0.0120697021484375, 0.010711669921875, -0.03875732421875, -0.00971221923828125, 0.0033111572265625, -0.03179931640625, 0.06793212890625, 0.062469482421875, -0.08013916015625, -0.0155181884765625, -0.0325927734375, 0....
yujiepan/wikitext-tiny
2023-08-31T18:05:09.000Z
[ "region:us" ]
yujiepan
null
null
0
90
2023-08-31T18:01:07
This dataset is sampled from `wikitext/wikitext-2-v1/train`. Codes to generate this dataset: ```python import datasets dataset = datasets.load_dataset('wikitext', 'wikitext-2-v1') selected = [] i = -1 while len(selected) < 24: i += 1 text = dataset['train'][i]['text'] if 8 < len(text.split(' ')) <= 16 and '=' not in text: selected.append(i) tiny_dataset = dataset['train'].select(selected) ```
427
[ [ -0.0145721435546875, -0.0042266845703125, 0.004520416259765625, 0.01491546630859375, -0.00921630859375, -0.00666046142578125, -0.00006127357482910156, -0.00617218017578125, 0.03485107421875, 0.02850341796875, -0.06805419921875, -0.005146026611328125, -0.01715087...
yzhuang/autotree_pmlb_100000_spambase_sgosdt_l256_dim10_d3_sd0
2023-09-07T19:42:03.000Z
[ "region:us" ]
yzhuang
null
null
0
90
2023-09-07T19:41:40
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 2364400000 num_examples: 100000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 340594567 dataset_size: 2600840000 --- # Dataset Card for "autotree_pmlb_100000_spambase_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
848
[ [ -0.040069580078125, -0.022979736328125, 0.006206512451171875, 0.0382080078125, -0.01154327392578125, 0.01180267333984375, 0.04949951171875, 0.00862884521484375, 0.057708740234375, 0.03192138671875, -0.06353759765625, -0.044708251953125, -0.053741455078125, 0...
yzhuang/autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0
2023-09-08T02:06:34.000Z
[ "region:us" ]
yzhuang
null
null
0
90
2023-09-08T02:06:02
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 2364400000 num_examples: 100000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 832579062 dataset_size: 2600840000 --- # Dataset Card for "autotree_automl_100000_covertype_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
851
[ [ -0.0243988037109375, -0.0164642333984375, 0.021820068359375, 0.0210113525390625, -0.019073486328125, 0.0115203857421875, 0.041412353515625, -0.0013647079467773438, 0.049285888671875, 0.03900146484375, -0.06097412109375, -0.05029296875, -0.055877685546875, 0....
warshakhan/donut_vqa_ISynHMP
2023-09-15T07:12:51.000Z
[ "task_categories:visual-question-answering", "language:en", "license:unknown", "medical", " prescriptions", "region:us" ]
warshakhan
null
null
0
90
2023-09-14T11:10:50
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: valid path: data/valid-* - split: test path: data/test-* dataset_info: features: - name: image dtype: image - name: ground_truth dtype: string splits: - name: train num_bytes: 578804498 num_examples: 2800 - name: valid num_bytes: 85350687 num_examples: 400 - name: test num_bytes: 172300907 num_examples: 800 download_size: 804418576 dataset_size: 836456092 license: unknown task_categories: - visual-question-answering language: - en tags: - medical - ' prescriptions' --- # Dataset Card for "donut_vqa_ISynHMP" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
806
[ [ -0.014801025390625, -0.01262664794921875, 0.0159149169921875, 0.007091522216796875, -0.01027679443359375, 0.0185089111328125, 0.00682830810546875, -0.005664825439453125, 0.0711669921875, 0.036651611328125, -0.0635986328125, -0.04949951171875, -0.041229248046875,...
ostapeno/qa-openai_batched_icl5_clen512_maxD-1_maxC2500_0_cleaned
2023-10-25T19:00:26.000Z
[ "region:us" ]
ostapeno
null
null
0
90
2023-10-25T16:41:26
Config: { "type": "QATransformConfig", "model_setting": "openai_batched", "icl_examples": 0, "icl_dataset": "lukaemon/mmlu", "icl_split": "validation", "icl_use_options": true, "num_iterations": 1, "max_context_length": 512, "max_tokens_instruction": 2048, "max_tokens_response": 1024, "max_contexts_per_subject": 2500 } Cleaning involved removing ",space" at the end of the instruction.
429
[ [ -0.041778564453125, -0.03216552734375, 0.0023956298828125, 0.016387939453125, -0.045257568359375, 0.007266998291015625, -0.01168060302734375, 0.00919342041015625, -0.0120391845703125, 0.045196533203125, -0.061859130859375, -0.033447265625, -0.0276031494140625, ...
finer
2023-01-25T14:30:30.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fi", "license:mit", "arxiv:1908.04212", "region:us" ...
null
The directory data contains a corpus of Finnish technology related news articles with a manually prepared named entity annotation (digitoday.2014.csv). The text material was extracted from the archives of Digitoday, a Finnish online technology news source (www.digitoday.fi). The corpus consists of 953 articles (193,742 word tokens) with six named entity classes (organization, location, person, product, event, and date). The corpus is available for research purposes and can be readily used for development of NER systems for Finnish.
@article{ruokolainen2019finnish, title={A finnish news corpus for named entity recognition}, author={Ruokolainen, Teemu and Kauppinen, Pekka and Silfverberg, Miikka and Lind{\'e}n, Krister}, journal={Language Resources and Evaluation}, pages={1--26}, year={2019}, publisher={Springer} }
1
89
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - fi license: - mit multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: finer pretty_name: Finnish News Corpus for Named Entity Recognition dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': O '1': B-DATE '2': B-EVENT '3': B-LOC '4': B-ORG '5': B-PER '6': B-PRO '7': I-DATE '8': I-EVENT '9': I-LOC '10': I-ORG '11': I-PER '12': I-PRO - name: nested_ner_tags sequence: class_label: names: '0': O '1': B-DATE '2': B-EVENT '3': B-LOC '4': B-ORG '5': B-PER '6': B-PRO '7': I-DATE '8': I-EVENT '9': I-LOC '10': I-ORG '11': I-PER '12': I-PRO config_name: finer splits: - name: train num_bytes: 5159550 num_examples: 13497 - name: validation num_bytes: 387494 num_examples: 986 - name: test num_bytes: 1327354 num_examples: 3512 - name: test_wikipedia num_bytes: 1404397 num_examples: 3360 download_size: 3733127 dataset_size: 8278795 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/mpsilfve/finer-data) - **Repository:** [Github](https://github.com/mpsilfve/finer-data) - **Paper:** [Arxiv](https://arxiv.org/abs/1908.04212) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields Each row consists of the following fields: * `id`: The sentence id * `tokens`: An ordered list of tokens from the full text * `ner_tags`: Named entity recognition tags for each token * `nested_ner_tags`: Nested named entity recognition tags for each token Note that by design, the length of `tokens`, `ner_tags`, and `nested_ner_tags` will always be identical. `ner_tags` and `nested_ner_tags` correspond to the list below: ``` [ "O", "B-DATE", "B-EVENT", "B-LOC", "B-ORG", "B-PER", "B-PRO", "I-DATE", "I-EVENT", "I-LOC", "I-ORG", "I-PER", "I-PRO" ] ``` IOB2 labeling scheme is used. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@stefan-it](https://github.com/stefan-it) for adding this dataset.
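The IOB2 labels used by this corpus decode into entity spans with a short helper. A sketch (not part of the dataset tooling), where a `B-` tag opens a span and an `I-` tag of the same type extends it:

```python
def iob2_to_spans(tags):
    """Decode an IOB2 tag sequence into (type, start, end) spans,
    with end exclusive. Orphan I- tags are ignored."""
    spans = []
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            spans.append([tag[2:], i, i + 1])
        elif (tag.startswith("I-") and spans
              and spans[-1][0] == tag[2:] and spans[-1][2] == i):
            spans[-1][2] = i + 1
    return [tuple(s) for s in spans]
```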
4,621
[ [ -0.03485107421875, -0.03558349609375, 0.01861572265625, 0.0134124755859375, -0.01806640625, 0.002735137939453125, -0.033233642578125, -0.0241241455078125, 0.040618896484375, 0.03814697265625, -0.060272216796875, -0.086181640625, -0.052520751953125, 0.0186157...
isixhosa_ner_corpus
2023-01-25T14:33:10.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:xh", "license:other", "region:us" ]
null
Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags.
@inproceedings{isixhosa_ner_corpus, author = {K. Podile and Roald Eiselen}, title = {NCHLT isiXhosa Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/312}, }
0
89
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - xh license: - other multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: IsixhosaNerCorpus license_details: Creative Commons Attribution 2.5 South Africa License dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': OUT '1': B-PERS '2': I-PERS '3': B-ORG '4': I-ORG '5': B-LOC '6': I-LOC '7': B-MISC '8': I-MISC config_name: isixhosa_ner_corpus splits: - name: train num_bytes: 2414995 num_examples: 6284 download_size: 14513302 dataset_size: 2414995 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [IsiXhosa Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/312) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** 
[Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za) ### Dataset Summary The isiXhosa NER Corpus is a Xhosa dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Xhosa language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Xhosa. ## Dataset Structure ### Data Instances A data point consists of sentences separated by an empty line, with tab-separated tokens and tags. {'id': '0', 'ner_tags': [7, 8, 5, 6, 0], 'tokens': ['Injongo', 'ye-website', 'yaseMzantsi', 'Afrika', 'kukuvelisa'] } ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to a new language, Xhosa. [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data is based on the South African government domain and was crawled from gov.za websites. [More Information Needed] #### Who are the source language producers? 
The data was produced by writers of South African government websites - gov.za [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated during the NCHLT text resource development project. [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: [more information](http://www.nwu.ac.za/ctext) ### Licensing Information The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode) ### Citation Information ``` @inproceedings{isixhosa_ner_corpus, author = { K. Podile and Roald Eiselen}, title = {NCHLT isiXhosa Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/312}, } ``` ### Contributions Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
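The integer values in `ner_tags` can be decoded into the tag names listed in the Data Fields section without any extra dependencies. A minimal pure-Python sketch using the sample instance from this card (the tag order follows the `names` list in the dataset metadata):

```python
# Tag names in the order given by the card's class_label definition.
NER_TAGS = ["OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG",
            "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

# Sample instance reproduced from the Data Instances section above.
sample = {
    "id": "0",
    "ner_tags": [7, 8, 5, 6, 0],
    "tokens": ["Injongo", "ye-website", "yaseMzantsi", "Afrika", "kukuvelisa"],
}

# Map each integer tag to its name and pair it with its token.
decoded = [NER_TAGS[tag] for tag in sample["ner_tags"]]
tagged = list(zip(sample["tokens"], decoded))
```

Here `decoded` comes out as `["B-MISC", "I-MISC", "B-LOC", "I-LOC", "OUT"]`, matching the CoNLL-style scheme described above.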
5,560
journalists_questions
2023-01-25T14:33:26.000Z
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "license:unknown", "question-identification", "region:us" ]
null
The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic tweets manually labeled for question identification over Arabic tweets posted by journalists.
@inproceedings{hasanain2016questions, title={What Questions Do Journalists Ask on Twitter?}, author={Hasanain, Maram and Bagdouri, Mossaab and Elsayed, Tamer and Oard, Douglas W}, booktitle={Tenth International AAAI Conference on Web and Social Media}, year={2016} }
0
89
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - other language: - ar license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: [] pretty_name: JournalistsQuestions tags: - question-identification dataset_info: features: - name: tweet_id dtype: string - name: label dtype: class_label: names: '0': 'no' '1': 'yes' - name: label_confidence dtype: float32 config_name: plain_text splits: - name: train num_bytes: 342296 num_examples: 10077 download_size: 271039 dataset_size: 342296 --- # Dataset Card for journalists_questions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://qufaculty.qu.edu.qa/telsayed/datasets/ - **Repository:** [Needs More Information] - **Paper:** https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/download/13221/12856 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Maram Hasanain] maram.hasanain@qu.edu.qa ### Dataset 
Summary The journalists_questions dataset supports question identification over Arabic tweets posted by journalists. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Arabic ## Dataset Structure ### Data Instances The dataset supports the question identification task. It includes 10K Arabic tweets crawled from journalists' accounts. Tweets were labelled by crowdsourcing. Each tweet is associated with one label: question tweet or not. A question tweet is a tweet that has at least one interrogative question. Each label is associated with a number that represents the confidence in the label, given that each tweet was labelled by 3 annotators and an aggregation method was followed to choose the final label. Below is an example: { 'tweet_id': '493235142128074753', 'label': 'yes', 'label_confidence': 0.6359 } ### Data Fields tweet_id: the Twitter-assigned ID for the tweet object. label: whether the tweet is a question or not label_confidence: confidence score for the label, given the annotations of multiple annotators per tweet ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale The dataset includes tweet IDs only due to Twitter's content re-distribution policy. It was created and shared for research purposes, for parties interested in understanding the questions that Arab journalists ask on Twitter and expect answers to. ### Source Data #### Initial Data Collection and Normalization To construct our dataset of question tweets posted by journalists, we first acquire a list of Twitter accounts of 389 Arab journalists. We use the Twitter API to crawl their available tweets, keeping only those that are identified by Twitter to be Arabic and not retweets (as these would contain content that was not originally authored by journalists). We apply a rule-based question filter to this dataset of 465,599 tweets, extracting 49,119 (10.6%) potential question tweets from 363 (93.3%) Arab journalists.
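The card mentions a rule-based question filter but does not give its rules, so any reconstruction is an assumption. A minimal sketch that flags a tweet as a potential question if it contains a question mark or one of a hypothetical, purely illustrative list of Arabic interrogative words:

```python
# Hypothetical interrogative-word list for illustration only; the authors'
# actual rule set is not published in this card.
INTERROGATIVES = {"ماذا", "لماذا", "كيف", "متى", "أين", "هل", "من"}

def is_potential_question(tweet: str) -> bool:
    # The Arabic question mark "؟" (or Latin "?") is the strongest signal.
    if "؟" in tweet or "?" in tweet:
        return True
    # Otherwise look for an interrogative word among the whitespace tokens.
    return any(token in INTERROGATIVES for token in tweet.split())
```

A filter like this is deliberately high-recall; the paper's pipeline then passes candidates to human annotators, which is why only a subset of filtered tweets ends up labelled as questions.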
#### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@MaramHasanain](https://github.com/MaramHasanain) for adding this dataset.
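The card states that each tweet was labelled by three annotators and that "an aggregation method was followed", without naming the method; the non-integer confidence in the example (0.6359) suggests a weighted scheme. As a rough illustration only, a plain majority-vote aggregation looks like this:

```python
from collections import Counter

def aggregate_votes(votes):
    """Return (label, confidence) from raw annotator votes.

    Plain majority voting, with confidence = fraction of agreeing annotators.
    This is an assumption for illustration; the dataset's actual aggregation
    (likely annotator-trust weighted) is not specified in the card.
    """
    counts = Counter(votes)
    label, n_agree = counts.most_common(1)[0]
    return label, n_agree / len(votes)
```

For example, votes of `["yes", "yes", "no"]` would yield the label `"yes"` with confidence 2/3 under this sketch.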
4,853
multi_nli_mismatch
2023-04-05T10:10:18.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:multi-input-text-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:ori...
null
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization evaluation. The corpus served as the basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.
@InProceedings{N18-1101, author = {Williams, Adina and Nangia, Nikita and Bowman, Samuel}, title = {A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference}, booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)}, year = {2018}, publisher = {Association for Computational Linguistics}, pages = {1112--1122}, location = {New Orleans, Louisiana}, url = {http://aclweb.org/anthology/N18-1101} }
1
89
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - crowdsourced - found language: - en license: - cc-by-3.0 - cc-by-sa-3.0 - mit - other license_details: Open Portion of the American National Corpus multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - text-classification task_ids: - natural-language-inference - multi-input-text-classification paperswithcode_id: multinli pretty_name: Multi-Genre Natural Language Inference dataset_info: features: - name: premise dtype: string - name: hypothesis dtype: string - name: label dtype: string config_name: plain_text splits: - name: train num_bytes: 75601459 num_examples: 392702 - name: validation num_bytes: 2009444 num_examples: 10000 download_size: 226850426 dataset_size: 77610903 --- # Dataset Card for Multi-Genre Natural Language Inference (Mismatched only) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** 
[https://www.nyu.edu/projects/bowman/multinli/](https://www.nyu.edu/projects/bowman/multinli/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 226.85 MB - **Size of the generated dataset:** 77.62 MB - **Total amount of disk used:** 304.46 MB ### Dataset Summary The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization evaluation. The corpus served as the basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 226.85 MB - **Size of the generated dataset:** 77.62 MB - **Total amount of disk used:** 304.46 MB An example of 'train' looks as follows. ``` { "hypothesis": "independence", "label": "contradiction", "premise": "correlation" } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a `string` feature.
### Data Splits | name |train |validation| |----------|-----:|---------:| |plain_text|392702| 10000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information 
Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{N18-1101, author = "Williams, Adina and Nangia, Nikita and Bowman, Samuel", title = "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference", booktitle = "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", year = "2018", publisher = "Association for Computational Linguistics", pages = "1112--1122", location = "New Orleans, Louisiana", url = "http://aclweb.org/anthology/N18-1101" } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
6,985
myanmar_news
2023-01-25T14:41:11.000Z
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:my", "license:gpl-3.0", "region:us" ]
null
The Myanmar news dataset contains article snippets in four categories: Business, Entertainment, Politics, and Sport. These were collected in October 2017 by Aye Hninn Khine
null
1
89
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - my license: - gpl-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - topic-classification pretty_name: MyanmarNews dataset_info: features: - name: text dtype: string - name: category dtype: class_label: names: '0': Sport '1': Politic '2': Business '3': Entertainment splits: - name: train num_bytes: 3797368 num_examples: 8116 download_size: 610592 dataset_size: 3797368 --- # Dataset Card for Myanmar_News ## Dataset Description - **Repository:** https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem ### Dataset Summary The Myanmar news dataset contains article snippets in four categories: Business, Entertainment, Politics, and Sport. These were collected in October 2017 by Aye Hninn Khine. ### Languages Myanmar/Burmese language ## Dataset Structure ### Data Fields - text - text from article - category - a topic: Business, Entertainment, **Politic**, or **Sport** (note spellings) ### Data Splits One training set (8,116 total rows) ### Source Data #### Initial Data Collection and Normalization Data was collected by Aye Hninn Khine and shared on GitHub with a GPL-3.0 license. Multiple text files were consolidated into one labeled CSV file by Nick Doiron. ## Additional Information ### Dataset Curators Contributors to original GitHub repo: - https://github.com/ayehninnkhine ### Licensing Information GPL-3.0 ### Citation Information See https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem ### Contributions Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
1,790
narrativeqa_manual
2022-11-18T21:32:14.000Z
[ "task_categories:text2text-generation", "task_ids:abstractive-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:1712.07040", "region:us" ]
null
The Narrative QA Manual dataset is a reading comprehension dataset, in which the reader must answer questions about stories by reading entire books or movie scripts. The QA tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. THIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! Because of a script in the original repository which downloads the stories from original URLs every time, the links are sometimes broken or invalid. Therefore, you need to manually download the stories for this dataset using the script provided by the authors (https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" in the root directory and downloads the stories there. This folder containing the stories can be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`.
@article{kovcisky2018narrativeqa, title={The narrativeqa reading comprehension challenge}, author={Ko{\v{c}}isk{\'y}, Tom{\'a}{\v{s}} and Schwarz, Jonathan and Blunsom, Phil and Dyer, Chris and Hermann, Karl Moritz and Melis, G{\'a}bor and Grefenstette, Edward}, journal={Transactions of the Association for Computational Linguistics}, volume={6}, pages={317--328}, year={2018}, publisher={MIT Press} }
0
89
2022-03-02T23:29:22
--- annotations_creators: - crowdsourced language_creators: - found language: - en license: - apache-2.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text2text-generation task_ids: - abstractive-qa paperswithcode_id: narrativeqa pretty_name: NarrativeQA dataset_info: features: - name: document struct: - name: id dtype: string - name: kind dtype: string - name: url dtype: string - name: file_size dtype: int32 - name: word_count dtype: int32 - name: start dtype: string - name: end dtype: string - name: summary struct: - name: text dtype: string - name: tokens sequence: string - name: url dtype: string - name: title dtype: string - name: text dtype: string - name: question struct: - name: text dtype: string - name: tokens sequence: string - name: answers list: - name: text dtype: string - name: tokens sequence: string splits: - name: train num_bytes: 9115940054 num_examples: 32747 - name: test num_bytes: 2911702563 num_examples: 10557 - name: validation num_bytes: 968994186 num_examples: 3461 download_size: 22638273 dataset_size: 12996636803 --- # Dataset Card for Narrative QA Manual ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional 
Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [NarrativeQA Homepage](https://deepmind.com/research/open-source/narrativeqa) - **Repository:** [NarrativeQA Repo](https://github.com/deepmind/narrativeqa) - **Paper:** [The NarrativeQA Reading Comprehension Challenge](https://arxiv.org/pdf/1712.07040.pdf) - **Leaderboard:** - **Point of Contact:** [Tomáš Kočiský](mailto:tkocisky@google.com) [Jonathan Schwarz](mailto:schwarzjn@google.com) [Phil Blunsom](mailto:pblunsom@google.com) [Chris Dyer](mailto:cdyer@google.com) [Karl Moritz Hermann](mailto:kmh@google.com) [Gábor Melis](mailto:melisgl@google.com) [Edward Grefenstette](mailto:etg@google.com) ### Dataset Summary NarrativeQA Manual is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents. THIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! Because of a script in the original repository which downloads the stories from original URLs every time, the links are sometimes broken or invalid. Therefore, you need to manually download the stories for this dataset using the script provided by the authors (https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" in the root directory and downloads the stories there. This folder containing the stories can be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`. ### Supported Tasks and Leaderboards The dataset is used to test reading comprehension. There are two tasks proposed in the paper: "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer the question.
### Languages English ## Dataset Structure ### Data Instances A typical data point consists of a question and answer pair along with a summary/story which can be used to answer the question. Additional information such as the URL, word count, and Wikipedia page is also provided. A typical example looks like this: ``` { "document": { "id": "23jncj2n3534563110", "kind": "movie", "url": "https://www.imsdb.com/Movie%20Scripts/Name%20of%20Movie.html", "file_size": 80473, "word_count": 41000, "start": "MOVIE screenplay by", "end": ". THE END", "summary": { "text": "Joe Bloggs begins his journey exploring...", "tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring",...], "url": "http://en.wikipedia.org/wiki/Name_of_Movie", "title": "Name of Movie (film)" }, "text": "MOVIE screenplay by John Doe\nSCENE 1..." }, "question": { "text": "Where does Joe Bloggs live?", "tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"], }, "answers": [ {"text": "At home", "tokens": ["At", "home"]}, {"text": "His house", "tokens": ["His", "house"]} ] } ``` ### Data Fields - `document.id` - Unique ID for the story. - `document.kind` - "movie" or "gutenberg" depending on the source of the story. - `document.url` - The URL where the story was downloaded from. - `document.file_size` - File size (in bytes) of the story. - `document.word_count` - Number of tokens in the story. - `document.start` - First 3 tokens of the story. Used for verifying the story hasn't been modified. - `document.end` - Last 3 tokens of the story. Used for verifying the story hasn't been modified. - `document.summary.text` - Text of the Wikipedia summary of the story. - `document.summary.tokens` - Tokenized version of `document.summary.text`. - `document.summary.url` - Wikipedia URL of the summary. - `document.summary.title` - Wikipedia title of the summary. - `question` - `{"text":"...", "tokens":[...]}` for the question about the story.
- `answers` - List of `{"text":"...", "tokens":[...]}` for valid answers for the question. ### Data Splits The data is split into training, validation, and test sets based on story (i.e. the same story cannot appear in more than one split): | Train | Valid | Test | | ------ | ----- | ----- | | 32747 | 3461 | 10557 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Stories and movie scripts were downloaded from [Project Gutenberg](https://www.gutenberg.org) and a range of movie script repositories (mainly [imsdb](http://www.imsdb.com)). #### Who are the source language producers? The language producers are the authors of the stories and scripts, as well as Amazon Mechanical Turk workers for the questions. ### Annotations #### Annotation process Amazon Mechanical Turk workers were provided with human-written summaries of the stories (to make the annotation tractable and to lead annotators towards asking non-localized questions). Stories were matched with plot summaries from Wikipedia using titles, and the matching was verified with help from human annotators. The annotators were asked to determine if both the story and the summary refer to a movie or a book (as some books are made into movies), or if they are the same part in a series produced in the same year. Annotators on Amazon Mechanical Turk were instructed to write 10 question–answer pairs each, based solely on a given summary. Annotators were instructed to imagine that they are writing questions to test students who have read the full stories but not the summaries. We required questions that are specific enough, given the length and complexity of the narratives, and asked annotators to provide a diverse set of questions about characters, events, why things happened, and so on. Annotators were encouraged to use their own words and were prevented from copying.
We asked for answers that are grammatical, complete sentences, and explicitly allowed short answers (one word, a few-word phrase, or a short sentence), as we think that answering with a full sentence is frequently perceived as artificial when asking about factual information. Annotators were asked to avoid extra, unnecessary information in the question or the answer, and to avoid yes/no questions or questions about the author or the actors. #### Who are the annotators? Amazon Mechanical Turk workers. ### Personal and Sensitive Information None ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is released under an [Apache-2.0 License](https://github.com/deepmind/narrativeqa/blob/master/LICENSE). ### Citation Information ``` @article{narrativeqa, author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and Chris Dyer and Karl Moritz Hermann and G\'abor Melis and Edward Grefenstette}, title = {The {NarrativeQA} Reading Comprehension Challenge}, journal = {Transactions of the Association for Computational Linguistics}, url = {https://TBD}, volume = {TBD}, year = {2018}, pages = {TBD}, } ``` ### Contributions Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset.
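The `document.start` and `document.end` fields described in the Data Fields section (first and last 3 tokens of the story) support a simple integrity check on a manually downloaded story. A sketch, assuming whitespace tokenization matches how those fields were produced:

```python
def story_unmodified(text: str, start: str, end: str) -> bool:
    # Compare the story's first and last three whitespace tokens against the
    # `document.start` / `document.end` fields. Whitespace tokenization is an
    # assumption; the official preprocessing may tokenize differently.
    tokens = text.split()
    return " ".join(tokens[:3]) == start and " ".join(tokens[-3:]) == end
```

Running this over each downloaded story before loading the dataset gives a quick sanity check that the mirrors served the expected files.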
9,716
ro_sent
2023-01-25T14:43:14.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ro", "license:unknown", "arxiv:2009.08712", "region:us" ]
null
This dataset is a Romanian Sentiment Analysis dataset. It is present in a processed form, as used by the authors of `Romanian Transformers` in their examples and based on the original data present in `https://github.com/katakonst/sentiment-analysis-tensorflow`. The original dataset is collected from product and movie reviews in Romanian.
@article{dumitrescu2020birth, title={The birth of Romanian BERT}, author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius and Pyysalo, Sampo}, journal={arXiv preprint arXiv:2009.08712}, year={2020} }
0
89
2022-03-02T23:29:22
--- annotations_creators: - found language_creators: - found language: - ro license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification pretty_name: RoSent dataset_info: features: - name: original_id dtype: string - name: id dtype: string - name: sentence dtype: string - name: label dtype: class_label: names: '0': negative '1': positive splits: - name: train num_bytes: 8367687 num_examples: 17941 - name: test num_bytes: 6837430 num_examples: 11005 download_size: 14700057 dataset_size: 15205117 --- # Dataset Card for RoSent ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/dumitrescustefan/Romanian-Transformers/tree/examples/examples/sentiment_analysis) - **Repository:** [GitHub](https://github.com/dumitrescustefan/Romanian-Transformers/tree/examples/examples/sentiment_analysis) - **Paper:** [arXiv 
preprint](https://arxiv.org/pdf/2009.08712.pdf) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset is a Romanian Sentiment Analysis dataset. It is present in a processed form, as used by the authors of [`Romanian Transformers`](https://github.com/dumitrescustefan/Romanian-Transformers) in their examples and based on the original data present at [this GitHub repository](https://github.com/katakonst/sentiment-analysis-tensorflow). The original data contains product and movie reviews in Romanian. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages This dataset is in the Romanian language. ## Dataset Structure ### Data Instances An instance from the `train` split: ``` {'id': '0', 'label': 1, 'original_id': '0', 'sentence': 'acest document mi-a deschis cu adevarat ochii la ceea ce oamenii din afara statelor unite s-au gandit la atacurile din 11 septembrie. acest film a fost construit in mod expert si prezinta acest dezastru ca fiind mai mult decat un atac asupra pamantului american. urmarile acestui dezastru sunt previzionate din multe tari si perspective diferite. cred ca acest film ar trebui sa fie mai bine distribuit pentru acest punct. de asemenea, el ajuta in procesul de vindecare sa vada in cele din urma altceva decat stirile despre atacurile teroriste. si unele dintre piese sunt de fapt amuzante, dar nu abuziv asa. acest film a fost extrem de recomandat pentru mine, si am trecut pe acelasi sentiment.'} ``` ### Data Fields - `original_id`: a `string` feature containing the original id from the file. - `id`: a `string` feature. - `sentence`: a `string` feature. - `label`: a classification label, with possible values including `negative` (0), `positive` (1). ### Data Splits This dataset has two splits: `train` with 17941 examples, and `test` with 11005 examples. 
## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The source dataset is present at [this GitHub repository](https://github.com/katakonst/sentiment-analysis-tensorflow) and is based on product and movie reviews. The original source is unknown. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Stefan Daniel Dumitrescu, Andrei-Marius Avram, Sampo Pyysalo, [@katakonst](https://github.com/katakonst) ### Licensing Information [More Information Needed] ### Citation Information ``` @article{dumitrescu2020birth, title={The birth of Romanian BERT}, author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius and Pyysalo, Sampo}, journal={arXiv preprint arXiv:2009.08712}, year={2020} } ``` ### Contributions Thanks to [@gchhablani](https://github.com/gchhablani) and [@iliemihai](https://github.com/iliemihai) for adding this dataset.
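The split sizes and label encoding stated in the card can be sanity-checked with a few lines of plain Python. This is an illustrative check using the numbers copied from the card's `dataset_info`, not an official loader:

```python
# Split sizes and label names as stated in the card's dataset_info.
splits = {"train": 17941, "test": 11005}
label_names = {0: "negative", 1: "positive"}

total = sum(splits.values())
print(f"total examples: {total}")                     # 28946
print(f"train share: {splits['train'] / total:.1%}")  # 62.0%
print(f"label 1 -> {label_names[1]}")                 # label 1 -> positive
```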
5,360
[ [ -0.0355224609375, -0.043975830078125, -0.0002639293670654297, 0.0262298583984375, -0.0293121337890625, 0.0000286102294921875, -0.01983642578125, -0.0133209228515625, 0.041656494140625, 0.00983428955078125, -0.0645751953125, -0.0701904296875, -0.054412841796875, ...
tsac
2023-01-25T14:54:29.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:aeb", "license:lgpl-3.0", "region:us" ]
null
Tunisian Sentiment Analysis Corpus. About 17k user comments manually annotated to positive and negative polarities. This corpus is collected from Facebook users comments written on official pages of Tunisian radios and TV channels namely Mosaique FM, JawhraFM, Shemes FM, HiwarElttounsi TV and Nessma TV. The corpus is collected from a period spanning January 2015 until June 2016.
@inproceedings{medhaffar-etal-2017-sentiment, title = "Sentiment Analysis of {T}unisian Dialects: Linguistic Ressources and Experiments", author = "Medhaffar, Salima and Bougares, Fethi and Est{\`e}ve, Yannick and Hadrich-Belguith, Lamia", booktitle = "Proceedings of the Third {A}rabic Natural Language Processing Workshop", month = apr, year = "2017", address = "Valencia, Spain", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W17-1307", doi = "10.18653/v1/W17-1307", pages = "55--61", abstract = "Dialectal Arabic (DA) is significantly different from the Arabic language taught in schools and used in written communication and formal speech (broadcast news, religion, politics, etc.). There are many existing researches in the field of Arabic language Sentiment Analysis (SA); however, they are generally restricted to Modern Standard Arabic (MSA) or some dialects of economic or political interest. In this paper we are interested in the SA of the Tunisian Dialect. We utilize Machine Learning techniques to determine the polarity of comments written in Tunisian Dialect. First, we evaluate the SA systems performances with models trained using freely available MSA and Multi-dialectal data sets. We then collect and annotate a Tunisian Dialect corpus of 17.000 comments from Facebook. This corpus allows us a significant accuracy improvement compared to the best model trained on other Arabic dialects or MSA data. We believe that this first freely available corpus will be valuable to researchers working in the field of Tunisian Sentiment Analysis and similar areas.", }
0
89
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - found language: - aeb license: - lgpl-3.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification paperswithcode_id: tsac pretty_name: Tunisian Sentiment Analysis Corpus dataset_info: features: - name: id dtype: string - name: sentence dtype: string - name: target dtype: class_label: names: '0': '1' '1': '-1' splits: - name: train num_bytes: 1020146 num_examples: 13669 - name: test num_bytes: 268504 num_examples: 3400 download_size: 963015 dataset_size: 1288650 --- # Dataset Card for Tunisian Sentiment Analysis Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/fbougares/TSAC - **Paper:** https://www.aclweb.org/anthology/W17-1307 - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** Salima Mdhaffar 
(firstname.lastname@univ-lemans.fr) ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
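Per the card's `dataset_info`, the `target` ClassLabel maps integer id 0 to the polarity string `'1'` (positive) and id 1 to `'-1'` (negative); the ids themselves are not the polarities. A minimal sketch of decoding them, assuming that mapping:

```python
# ClassLabel id -> polarity string, in the id order given by the card's
# dataset_info ('0': '1', '1': '-1'). Note the ids are not the polarities.
id2polarity = {0: "1", 1: "-1"}

def is_positive(target_id: int) -> bool:
    # Polarity "1" marks a positive comment in this corpus.
    return id2polarity[target_id] == "1"

print(is_positive(0))  # True
print(is_positive(1))  # False
```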
3,436
[ [ -0.043487548828125, -0.0280303955078125, 0.01207733154296875, 0.028289794921875, -0.0238494873046875, 0.01312255859375, -0.03173828125, -0.0203094482421875, 0.0374755859375, 0.0278472900390625, -0.057220458984375, -0.08349609375, -0.060211181640625, 0.003622...
turku_ner_corpus
2023-01-25T14:54:48.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fi", "license:cc-by-nc-sa-4.0", "region:us"...
null
An open, broad-coverage corpus for Finnish named entity recognition presented in Luoma et al. (2020) A Broad-coverage Corpus for Finnish Named Entity Recognition.
@inproceedings{luoma-etal-2020-broad, title = "A Broad-coverage Corpus for {F}innish Named Entity Recognition", author = {Luoma, Jouni and Oinonen, Miika and Pyyk{\"o}nen, Maria and Laippala, Veronika and Pyysalo, Sampo}, booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference", year = "2020", url = "https://www.aclweb.org/anthology/2020.lrec-1.567", pages = "4615--4624", }
0
89
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - fi license: - cc-by-nc-sa-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - token-classification task_ids: - named-entity-recognition pretty_name: Turku NER corpus dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: ner_tags sequence: class_label: names: '0': B-DATE '1': B-EVENT '2': B-LOC '3': B-ORG '4': B-PER '5': B-PRO '6': I-DATE '7': I-EVENT '8': I-LOC '9': I-ORG '10': I-PER '11': I-PRO '12': O splits: - name: train num_bytes: 3257447 num_examples: 12217 - name: validation num_bytes: 364223 num_examples: 1364 - name: test num_bytes: 416644 num_examples: 1555 download_size: 1659911 dataset_size: 4038314 --- # Dataset Card for Turku NER corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://turkunlp.org/fin-ner.html - **Repository:** 
https://github.com/TurkuNLP/turku-ner-corpus/ - **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.567/ - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** {jouni.a.luoma,mhtoin,maria.h.pyykonen,mavela,sampo.pyysalo}@utu.f ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
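The `ner_tags` field stores integer ids; the id-to-label order below is taken from the card's `dataset_info` (IOB2 over six entity types plus `O`). A small decoding sketch:

```python
# IOB2 label list in the id order given by the card's dataset_info.
NER_LABELS = [
    "B-DATE", "B-EVENT", "B-LOC", "B-ORG", "B-PER", "B-PRO",
    "I-DATE", "I-EVENT", "I-LOC", "I-ORG", "I-PER", "I-PRO", "O",
]

def decode_tags(tag_ids):
    """Map integer ner_tags back to their IOB2 string labels."""
    return [NER_LABELS[i] for i in tag_ids]

print(decode_tags([4, 10, 12, 2]))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```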
3,770
[ [ -0.040740966796875, -0.04986572265625, -0.0002727508544921875, 0.012725830078125, -0.0216064453125, -0.0032749176025390625, -0.036712646484375, -0.0214385986328125, 0.048309326171875, 0.049285888671875, -0.042633056640625, -0.0718994140625, -0.0579833984375, ...
ARKseal/YFCC14M_subset_webdataset
2021-11-27T22:47:47.000Z
[ "region:us" ]
ARKseal
null
null
0
89
2022-03-02T23:29:22
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
AlexZapolskii/zapolskii-amazon
2021-12-22T22:13:57.000Z
[ "region:us" ]
AlexZapolskii
null
null
0
89
2022-03-02T23:29:22
Dataset from Kaggle: https://www.kaggle.com/c/amazon-pet-product-reviews-classification
86
[ [ -0.038482666015625, -0.022186279296875, 0.00087738037109375, 0.021820068359375, -0.004749298095703125, -0.0016794204711914062, 0.007381439208984375, -0.0321044921875, 0.0227508544921875, 0.05572509765625, -0.04559326171875, -0.04840087890625, -0.021636962890625,...
Baybars/parla_text_corpus
2022-10-21T15:29:15.000Z
[ "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:various", "multilinguality:monolingual", "size_categories:100k<n<1M", "source_datasets:found", "language:ca", "license:cc-by-4.0", "robust-speech-event", "region:us" ]
Baybars
null
null
0
89
2022-03-02T23:29:22
--- annotations_creators: - no-annotation language_creators: - various language: - ca license: - cc-by-4.0 multilinguality: - monolingual pretty_name: ParlaTextCorpus size_categories: - 100k<n<1M source_datasets: - found task_categories: - sequence-modeling task_ids: - language-modeling tags: - robust-speech-event --- # ParlaTextCorpus Spoken text corpus for Catalan, derived and cleaned from three sources: OpenSubtitles, Tv3Parla and Festcat.
446
[ [ -0.01119232177734375, -0.024658203125, 0.006832122802734375, 0.054656982421875, -0.0288238525390625, 0.0306854248046875, -0.008026123046875, -0.0148773193359375, 0.0343017578125, 0.06939697265625, -0.0160064697265625, -0.0289306640625, -0.022552490234375, 0....
Nexdata/accented_mandarin
2023-08-31T03:09:30.000Z
[ "region:us" ]
Nexdata
null
null
3
89
2022-03-02T23:29:22
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for accented_mandarin ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://nexdata.ai/?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset contains 2,000 hours of Mandarin Chinese speech data. The data is collected from local speakers in 26 provinces like Henan, Shanxi, Sichuan, Hunan, Fujian, etc. The content covers generic category, human-machine interaction, smart home command and control, in-car, numbers, etc. The format is 16kHz, 16bit, uncompressed wav, mono channel. The sentence accuracy is over 97%. 
For more details, please refer to the link: https://nexdata.ai/speechRecognition?source=Huggingface ### Supported Tasks and Leaderboards automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR). ### Languages Accented Mandarin ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
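The stated format (16 kHz, 16-bit, mono, uncompressed WAV) makes the raw storage footprint easy to estimate. A rough back-of-the-envelope sketch, ignoring WAV header overhead:

```python
# Rough storage estimate for 16 kHz / 16-bit / mono uncompressed PCM,
# ignoring WAV header overhead.
sample_rate = 16_000   # samples per second
sample_bytes = 2       # 16-bit samples
bytes_per_hour = sample_rate * sample_bytes * 3600
hours = 2_000          # corpus size stated above
total_gib = hours * bytes_per_hour / 2**30
print(f"{bytes_per_hour:,} bytes per hour")     # 115,200,000 bytes per hour
print(f"~{total_gib:.0f} GiB for {hours:,} h")  # ~215 GiB for 2,000 h
```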
3,326
[ [ -0.020904541015625, -0.035888671875, -0.0220184326171875, 0.036102294921875, -0.0189361572265625, 0.002185821533203125, -0.032928466796875, -0.024871826171875, 0.041473388671875, 0.04559326171875, -0.038970947265625, -0.069580078125, -0.0222625732421875, 0.0...
Nexdata/mandarin_chinese
2023-08-31T03:06:29.000Z
[ "region:us" ]
Nexdata
null
null
5
89
2022-03-02T23:29:22
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for mandarin_chinese ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://nexdata.ai/?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset contains 15,000 hours of Mandarin Chinese speech data. It's collected from local Mandarin speakers in 33 provinces of China, covering multiple scenes and environments. The format is 16kHz, 16bit, uncompressed wav, mono channel. The sentence accuracy is over 97%. For more details, please refer to the link: https://nexdata.ai/speechRecognition?source=Huggingface ### Supported Tasks and Leaderboards automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR). 
### Languages Mandarin ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
3,208
[ [ -0.01329803466796875, -0.0328369140625, -0.024139404296875, 0.03839111328125, -0.018157958984375, -0.0036487579345703125, -0.028900146484375, -0.026824951171875, 0.0310211181640625, 0.035491943359375, -0.04290771484375, -0.06488037109375, -0.025146484375, 0....
Nexdata/mixed_speech_chinese_english
2023-08-31T03:12:08.000Z
[ "region:us" ]
Nexdata
null
null
4
89
2022-03-02T23:29:22
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for mixed_speech_chinese_english ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://nexdata.ai/?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset contains 2,000 hours of mixed speech with Chinese and English. The data is collected from speakers in 26 provinces like Henan, Shanxi, Sichuan, Hunan, Fujian, etc. The content covers generic scenes and multiple human-machine interaction scenes, such as music, entertainment, travel, and daily life. The data covers more than 30,000 English words. The sentence accuracy is over 97%. 
For more details, please refer to the link: https://nexdata.ai/speechRecognition?source=Huggingface ### Supported Tasks and Leaderboards automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR). ### Languages Chinese, English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
3,341
[ [ -0.022857666015625, -0.044036865234375, -0.01580810546875, 0.038818359375, -0.01788330078125, 0.00021767616271972656, -0.030487060546875, -0.035369873046875, 0.0372314453125, 0.038787841796875, -0.048187255859375, -0.060455322265625, -0.03399658203125, 0.001...
Nexdata/multi_language
2023-08-31T03:07:33.000Z
[ "region:us" ]
Nexdata
null
null
3
89
2022-03-02T23:29:22
--- YAML tags: - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging --- # Dataset Card for multi_language ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://nexdata.ai/?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset contains 25,000 hours of multi-language reading speech data. It's recorded by native speakers, covering English, French, German, Russian, Spanish, Portuguese, Italian, Japanese, Korean, Hindi, Vietnamese, Tagalog, Thai, etc. The recording is rich in content, covering multiple categories such as economy, entertainment, news, oral language, numbers, and letters. The format is 16kHz, 16bit, uncompressed wav, mono channel. The sentence accuracy is over 95%. 
For more details, please refer to the link: https://nexdata.ai/speechRecognition?source=Huggingface ### Supported Tasks and Leaderboards automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR). ### Languages English, French, German, Russian, Spanish, Portuguese, Italian, Japanese, Korean, Hindi, Vietnamese, Tagalog, Thai, etc. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
3,513
[ [ -0.0335693359375, -0.046356201171875, -0.00858306884765625, 0.035369873046875, -0.01361083984375, 0.0131683349609375, -0.029693603515625, -0.037200927734375, 0.042236328125, 0.04302978515625, -0.049530029296875, -0.07196044921875, -0.04656982421875, 0.018783...
Davlan/conll2003_de_noMISC
2021-10-05T09:06:35.000Z
[ "region:us" ]
Davlan
null
null
0
89
2022-03-02T23:29:22
Entry not found
15
[ [ -0.02142333984375, -0.01495361328125, 0.05718994140625, 0.0288238525390625, -0.035064697265625, 0.046539306640625, 0.052520751953125, 0.005062103271484375, 0.0513916015625, 0.016998291015625, -0.052093505859375, -0.014984130859375, -0.060394287109375, 0.0379...
GEM/cs_restaurants
2022-10-24T15:30:14.000Z
[ "task_categories:conversational", "annotations_creators:none", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:cs", "license:cc-by-sa-4.0", "dialog-response-generation", "region:us" ]
GEM
The task is generating responses in the context of a (hypothetical) dialogue system that provides information about restaurants. The input is a basic intent/dialogue act type and a list of slots (attributes) and their values. The output is a natural language sentence.
@inproceedings{cs_restaurants, address = {Tokyo, Japan}, title = {Neural {Generation} for {Czech}: {Data} and {Baselines}}, shorttitle = {Neural {Generation} for {Czech}}, url = {https://www.aclweb.org/anthology/W19-8670/}, urldate = {2019-10-18}, booktitle = {Proceedings of the 12th {International} {Conference} on {Natural} {Language} {Generation} ({INLG} 2019)}, author = {Dušek, Ondřej and Jurčíček, Filip}, month = oct, year = {2019}, pages = {563--574}, }
1
89
2022-03-02T23:29:22
--- annotations_creators: - none language_creators: - unknown language: - cs license: - cc-by-sa-4.0 multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - conversational task_ids: [] pretty_name: cs_restaurants tags: - dialog-response-generation --- # Dataset Card for GEM/cs_restaurants ## Dataset Description - **Homepage:** n/a - **Repository:** https://github.com/UFAL-DSG/cs_restaurant_dataset - **Paper:** https://aclanthology.org/W19-8670/ - **Leaderboard:** N/A - **Point of Contact:** Ondrej Dusek ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/cs_restaurants). ### Dataset Summary The Czech Restaurants dataset is a task oriented dialog dataset in which a model needs to verbalize a response that a service agent could provide which is specified through a series of dialog acts. The dataset originated as a translation of an English dataset to test the generation capabilities of an NLG system on a highly morphologically rich language like Czech. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/cs_restaurants') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/cs_restaurants). #### website n/a #### paper [Github](https://aclanthology.org/W19-8670/) #### authors Ondrej Dusek and Filip Jurcicek ## Dataset Overview ### Where to find the Data and its Documentation #### Download <!-- info: What is the link to where the original dataset is hosted? --> <!-- scope: telescope --> [Github](https://github.com/UFAL-DSG/cs_restaurant_dataset) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [Github](https://aclanthology.org/W19-8670/) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. 
--> <!-- scope: microscope --> ``` @inproceedings{cs_restaurants, address = {Tokyo, Japan}, title = {Neural {Generation} for {Czech}: {Data} and {Baselines}}, shorttitle = {Neural {Generation} for {Czech}}, url = {https://www.aclweb.org/anthology/W19-8670/}, urldate = {2019-10-18}, booktitle = {Proceedings of the 12th {International} {Conference} on {Natural} {Language} {Generation} ({INLG} 2019)}, author = {Dušek, Ondřej and Jurčíček, Filip}, month = oct, year = {2019}, pages = {563--574}, } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Ondrej Dusek #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> odusek@ufal.mff.cuni.cz #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> no ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Dialects <!-- info: What dialects are covered? Are there multiple dialects per language? --> <!-- scope: periscope --> No breakdown of dialects is provided. #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? --> <!-- scope: telescope --> `Czech` #### Whose Language? <!-- info: Whose language is in the dataset? --> <!-- scope: periscope --> Six professional translators produced the outputs #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> The dataset was created to test neural NLG systems in Czech and their ability to deal with rich morphology. 
#### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Dialog Response Generation #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> Producing a text expressing the given intent/dialogue act and all and only the attributes specified in the input meaning representation. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> Charles University, Prague #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Ondrej Dusek and Filip Jurcicek #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> This research was supported by the Charles University project PRIMUS/19/SCI/10 and by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221. This work used language resources distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071). #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. --> <!-- scope: microscope --> Simon Mille wrote the initial data card and Yacine Jernite the data loader. Sebastian Gehrmann migrated the data card and loader to the v2 format. ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> The data is stored in a JSON or CSV format, with identical contents. 
The data has 4 fields: * `da`: the input meaning representation/dialogue act (MR) * `delex_da`: the input MR, delexicalized -- all slot values are replaced with placeholders, such as `X-name` * `text`: the corresponding target natural language text (reference) * `delex_text`: the target text, delexicalized (delexicalization is applied regardless of inflection) In addition, the data contains a JSON file with all possible inflected forms for all slot values in the dataset (`surface_forms.json`). Each slot -> value entry contains a list of inflected forms for the given value, with the base form (lemma), the inflected form, and a [morphological tag](https://ufal.mff.cuni.cz/pdt/Morphology_and_Tagging/Doc/hmptagqr.html). The same MR is often repeated multiple times with different synonymous reference texts. #### Reason for Structure <!-- info: How was the dataset structure determined? --> <!-- scope: microscope --> The data originated as a translation and localization of [Wen et al.'s SF restaurant](https://www.aclweb.org/anthology/D15-1199/) NLG dataset. #### How were labels chosen? <!-- info: How were the labels chosen? --> <!-- scope: microscope --> The input MRs were collected from [Wen et al.'s SF restaurant](https://www.aclweb.org/anthology/D15-1199/) NLG data and localized by randomly replacing slot values (using a list of Prague restaurant names, neighborhoods etc.). The generated slot values were then automatically replaced in reference texts in the data. #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> ``` { "input": "inform_only_match(food=Turkish,name='Švejk Restaurant',near='Charles Bridge',price_range=cheap)", "target": "Našla jsem pouze jednu levnou restauraci poblíž Karlova mostu , kde podávají tureckou kuchyni , Švejk Restaurant ." } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. 
--> <!-- scope: periscope --> | Property | Value | |--------------------------------|-------| | Total instances | 5,192 | | Unique MRs | 2,417 | | Unique delexicalized instances | 2,752 | | Unique delexicalized MRs | 248 | The data is split in a roughly 3:1:1 proportion into training, development and test sections, making sure no delexicalized MR appears in two different parts. On the other hand, most DA types/intents are represented in all data parts. #### Splitting Criteria <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. --> <!-- scope: microscope --> The creators ensured that after delexicalization of the meaning representation there was no overlap between training and test. The data is split at a 3:1:1 rate between training, validation, and test. ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? --> <!-- scope: microscope --> This is one of a few non-English data-to-text datasets, in a well-known domain, but covering a morphologically rich language that is harder to generate since named entities need to be inflected. This makes it harder to apply common techniques such as delexicalization or copy mechanisms. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> yes #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> yes #### Difference from other GEM datasets <!-- info: What else sets this dataset apart from other similar datasets in GEM? 
--> <!-- scope: microscope --> The dialog acts in this dataset are much more varied than the e2e dataset, which is the closest in style. #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> surface realization ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> yes #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> yes #### Split Information <!-- info: Describe how the new splits were created --> <!-- scope: periscope --> 5 challenge sets for the Czech Restaurants dataset were added to the GEM evaluation suite. 1. Data shift: We created subsets of the training and development sets of 500 randomly selected inputs each. 2. Scrambling: We applied input scrambling on a subset of 500 randomly selected test instances; the order of the input dialogue acts was randomly reassigned. 3. We identified different subsets of the test set that we could compare to each other so that we would have a better understanding of the results. There are currently two selections that we have made: The first comparison is based on input size: the number of predicates differs between different inputs, ranging from 1 to 5. The table below provides an indication of the distribution of inputs with a particular length. It is clear from the table that this distribution is not balanced, and comparisons between items should be done with caution. Particularly for input sizes 4 and 5, there may not be enough data to draw reliable conclusions. | Input length | Number of inputs | |--------------|------------------| | 1 | 183 | | 2 | 267 | | 3 | 297 | | 4 | 86 | | 5 | 9 | The second comparison is based on the type of act. 
Again we caution against comparing the different groups that have relatively few items. It is probably OK to compare `inform` and `?request`, but the other acts are all low-frequent. | Act | Frequency | |-------------------|-----------| | ?request | 149 | | inform | 609 | | ?confirm | 22 | | inform_only_match | 16 | | inform_no_match | 34 | | ?select | 12 | #### Split Motivation <!-- info: What aspects of the model's generation capacities were the splits created to test? --> <!-- scope: periscope --> Generalization and robustness. ### Getting Started with the Task #### Technical Terms <!-- info: Technical terms used in this card and the dataset and their definitions --> <!-- scope: microscope --> - utterance: something a system or user may say in a turn - meaning representation: a representation of meaning that the system should be in accordance with. The specific type of MR in this dataset are dialog acts which describe what a dialog system should do, e.g., inform a user about a value. ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Surface realization #### Metrics <!-- info: What metrics are typically used for this task? --> <!-- scope: periscope --> `BLEU`, `ROUGE`, `METEOR` #### Proposed Evaluation <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. --> <!-- scope: microscope --> This dataset uses the suite of word-overlap-based automatic metrics from the E2E NLG Challenge (BLEU, NIST, ROUGE-L, METEOR, and CIDEr). In addition, the slot error rate is measured. #### Previous results available? <!-- info: Are previous results available? 
--> <!-- scope: telescope --> no ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> The dataset was created to test neural NLG systems in Czech and their ability to deal with rich morphology. #### Communicative Goal <!-- info: What was the communicative goal? --> <!-- scope: periscope --> Producing a text expressing the given intent/dialogue act and all and only the attributes specified in the input MR. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> no ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Created for the dataset` #### Creation Process <!-- info: If created for the dataset, describe the creation process. --> <!-- scope: microscope --> Six professional translators translated the underlying dataset with the following instructions: - Each utterance should be translated by itself - fluent spoken-style Czech should be produced - Facts should be preserved - If possible, synonyms should be varied to create diverse utterances - Entity names should be inflected as necessary - the reader of the generated text should be addressed using formal form and self-references should use the female form. The translators did not have access to the meaning representation. #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> validated by data curator #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> not filtered ### Structured Annotations #### Additional Annotations? <!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> none #### Annotation Service? <!-- info: Was an annotation service used? 
--> <!-- scope: telescope --> no ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no #### Justification for Using the Data <!-- info: If not, what is the justification for reusing the data? --> <!-- scope: microscope --> It was not explicitly stated, but we can safely assume that the translators agreed to this use of their data. ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> no PII #### Justification for no PII <!-- info: Provide a justification for selecting `no PII` above. --> <!-- scope: periscope --> This dataset does not include any information about individuals. ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> yes #### Details on how Dataset Addresses the Needs <!-- info: Describe how this dataset addresses the needs of underserved communities. 
--> <!-- scope: microscope --> The dataset may help improve NLG methods for morphologically rich languages beyond Czech. ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> yes #### Links and Summaries of Analysis Work <!-- info: Provide links to and summaries of works analyzing these biases. --> <!-- scope: microscope --> To ensure consistency of translation, the data always uses formal/polite address for the user, and uses the female form for first-person self-references (as if the dialogue agent producing the sentences was female). This prevents data sparsity and ensures consistent results for systems trained on the dataset, but does not represent all potential situations arising in Czech. ## Considerations for Using the Data ### PII Risks and Liability ### Licenses #### Copyright Restrictions on the Dataset <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? --> <!-- scope: periscope --> `open license - commercial use allowed` #### Copyright Restrictions on the Language Data <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? --> <!-- scope: periscope --> `open license - commercial use allowed` ### Known Technical Limitations #### Technical Limitations <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. 
--> <!-- scope: microscope --> The test set may lead users to over-estimate the performance of their NLG systems with respect to their generalisability, because there are no unseen restaurants or addresses in the test set. This is something we will look into for future editions of the GEM shared task.
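To make the meaning-representation format described above concrete, here is a minimal sketch of parsing the dialogue-act strings shown in the example instance. The `act_type(slot=value,...)` shape and the slot names are inferred from that example, not from an official specification, so treat `parse_da` as an illustrative helper rather than part of the dataset's tooling.

```python
import re

def parse_da(mr: str):
    """Split an MR such as "inform(name='X',food=Czech)" into its
    act type and a slot -> value dict (sketch, format assumed from
    the example instance in this card)."""
    act_type, _, body = mr.partition("(")
    body = body.rstrip(")")
    slots = {}
    # Slot values are either bare tokens or quoted strings that may
    # contain commas, so try the quoted alternative first.
    for match in re.finditer(r"(\w+)=('[^']*'|[^,]+)", body):
        key, value = match.group(1), match.group(2).strip("'")
        slots[key] = value
    return act_type, slots

act, slots = parse_da(
    "inform_only_match(food=Turkish,name='Švejk Restaurant',"
    "near='Charles Bridge',price_range=cheap)"
)
print(act)            # inform_only_match
print(slots["near"])  # Charles Bridge
```

Delexicalized MRs from the `delex_da` field (with placeholders such as `X-name`) can be parsed the same way, since only the slot values differ.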
20,221
GEM/squad_v2
2022-10-24T15:30:29.000Z
[ "task_categories:other", "annotations_creators:crowd-sourced", "language_creators:unknown", "multilinguality:unknown", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "question-generation", "arxiv:1806.03822", "region:us" ]
GEM
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
@article{2016arXiv160605250R, author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy}, title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}", journal = {arXiv e-prints}, year = 2016, eid = {arXiv:1606.05250}, pages = {arXiv:1606.05250}, archivePrefix = {arXiv}, eprint = {1606.05250}, }
0
89
2022-03-02T23:29:22
--- annotations_creators: - crowd-sourced language_creators: - unknown language: - en license: - cc-by-sa-4.0 multilinguality: - unknown size_categories: - unknown source_datasets: - original task_categories: - other task_ids: [] pretty_name: squad_v2 tags: - question-generation --- # Dataset Card for GEM/squad_v2 ## Dataset Description - **Homepage:** https://rajpurkar.github.io/SQuAD-explorer/ - **Repository:** https://rajpurkar.github.io/SQuAD-explorer/ - **Paper:** https://arxiv.org/abs/1806.03822v1 - **Leaderboard:** https://rajpurkar.github.io/SQuAD-explorer/ - **Point of Contact:** Robin Jia ### Link to Main Data Card You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/squad_v2). ### Dataset Summary SQuAD2.0 is a dataset that tests the ability of a system to not only answer reading comprehension questions, but also abstain when presented with a question that cannot be answered based on the provided paragraph. F1 score is used to evaluate models on the leaderboard. In GEM, we are using this dataset for the question-generation task in which a model should generate squad-like questions from an input text. You can load the dataset via: ``` import datasets data = datasets.load_dataset('GEM/squad_v2') ``` The data loader can be found [here](https://huggingface.co/datasets/GEM/squad_v2). #### website [Website](https://rajpurkar.github.io/SQuAD-explorer/) #### paper [Arxiv](https://arxiv.org/abs/1806.03822v1) #### authors Pranav Rajpurkar, Robin Jia and Percy Liang ## Dataset Overview ### Where to find the Data and its Documentation #### Webpage <!-- info: What is the webpage for the dataset (if it exists)? --> <!-- scope: telescope --> [Website](https://rajpurkar.github.io/SQuAD-explorer/) #### Download <!-- info: What is the link to where the original dataset is hosted? 
--> <!-- scope: telescope --> [Website](https://rajpurkar.github.io/SQuAD-explorer/) #### Paper <!-- info: What is the link to the paper describing the dataset (open access preferred)? --> <!-- scope: telescope --> [Arxiv](https://arxiv.org/abs/1806.03822v1) #### BibTex <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. --> <!-- scope: microscope --> ``` @inproceedings{Rajpurkar2018KnowWY, title={Know What You Don’t Know: Unanswerable Questions for SQuAD}, author={Pranav Rajpurkar and Robin Jia and Percy Liang}, booktitle={ACL}, year={2018} } ``` #### Contact Name <!-- quick --> <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> Robin Jia #### Contact Email <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. --> <!-- scope: periscope --> robinjia@stanford.edu #### Has a Leaderboard? <!-- info: Does the dataset have an active leaderboard? --> <!-- scope: telescope --> yes #### Leaderboard Link <!-- info: Provide a link to the leaderboard. --> <!-- scope: periscope --> [Website](https://rajpurkar.github.io/SQuAD-explorer/) #### Leaderboard Details <!-- info: Briefly describe how the leaderboard evaluates models. --> <!-- scope: microscope --> SQuAD2.0 tests the ability of a system to not only answer reading comprehension questions, but also abstain when presented with a question that cannot be answered based on the provided paragraph. F1 score is used to evaluate models on the leaderboard. ### Languages and Intended Use #### Multilingual? <!-- quick --> <!-- info: Is the dataset multilingual? --> <!-- scope: telescope --> no #### Covered Languages <!-- quick --> <!-- info: What languages/dialects are covered in the dataset? 
--> <!-- scope: telescope --> `English` #### License <!-- quick --> <!-- info: What is the license of the dataset? --> <!-- scope: telescope --> cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International #### Intended Use <!-- info: What is the intended use of the dataset? --> <!-- scope: microscope --> The idea behind SQuAD2.0 dataset is to make the models understand when a question cannot be answered given a context. This will help in building models such that they know what they don't know, and therefore make the models understand language at a deeper level. The tasks that can be supported by the dataset are machine reading comprehension, extractive QA, and question generation. #### Primary Task <!-- info: What primary task does the dataset support? --> <!-- scope: telescope --> Question Generation #### Communicative Goal <!-- quick --> <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. --> <!-- scope: periscope --> Given an input passage and an answer span, the goal is to generate a question that asks for the answer. ### Credit #### Curation Organization Type(s) <!-- info: In what kind of organization did the dataset curation happen? --> <!-- scope: telescope --> `academic` #### Curation Organization(s) <!-- info: Name the organization(s). --> <!-- scope: periscope --> Stanford University #### Dataset Creators <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). --> <!-- scope: microscope --> Pranav Rajpurkar, Robin Jia and Percy Liang #### Funding <!-- info: Who funded the data creation? --> <!-- scope: microscope --> Facebook and NSF Graduate Research Fellowship under Grant No. DGE-114747 #### Who added the Dataset to GEM? <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. 
--> <!-- scope: microscope --> [Abinaya Mahendiran](https://github.com/AbinayaM02), Manager Data Science, NEXT Labs. ### Dataset Structure #### Data Fields <!-- info: List and describe the fields present in the dataset. --> <!-- scope: telescope --> The data fields are the same among all splits. #### squad_v2 - `id`: a `string` feature. - `gem_id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: an `int32` feature. #### Example Instance <!-- info: Provide a JSON formatted example of a typical instance in the dataset. --> <!-- scope: periscope --> Here is an example of a validation data point. This example was too long and was cropped: ``` { "gem_id": "gem-squad_v2-validation-1", "id": "56ddde6b9a695914005b9629", "answers": { "answer_start": [94, 87, 94, 94], "text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"] }, "context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...", "question": "When were the Normans in Normandy?", "title": "Normans" } ``` #### Data Splits <!-- info: Describe and name the splits in the dataset if there are more than one. --> <!-- scope: periscope --> The original SQuAD2.0 dataset has only training and dev (validation) splits. The train split is further divided to create a test split as part of the GEM datasets. | name | train | validation | test | | -------------- | --------: | -------------: | -------: | | squad_v2 | 90403 | 11873 | 39916 | ## Dataset in GEM ### Rationale for Inclusion in GEM #### Why is the Dataset in GEM? <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? 
--> <!-- scope: microscope --> SQuAD2.0 will encourage the development of new reading comprehension models that know what they don’t know, and therefore understand language at a deeper level. It can also help in building better models for answer-aware question generation. #### Similar Datasets <!-- info: Do other datasets for the high level task exist? --> <!-- scope: telescope --> no #### Unique Language Coverage <!-- info: Does this dataset cover other languages than other datasets for the same task? --> <!-- scope: periscope --> yes #### Ability that the Dataset measures <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: periscope --> Reasoning capability ### GEM-Specific Curation #### Modified for GEM? <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? --> <!-- scope: telescope --> yes #### GEM Modifications <!-- info: What changes have been made to the original dataset? --> <!-- scope: periscope --> `other` #### Additional Splits? <!-- info: Does GEM provide additional splits to the dataset? --> <!-- scope: telescope --> yes #### Split Information <!-- info: Describe how the new splits were created --> <!-- scope: periscope --> The train (80%) and validation (10%) splits of SQuAD2.0 are publicly available, whereas the test (10%) split is not. As part of GEM, the original train split (80% of the data) is further divided into a new train split (90%) and a test split (the remaining 10%), so that all three splits are available to users. ### Getting Started with the Task ## Previous Results ### Previous Results #### Measured Model Abilities <!-- info: What aspect of model ability can be measured with this dataset? --> <!-- scope: telescope --> Extractive QA, Question Generation #### Metrics <!-- info: What metrics are typically used for this task? 
--> <!-- scope: periscope --> `Other: Other Metrics`, `METEOR`, `ROUGE`, `BLEU` #### Other Metrics <!-- info: Definitions of other metrics --> <!-- scope: periscope --> - Extractive QA uses Exact Match and F1 Score - Question generation uses METEOR, ROUGE-L, BLEU-4 #### Previous results available? <!-- info: Are previous results available? --> <!-- scope: telescope --> yes #### Other Evaluation Approaches <!-- info: What evaluation approaches have others used? --> <!-- scope: periscope --> Question generation uses METEOR, ROUGE-L, BLEU-4 #### Relevant Previous Results <!-- info: What are the most relevant previous results for this task/dataset? --> <!-- scope: microscope --> ``` @article{Dong2019UnifiedLM, title={Unified Language Model Pre-training for Natural Language Understanding and Generation}, author={Li Dong and Nan Yang and Wenhui Wang and Furu Wei and Xiaodong Liu and Yu Wang and Jianfeng Gao and M. Zhou and Hsiao-Wuen Hon}, journal={ArXiv}, year={2019}, volume={abs/1905.03197} } ``` ## Dataset Curation ### Original Curation #### Original Curation Rationale <!-- info: Original curation rationale --> <!-- scope: telescope --> The dataset is curated in three stages: - Curating passages, - Crowdsourcing question-answers on those passages, - Obtaining additional answers As part of SQuAD1.1, 10,000 high-quality articles from English Wikipedia are extracted using Project Nayuki’s Wikipedia’s internal PageRanks, from which 536 articles are sampled uniformly at random. From each of these articles, individual paragraphs are extracted, stripping away images, figures, tables, and discarding paragraphs shorter than 500 characters. SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. #### Communicative Goal <!-- info: What was the communicative goal? 
--> <!-- scope: periscope --> To build systems that not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. #### Sourced from Different Sources <!-- info: Is the dataset aggregated from different data sources? --> <!-- scope: telescope --> yes #### Source Details <!-- info: List the sources (one per line) --> <!-- scope: periscope --> Wikipedia ### Language Data #### How was Language Data Obtained? <!-- info: How was the language data obtained? --> <!-- scope: telescope --> `Found` #### Where was it found? <!-- info: If found, where from? --> <!-- scope: telescope --> `Single website` #### Topics Covered <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? --> <!-- scope: periscope --> The dataset contains 536 articles covering a wide range of topics, from musical celebrities to abstract concepts. #### Data Validation <!-- info: Was the text validated by a different worker or a data curator? --> <!-- scope: telescope --> validated by crowdworker #### Data Preprocessing <!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) --> <!-- scope: microscope --> From the sampled articles from Wikipedia, individual paragraphs are extracted, stripping away images, figures, tables, and discarding paragraphs shorter than 500 characters and partitioned into training(80%), development set(10%) and test set(10%). #### Was Data Filtered? <!-- info: Were text instances selected or filtered? --> <!-- scope: telescope --> algorithmically #### Filter Criteria <!-- info: What were the selection criteria? --> <!-- scope: microscope --> To retrieve high-quality articles, Project Nayuki’s Wikipedia’s internal PageRanks was used to obtain the top 10000 articles of English Wikipedia, from which 536 articles are sampled uniformly at random. ### Structured Annotations #### Additional Annotations? 
<!-- quick --> <!-- info: Does the dataset have additional annotations for each instance? --> <!-- scope: telescope --> crowd-sourced #### Number of Raters <!-- info: What is the number of raters --> <!-- scope: telescope --> unknown #### Rater Qualifications <!-- info: Describe the qualifications required of an annotator. --> <!-- scope: periscope --> Crowdworkers from the United States or Canada with a 97% HIT acceptance rate and a minimum of 1000 HITs were employed to create questions. #### Raters per Training Example <!-- info: How many annotators saw each training example? --> <!-- scope: periscope --> 0 #### Raters per Test Example <!-- info: How many annotators saw each test example? --> <!-- scope: periscope --> 0 #### Annotation Service? <!-- info: Was an annotation service used? --> <!-- scope: telescope --> yes #### Which Annotation Service <!-- info: Which annotation services were used? --> <!-- scope: periscope --> `other`, `Amazon Mechanical Turk` #### Annotation Values <!-- info: Purpose and values for each annotation --> <!-- scope: microscope --> For SQuAD 1.1, crowdworkers were tasked with asking and answering up to 5 questions on the content of each paragraph. The questions had to be entered in a text field, and the answers had to be highlighted in the paragraph. For SQuAD2.0, each task consisted of an entire article from SQuAD 1.1. For each paragraph in the article, workers were asked to pose up to five questions that were impossible to answer based on the paragraph alone, while referencing entities in the paragraph and ensuring that a plausible answer is present. #### Any Quality Control? <!-- info: Quality control measures? --> <!-- scope: telescope --> validated by another rater #### Quality Control Details <!-- info: Describe the quality control measures that were taken. 
--> <!-- scope: microscope --> Questions from workers who wrote 25 or fewer questions on an article are removed; this filter helped remove noise from workers who had trouble understanding the task and therefore quit before completing the whole article. This filter was applied to both SQuAD2.0 and the existing answerable questions from SQuAD 1.1. ### Consent #### Any Consent Policy? <!-- info: Was there a consent policy involved when gathering the data? --> <!-- scope: telescope --> no ### Private Identifying Information (PII) #### Contains PII? <!-- quick --> <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? --> <!-- scope: telescope --> unlikely #### Any PII Identification? <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? --> <!-- scope: periscope --> no identification ### Maintenance #### Any Maintenance Plan? <!-- info: Does the original dataset have a maintenance plan? --> <!-- scope: telescope --> no ## Broader Social Context ### Previous Work on the Social Impact of the Dataset #### Usage of Models based on the Data <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? --> <!-- scope: telescope --> no ### Impact on Under-Served Communities #### Addresses needs of underserved Communities? <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). --> <!-- scope: telescope --> no ### Discussion of Biases #### Any Documented Social Biases? <!-- info: Are there documented social biases in the dataset? 
Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. --> <!-- scope: telescope --> yes ## Considerations for Using the Data ### PII Risks and Liability ### Licenses ### Known Technical Limitations
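The Exact Match and token-level F1 metrics listed under Other Metrics can be sketched in a few lines of plain Python. This is an illustrative approximation of the SQuAD answer normalization (lowercasing, stripping punctuation and articles), not the official evaluation script:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> int:
    """1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted span and a reference span."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1 (articles are stripped)
print(round(f1_score("in Paris, France", "Paris"), 2))  # 0.5
```

The official script additionally takes the maximum score over all gold answers for a question; that step is omitted here for brevity.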
17,828
[ [ -0.026397705078125, -0.06427001953125, 0.0267486572265625, 0.0005469322204589844, 0.004241943359375, 0.0223236083984375, -0.0075836181640625, -0.02227783203125, 0.0130462646484375, 0.0198211669921875, -0.07891845703125, -0.040496826171875, -0.023345947265625, ...
Graphcore/gqa-lxmert
2023-01-09T11:28:07.000Z
[ "language:en", "license:cc-by-4.0", "region:us" ]
Graphcore
GQA is a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous visual question answering (VQA) datasets.
@inproceedings{hudson2019gqa, title={Gqa: A new dataset for real-world visual reasoning and compositional question answering}, author={Hudson, Drew A and Manning, Christopher D}, booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, pages={6700--6709}, year={2019} }
0
89
2022-03-02T23:29:22
--- language: - en license: - cc-by-4.0 ---
45
[ [ -0.00897979736328125, -0.01299285888671875, 0.045623779296875, 0.03460693359375, -0.01352691650390625, 0.0164794921875, 0.0252227783203125, 0.00353240966796875, 0.040496826171875, 0.051788330078125, -0.045501708984375, -0.016632080078125, -0.049652099609375, ...
khondoker/SentNoB
2023-04-23T10:32:36.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "language:bn", "region:us" ]
khondoker
null
null
0
89
2022-03-02T23:29:22
--- language: - bn task_categories: - text-classification pretty_name: SentNoB task_ids: - sentiment-classification annotations_creators: - expert-generated language_creators: - expert-generated paperswithcode_id: sentnob --- # Dataset Card for "SentNoB" ### Dataset Summary Social Media User Comments' Sentiment Analysis Dataset. Each user comment is labeled as either positive (1), negative (2), or neutral (0). ### Citation Information ``` @inproceedings{islam2021sentnob, title={SentNoB: A Dataset for Analysing Sentiment on Noisy Bangla Texts}, author={Islam, Khondoker Ittehadul and Kar, Sudipta and Islam, Md Saiful and Amin, Mohammad Ruhul}, booktitle={Findings of the Association for Computational Linguistics: EMNLP 2021}, pages={3265--3271}, year={2021} } ```
790
[ [ -0.0283355712890625, -0.030914306640625, -0.0029201507568359375, 0.032989501953125, -0.0350341796875, -0.0026569366455078125, -0.02496337890625, -0.00463104248046875, 0.0360107421875, 0.0311431884765625, -0.041656494140625, -0.0570068359375, -0.050750732421875, ...
Lucylulu/imdb
2021-12-08T20:20:37.000Z
[ "region:us" ]
Lucylulu
null
null
0
89
2022-03-02T23:29:22
Entry not found
15
[ [ -0.021392822265625, -0.01494598388671875, 0.05718994140625, 0.028839111328125, -0.0350341796875, 0.046539306640625, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.01702880859375, -0.052093505859375, -0.01494598388671875, -0.06036376953125, 0.03790...
allegro/klej-polemo2-in
2022-08-30T06:57:28.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-4.0", "region:us" ]
allegro
null
null
0
89
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: 'PolEmo2.0-IN' size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # klej-polemo2-in ## Description PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated at the level of full reviews and individual sentences. It comprises over 8000 reviews, about 85% from the medicine and hotel domains. We use the PolEmo2.0 dataset to form two tasks. Both use the same training dataset, i.e., reviews from the medicine and hotel domains, but are evaluated on different test sets. **In-Domain** is the first task, and we use accuracy to evaluate model performance within the in-domain context, i.e., on a test set of reviews from the medicine and hotel domains. ## Tasks (input, output, and metrics) The task is to predict the correct label of the review. **Input** (*text* column): sentence **Output** (*target* column): label for sentence sentiment ('zero': neutral, 'minus': negative, 'plus': positive, 'amb': ambiguous) **Domain**: Online reviews **Measurements**: Accuracy **Example**: Input: `Lekarz zalecił mi kurację alternatywną do dotychczasowej , więc jeszcze nie daję najwyższej oceny ( zobaczymy na ile okaże się skuteczna ) . Do Pana doktora nie mam zastrzeżeń : bardzo profesjonalny i kulturalny . Jedyny minus dotyczy gabinetu , który nie jest nowoczesny , co może zniechęcać pacjentki .` Input (translated by DeepL): `The doctor recommended me an alternative treatment to the current one , so I do not yet give the highest rating ( we will see how effective it turns out to be ) . To the doctor I have no reservations : very professional and cultured . 
The only minus is about the office , which is not modern , which may discourage patients .` Output: `amb` (ambiguous) ## Data splits | Subset | Cardinality | |:-----------|--------------:| | train | 5783 | | test | 722 | | validation | 723 | ## Class distribution | Class | Sentiment | train | validation | test | |:------|:----------|------:|-----------:|------:| | minus | negative | 0.379 | 0.375 | 0.416 | | plus | positive | 0.271 | 0.289 | 0.273 | | amb | ambiguous | 0.182 | 0.160 | 0.150 | | zero | neutral | 0.168 | 0.176 | 0.162 | ## Citation ``` @inproceedings{kocon-etal-2019-multi, title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews", author = "Koco{\'n}, Jan and Mi{\l}kowski, Piotr and Za{\'s}ko-Zieli{\'n}ska, Monika", booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/K19-1092", doi = "10.18653/v1/K19-1092", pages = "980--991", abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. 
We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).", } ``` ## License ``` Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) ``` ## Links [HuggingFace](https://huggingface.co/datasets/allegro/klej-polemo2-in) [Source](https://clarin-pl.eu/dspace/handle/11321/710) [Paper](https://aclanthology.org/K19-1092/) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("allegro/klej-polemo2-in") pprint(dataset['train'][0]) # {'sentence': 'Super lekarz i człowiek przez duże C . Bardzo duże doświadczenie ' # 'i trafne diagnozy . Wielka cierpliwość do ludzi starszych . Od ' # 'lat opiekuje się moją Mamą staruszką , i twierdzę , że mamy duże ' # 'szczęście , że mamy takiego lekarza . Naprawdę nie wiem cobyśmy ' # 'zrobili , gdyby nie Pan doktor . Dzięki temu , moja mama żyje . ' # 'Każda wizyta u specjalisty jest u niego konsultowana i uważam , ' # 'że jest lepszy od każdego z nich . Mamy do Niego prawie ' # 'nieograniczone zaufanie . Można wiele dobrego o Panu doktorze ' # 'jeszcze napisać . 
Niestety , ma bardzo dużo pacjentów , jest ' # 'przepracowany ( z tego powodu nawet obawiam się o jego zdrowie ) ' # 'i dostęp do niego jest trudny , ale zawsze możliwy .', # 'target': '__label__meta_plus_m'} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("allegro/klej-polemo2-in") dataset = dataset.class_encode_column("target") references = dataset["test"]["target"] # generate random predictions predictions = [random.randrange(max(references) + 1) for _ in range(len(references))] acc = load_metric("accuracy") f1 = load_metric("f1") acc_score = acc.compute(predictions=predictions, references=references) f1_score = f1.compute(predictions=predictions, references=references, average="macro") pprint(acc_score) pprint(f1_score) # {'accuracy': 0.25069252077562326} # {'f1': 0.23760962219870274} ```
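The accuracy and macro-averaged F1 used in the evaluation snippet above can also be computed directly, without the `datasets` metric loaders. A minimal sketch follows; the label sequences below are illustrative toy data, not drawn from the corpus:

```python
def accuracy(predictions, references):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

def macro_f1(predictions, references):
    """Per-class F1, averaged uniformly over the classes in `references`."""
    scores = []
    for label in set(references):
        tp = sum(p == label and r == label for p, r in zip(predictions, references))
        fp = sum(p == label and r != label for p, r in zip(predictions, references))
        fn = sum(p != label and r == label for p, r in zip(predictions, references))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

refs = ["plus", "minus", "zero", "plus", "amb"]
preds = ["plus", "minus", "plus", "plus", "zero"]
print(accuracy(preds, refs))   # 0.6
print(macro_f1(preds, refs))   # 0.45
```

Macro averaging weights rare classes like `zero` as heavily as frequent ones, which is why the random baseline's macro F1 above is lower than its accuracy.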
5,963
[ [ -0.04315185546875, -0.037506103515625, 0.0238800048828125, 0.0228271484375, -0.030731201171875, -0.005466461181640625, -0.0364990234375, -0.04278564453125, 0.0262908935546875, 0.018402099609375, -0.04010009765625, -0.052093505859375, -0.04852294921875, 0.018...
allegro/klej-polemo2-out
2022-08-30T06:57:07.000Z
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-4.0", "region:us" ]
allegro
null
null
0
89
2022-03-02T23:29:22
--- annotations_creators: - expert-generated language_creators: - other language: - pl license: - cc-by-sa-4.0 multilinguality: - monolingual pretty_name: 'PolEmo2.0-OUT' size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # klej-polemo2-out ## Description PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated at the level of full reviews and individual sentences. It comprises over 8000 reviews, about 85% from the medicine and hotel domains. We use the PolEmo2.0 dataset to form two tasks. Both use the same training dataset, i.e., reviews from the medicine and hotel domains, but are evaluated on different test sets. **Out-of-Domain** is the second task, and we test the model on out-of-domain reviews, i.e., from the product and university domains. Since the original test sets for those domains are scarce (50 reviews each), we decided to use the original out-of-domain training set of 900 reviews for testing purposes and create a new split of development and test sets. As a result, the task consists of 1000 reviews, comparable in size to the in-domain test dataset of 1400 reviews. ## Tasks (input, output, and metrics) The task is to predict the correct label of the review. **Input** (*text* column): sentence **Output** (*target* column): label for sentence sentiment ('zero': neutral, 'minus': negative, 'plus': positive, 'amb': ambiguous) **Domain**: Online reviews **Measurements**: Accuracy **Example**: Input: `Lekarz zalecił mi kurację alternatywną do dotychczasowej , więc jeszcze nie daję najwyższej oceny ( zobaczymy na ile okaże się skuteczna ) . Do Pana doktora nie mam zastrzeżeń : bardzo profesjonalny i kulturalny . 
Jedyny minus dotyczy gabinetu , który nie jest nowoczesny , co może zniechęcać pacjentki .` Input (translated by DeepL): `The doctor recommended me an alternative treatment to the current one , so I do not yet give the highest rating ( we will see how effective it turns out to be ) . To the doctor I have no reservations : very professional and cultured . The only minus is about the office , which is not modern , which may discourage patients .` Output: `amb` (ambiguous) ## Data splits | Subset | Cardinality | |:-----------|--------------:| | train | 5783 | | test | 722 | | validation | 723 | ## Class distribution | Class | Sentiment | train | validation | test | |:------|:----------|------:|-----------:|------:| | minus | negative | 0.379 | 0.334 | 0.368 | | plus | positive | 0.271 | 0.332 | 0.302 | | amb | ambiguous | 0.182 | 0.332 | 0.328 | | zero | neutral | 0.168 | 0.002 | 0.002 | ## Citation ``` @inproceedings{kocon-etal-2019-multi, title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews", author = "Koco{\'n}, Jan and Mi{\l}kowski, Piotr and Za{\'s}ko-Zieli{\'n}ska, Monika", booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/K19-1092", doi = "10.18653/v1/K19-1092", pages = "980--991", abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. 
We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).", } ``` ## License ``` Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) ``` ## Links [HuggingFace](https://huggingface.co/datasets/allegro/klej-polemo2-out) [Source](https://clarin-pl.eu/dspace/handle/11321/710) [Paper](https://aclanthology.org/K19-1092/) ## Examples ### Loading ```python from pprint import pprint from datasets import load_dataset dataset = load_dataset("allegro/klej-polemo2-out") pprint(dataset['train'][0]) # {'sentence': 'Super lekarz i człowiek przez duże C . Bardzo duże doświadczenie ' # 'i trafne diagnozy . Wielka cierpliwość do ludzi starszych . Od ' # 'lat opiekuje się moją Mamą staruszką , i twierdzę , że mamy duże ' # 'szczęście , że mamy takiego lekarza . Naprawdę nie wiem cobyśmy ' # 'zrobili , gdyby nie Pan doktor . Dzięki temu , moja mama żyje . ' # 'Każda wizyta u specjalisty jest u niego konsultowana i uważam , ' # 'że jest lepszy od każdego z nich . Mamy do Niego prawie ' # 'nieograniczone zaufanie . Można wiele dobrego o Panu doktorze ' # 'jeszcze napisać . 
Niestety , ma bardzo dużo pacjentów , jest ' # 'przepracowany ( z tego powodu nawet obawiam się o jego zdrowie ) ' # 'i dostęp do niego jest trudny , ale zawsze możliwy .', # 'target': '__label__meta_plus_m'} ``` ### Evaluation ```python import random from pprint import pprint from datasets import load_dataset, load_metric dataset = load_dataset("allegro/klej-polemo2-out") dataset = dataset.class_encode_column("target") references = dataset["test"]["target"] # generate random predictions predictions = [random.randrange(max(references) + 1) for _ in range(len(references))] acc = load_metric("accuracy") f1 = load_metric("f1") acc_score = acc.compute(predictions=predictions, references=references) f1_score = f1.compute(predictions=predictions, references=references, average="macro") pprint(acc_score) pprint(f1_score) # {'accuracy': 0.2894736842105263} # {'f1': 0.2484406098784191} ```
6,244
[ [ -0.046112060546875, -0.037353515625, 0.022064208984375, 0.0225372314453125, -0.0302276611328125, -0.00608062744140625, -0.036651611328125, -0.04144287109375, 0.02667236328125, 0.0199127197265625, -0.04156494140625, -0.050933837890625, -0.046630859375, 0.0164...
MLCommons/ml_spoken_words
2022-12-06T11:11:02.000Z
[ "task_categories:audio-classification", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:extended|common_voice", "language:ar", "language:as", "language:br", "language:ca", "language:cnh", "langu...
MLCommons
Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages collectively spoken by over 5 billion people, for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset has many use cases, ranging from voice-enabled consumer devices to call center automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level audio to produce per-word timing estimates for extraction. All alignments are included in the dataset.
@inproceedings{mazumder2021multilingual, title={Multilingual Spoken Words Corpus}, author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021} }
16
89
2022-03-02T23:29:22
--- annotations_creators: - machine-generated language_creators: - other language: - ar - as - br - ca - cnh - cs - cv - cy - de - dv - el - en - eo - es - et - eu - fa - fr - fy - ga - gn - ha - ia - id - it - ka - ky - lt - lv - mn - mt - nl - or - pl - pt - rm - ro - ru - rw - sah - sk - sl - sv - ta - tr - tt - uk - vi - zh license: - cc-by-4.0 multilinguality: - multilingual size_categories: - 10M<n<100M source_datasets: - extended|common_voice task_categories: - audio-classification task_ids: [] pretty_name: Multilingual Spoken Words language_bcp47: - fy-NL - ga-IE - rm-sursilv - rm-vallader - sv-SE - zh-CN tags: - other-keyword-spotting --- # Dataset Card for Multilingual Spoken Words ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://mlcommons.org/en/multilingual-spoken-words/ - **Repository:** https://github.com/harvard-edge/multilingual_kws - **Paper:** 
https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages collectively spoken by over 5 billion people, for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours). The dataset has many use cases, ranging from voice-enabled consumer devices to call center automation. This dataset is generated by applying forced alignment on crowd-sourced sentence-level audio to produce per-word timing estimates for extraction. All alignments are included in the dataset. Data is provided in two formats: `wav` (16 kHz) and `opus` (48 kHz). Default configurations look like `"{lang}_{format}"`, so to load, for example, Tatar in wav format do: ```python ds = load_dataset("MLCommons/ml_spoken_words", "tt_wav") ``` To download multiple languages in a single dataset, pass a list of languages to the `languages` argument: ```python ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"]) ``` To download a specific format, pass it to the `format` argument (the default format is `wav`): ```python ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"], format="opus") ``` Note that each time you provide different sets of languages, examples are generated from scratch even if you already provided one or several of them before, because custom configurations are created each time (the data is **not** redownloaded, though). ### Supported Tasks and Leaderboards Keyword spotting, spoken term search ### Languages The dataset is multilingual. 
To specify several languages to download, pass a list of them to the `languages` argument: ```python ds = load_dataset("MLCommons/ml_spoken_words", languages=["ar", "tt", "br"]) ``` The dataset contains data for the following languages: Low-resourced (<10 hours): * Arabic (0.1G, 7.6h) * Assamese (0.9M, 0.1h) * Breton (69M, 5.6h) * Chuvash (28M, 2.1h) * Chinese (zh-CN) (42M, 3.1h) * Dhivehi (0.7M, 0.04h) * Frisian (0.1G, 9.6h) * Georgian (20M, 1.4h) * Guarani (0.7M, 1.3h) * Greek (84M, 6.7h) * Hakha Chin (26M, 0.1h) * Hausa (90M, 1.0h) * Interlingua (58M, 4.0h) * Irish (38M, 3.2h) * Latvian (51M, 4.2h) * Lithuanian (21M, 0.46h) * Maltese (88M, 7.3h) * Oriya (0.7M, 0.1h) * Romanian (59M, 4.5h) * Sakha (42M, 3.3h) * Slovenian (43M, 3.0h) * Slovak (31M, 1.9h) * Sursilvan (61M, 4.8h) * Tamil (8.8M, 0.6h) * Vallader (14M, 1.2h) * Vietnamese (1.2M, 0.1h) Medium-resourced (>10 & <100 hours): * Czech (0.3G, 24h) * Dutch (0.8G, 70h) * Estonian (0.2G, 19h) * Esperanto (1.3G, 77h) * Indonesian (0.1G, 11h) * Kyrgyz (0.1G, 12h) * Mongolian (0.1G, 12h) * Portuguese (0.7G, 58h) * Swedish (0.1G, 12h) * Tatar (4G, 30h) * Turkish (1.3G, 29h) * Ukrainian (0.2G, 18h) High-resourced (>100 hours): * Basque (1.7G, 118h) * Catalan (8.7G, 615h) * English (26G, 1957h) * French (9.3G, 754h) * German (14G, 1083h) * Italian (2.2G, 155h) * Kinyarwanda (6.1G, 422h) * Persian (4.5G, 327h) * Polish (1.8G, 130h) * Russian (2.1G, 137h) * Spanish (4.9G, 349h) * Welsh (4.5G, 108h) ## Dataset Structure ### Data Instances ```python {'file': 'абзар_common_voice_tt_17737010.opus', 'is_valid': True, 'language': 0, 'speaker_id': '687025afd5ce033048472754c8d2cb1cf8a617e469866bbdb3746e2bb2194202094a715906f91feb1c546893a5d835347f4869e7def2e360ace6616fb4340e38', 'gender': 0, 'keyword': 'абзар', 'audio': {'path': 'абзар_common_voice_tt_17737010.opus', 'array': array([2.03458695e-34, 2.03458695e-34, 2.03458695e-34, ..., 2.03458695e-34, 2.03458695e-34, 2.03458695e-34]), 'sampling_rate': 48000}} ``` ### Data Fields * 
file: string, relative audio path inside the archive * is_valid: if a sample is valid * language: language of an instance. Makes sense only when providing multiple languages to the dataset loader (for example, `load_dataset("ml_spoken_words", languages=["ar", "tt"])`) * speaker_id: unique id of a speaker. Can be "NA" if an instance is invalid * gender: speaker gender. Can be one of `["MALE", "FEMALE", "OTHER", "NAN"]` * keyword: word spoken in a current sample * audio: a dictionary containing the relative path to the audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus, it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]` ### Data Splits The data for each language is split into train / validation / test parts. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data comes from the Common Voice dataset. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers. 
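The access-order advice above (index the example first, then read its "audio" field) can be illustrated with a toy lazy-decoding column. The class below is a mock standing in for costly per-item decoding, not the actual `datasets` implementation:

```python
class LazyAudioColumn:
    """Mock of a column whose items are decoded only when accessed."""

    def __init__(self, paths):
        self.paths = paths
        self.decode_calls = 0  # counts how many "decodes" were triggered

    def __getitem__(self, i):
        self.decode_calls += 1  # stands in for costly decode + resample work
        return {"path": self.paths[i], "array": [0.0], "sampling_rate": 48000}

    def materialize_all(self):
        # dataset["audio"][0] first materializes the whole column, then indexes it
        return [self[i] for i in range(len(self.paths))]


paths = [f"clip_{i}.opus" for i in range(1000)]

col = LazyAudioColumn(paths)
first = col.materialize_all()[0]  # like dataset["audio"][0]
print(col.decode_calls)           # 1000 decodes to get one sample

col = LazyAudioColumn(paths)
first = col[0]                    # like dataset[0]["audio"]
print(col.decode_calls)           # 1 decode
```

The toy numbers make the point: indexing the row first touches a single file, while indexing the column first pays the decoding cost for every row.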
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) and can be used for academic research and commercial applications in keyword spotting and spoken term search. ### Citation Information ``` @inproceedings{mazumder2021multilingual, title={Multilingual Spoken Words Corpus}, author={Mazumder, Mark and Chitlangia, Sharad and Banbury, Colby and Kang, Yiping and Ciro, Juan Manuel and Achorn, Keith and Galvez, Daniel and Sabini, Mark and Mattson, Peter and Kanter, David and others}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)}, year={2021} } ``` ### Contributions Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
8,597
[ [ -0.0290679931640625, -0.03662109375, -0.0032329559326171875, 0.025634765625, -0.01256561279296875, -0.0005741119384765625, -0.048431396484375, -0.0289154052734375, 0.0305023193359375, 0.02880859375, -0.04278564453125, -0.07568359375, -0.041229248046875, 0.02...
Yaxin/SemEval2016Task5Raw
2022-08-15T08:19:35.000Z
[ "region:us" ]
Yaxin
A collection of SemEval2016 specifically designed to aid research in multilingual Aspect Based Sentiment Analysis.
@inproceedings{pontiki2016semeval, title={Semeval-2016 task 5: Aspect based sentiment analysis}, author={Pontiki, Maria and Galanis, Dimitrios and Papageorgiou, Haris and Androutsopoulos, Ion and Manandhar, Suresh and Al-Smadi, Mohammad and Al-Ayyoub, Mahmoud and Zhao, Yanyan and Qin, Bing and De Clercq, Orph{\'e}e and others}, booktitle={International workshop on semantic evaluation}, pages={19--30}, year={2016} }
2
89
2022-04-20T14:39:38
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
bigbio/scai_chemical
2022-12-22T15:46:32.000Z
[ "multilinguality:monolingual", "language:en", "license:unknown", "region:us" ]
bigbio
SCAI Chemical is a corpus of MEDLINE abstracts that has been annotated to give an overview of the different chemical name classes found in MEDLINE text.
@inproceedings{kolarik:lrec-ws08, author = {Kol{\'a}{\vr}ik, Corinna and Klinger, Roman and Friedrich, Christoph M and Hofmann-Apitius, Martin and Fluck, Juliane}, title = {Chemical Names: {T}erminological Resources and Corpora Annotation}, booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining}, year = {2008}, }
2
89
2022-11-13T22:11:56
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: SCAI Chemical homepage: https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html bigbio_pubmed: True bigbio_public: True bigbio_tasks: - NAMED_ENTITY_RECOGNITION --- # Dataset Card for SCAI Chemical ## Dataset Description - **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html - **Pubmed:** True - **Public:** True - **Tasks:** NER SCAI Chemical is a corpus of MEDLINE abstracts that has been annotated to give an overview of the different chemical name classes found in MEDLINE text. ## Citation Information ``` @inproceedings{kolarik:lrec-ws08, author = {Kol{\'a}{\vr}ik, Corinna and Klinger, Roman and Friedrich, Christoph M and Hofmann-Apitius, Martin and Fluck, Juliane}, title = {Chemical Names: {T}erminological Resources and Corpora Annotation}, booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining}, year = {2008}, } ```
1,200
[ [ -0.001697540283203125, -0.0159149169921875, 0.03948974609375, -0.00580596923828125, -0.00994110107421875, 0.00447845458984375, -0.02484130859375, -0.0299224853515625, 0.046661376953125, 0.025970458984375, -0.036163330078125, -0.051300048828125, -0.04165649414062...
gonzalobenegas/clinvar
2023-02-09T23:32:45.000Z
[ "region:us" ]
gonzalobenegas
null
null
0
89
2023-02-09T23:32:39
Entry not found
15
[ [ -0.0213775634765625, -0.01497650146484375, 0.05718994140625, 0.02880859375, -0.0350341796875, 0.046478271484375, 0.052490234375, 0.00507354736328125, 0.051361083984375, 0.0170135498046875, -0.052093505859375, -0.01497650146484375, -0.0604248046875, 0.0379028...
IlyaGusev/ru_news
2023-03-20T23:05:08.000Z
[ "task_categories:text-generation", "size_categories:1M<n<10M", "language:ru", "region:us" ]
IlyaGusev
null
null
3
89
2023-03-12T20:56:14
--- dataset_info: features: - name: url dtype: string - name: text dtype: string - name: title dtype: string - name: source dtype: string - name: timestamp dtype: uint64 splits: - name: train num_bytes: 12858731888 num_examples: 4137525 download_size: 3669747077 dataset_size: 12858731888 task_categories: - text-generation language: - ru size_categories: - 1M<n<10M --- # RuNews dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Description](#description) - [Usage](#usage) - [Data Instances](#data-instances) - [Personal and Sensitive Information](#personal-and-sensitive-information) ## Description **Summary:** Dataset of news from several sources: * [Lenta.ru by yutkin](https://github.com/yutkin/Lenta.Ru-News-Dataset) * [Several sources by buriy](https://github.com/buriy/russian-nlp-datasets/releases) * [ODS Newsviz Tass](https://github.com/newsviz/newsviz) * [Taiga fontanka](https://tatianashavrina.github.io/taiga_site/) * [News from Telegram contest](https://github.com/IlyaGusev/tgcontest) **Script:** [create_ru_news.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_ru_news.py) **Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu) **Languages:** Russian. ## Usage Prerequisites: ```bash pip install datasets zstandard jsonlines pysimdjson ``` Dataset iteration: ```python from datasets import load_dataset dataset = load_dataset('IlyaGusev/ru_news', split="train", streaming=True) for example in dataset: print(example["text"]) ``` ## Data Instances ``` { "title": "Заместитель главы района в Якутии пожаловался на пьянство начальника", "text": "Заместитель главы Нерюнгринского района Якутии Геннадий Ленц пожаловался руководителю республики Егору Борисову на своего начальника. 
Как рассказал Ленц 'Интерфаксу', Андрей Фитисов пьет на рабочем месте и 'уходит в многодневные загулы'...", "timestamp": 1346284800, "url": "https://lenta.ru/news/2012/08/30/alco/", "source": "lenta" } ``` ## Personal and Sensitive Information The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.
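The `timestamp` field in the data instance above is declared as a `uint64` Unix epoch. A minimal sketch of converting it back to a calendar date, using the sample value from the instance shown (the resulting date should match the date encoded in the sample's `url`):

```python
from datetime import datetime, timezone

# Sample `timestamp` value from the data instance above (uint64 Unix epoch, seconds).
ts = 1346284800

# Convert to an aware UTC datetime; the date matches the one in the sample URL
# (https://lenta.ru/news/2012/08/30/alco/).
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # → 2012-08-30T00:00:00+00:00
```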
2,234
[ [ -0.01422119140625, -0.028076171875, 0.02734375, 0.0026569366455078125, -0.033477783203125, -0.0038852691650390625, -0.022705078125, -0.0127105712890625, 0.02001953125, 0.0287933349609375, -0.04974365234375, -0.06793212890625, -0.037689208984375, 0.0151824951...
sarus-tech/phee
2023-06-21T19:36:26.000Z
[ "arxiv:2210.12560", "region:us" ]
sarus-tech
Data and Code for [``PHEE: A Dataset for Pharmacovigilance Event Extraction from Text``](https://arxiv.org/abs/2210.12560/)
@misc{sun2022phee, title={PHEE: A Dataset for Pharmacovigilance Event Extraction from Text}, author={Zhaoyue Sun and Jiazheng Li and Gabriele Pergola and Byron C. Wallace and Bino John and Nigel Greene and Joseph Kim and Yulan He}, year={2022}, eprint={2210.12560}, archivePrefix={arXiv}, primaryClass={cs.CL} }
1
89
2023-06-21T14:49:19
# PHEE dataset This dataset is a port of https://github.com/ZhaoyueSun/PHEE, the data used in: [``PHEE: A Dataset for Pharmacovigilance Event Extraction from Text``](https://arxiv.org/abs/2210.12560/)
200
[ [ 0.00795745849609375, -0.0227813720703125, 0.03594970703125, -0.006069183349609375, -0.0100555419921875, -0.021270751953125, 0.0169525146484375, -0.0172271728515625, 0.032958984375, 0.05450439453125, -0.042205810546875, -0.038299560546875, -0.03125, -0.003480...
vuducanh/b3-userstudy-data
2023-10-23T12:57:27.000Z
[ "license:mit", "region:us" ]
vuducanh
null
null
0
89
2023-09-06T10:41:57
--- license: mit --- dataset sources: shark_dataset_location = "https://www.kaggle.com/datasets/mysarahmadbhat/shark-attacks" nba_dataset_location = "https://zenodo.org/record/6419727" fec_dataset_location = "https://github.com/wesm/pydata-book/blob/2nd-edition/datasets/fec/P00000001-ALL.csv"
309
[ [ -0.005100250244140625, -0.049530029296875, 0.019866943359375, -0.002593994140625, -0.01253509521484375, 0.024871826171875, 0.0291595458984375, -0.0243377685546875, 0.027587890625, 0.0304718017578125, -0.053985595703125, -0.04010009765625, -0.0247650146484375, ...
yzhuang/autotree_pmlb_10000_banana_sgosdt_l256_dim10_d3_sd0
2023-09-07T01:51:46.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T01:51:42
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 154520000 num_examples: 10000 - name: validation num_bytes: 154520000 num_examples: 10000 download_size: 50636856 dataset_size: 309040000 --- # Dataset Card for "autotree_pmlb_10000_banana_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
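The `dataset_info` block above reports per-split byte counts alongside an overall `dataset_size` and a (compressed) `download_size`. A small sanity check one might run on this metadata, hard-coding the numbers from this card:

```python
# Split sizes and totals as reported in the dataset_info block above.
splits = {"train": 154_520_000, "validation": 154_520_000}
dataset_size = 309_040_000
download_size = 50_636_856

# dataset_size should equal the sum of the split byte counts,
# and the compressed download should be smaller than the on-disk size.
assert sum(splits.values()) == dataset_size
assert download_size < dataset_size
print("metadata consistent")
```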
841
[ [ -0.034210205078125, -0.021636962890625, 0.01328277587890625, 0.033111572265625, -0.0222930908203125, 0.0228271484375, 0.047821044921875, -0.00016224384307861328, 0.0625, 0.034820556640625, -0.056549072265625, -0.034454345703125, -0.05828857421875, 0.00442123...
yzhuang/autotree_automl_10000_eye_movements_sgosdt_l256_dim10_d3_sd0
2023-09-07T03:32:07.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T03:31:59
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 155715478 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_eye_movements_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
851
[ [ -0.0171051025390625, -0.018890380859375, 0.0272979736328125, 0.01177215576171875, -0.0125274658203125, 0.0025386810302734375, 0.045166015625, -0.018798828125, 0.061492919921875, 0.03143310546875, -0.06304931640625, -0.04638671875, -0.04931640625, 0.000883579...
yzhuang/autotree_pmlb_10000_spambase_sgosdt_l256_dim10_d3_sd0
2023-09-07T03:32:53.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T03:32:48
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 62261087 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_spambase_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
843
[ [ -0.040008544921875, -0.0228271484375, 0.005802154541015625, 0.037933349609375, -0.0119781494140625, 0.0119476318359375, 0.04937744140625, 0.0088958740234375, 0.058349609375, 0.031402587890625, -0.0640869140625, -0.0455322265625, -0.0537109375, 0.008743286132...
yzhuang/autotree_automl_10000_california_sgosdt_l256_dim8_d3_sd0
2023-09-07T03:44:46.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T03:44:39
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 215960000 num_examples: 10000 - name: validation num_bytes: 215960000 num_examples: 10000 download_size: 151409122 dataset_size: 431920000 --- # Dataset Card for "autotree_automl_10000_california_sgosdt_l256_dim8_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
847
[ [ -0.026641845703125, -0.0177001953125, 0.0295257568359375, 0.0254058837890625, -0.0108795166015625, 0.01126861572265625, 0.04876708984375, -0.0011196136474609375, 0.055206298828125, 0.0328369140625, -0.058746337890625, -0.048492431640625, -0.04351806640625, 0...
yzhuang/autotree_pmlb_10000_phoneme_sgosdt_l256_dim10_d3_sd0
2023-09-07T04:06:07.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T04:06:02
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 185240000 num_examples: 10000 - name: validation num_bytes: 185240000 num_examples: 10000 download_size: 68514231 dataset_size: 370480000 --- # Dataset Card for "autotree_pmlb_10000_phoneme_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
842
[ [ -0.0293121337890625, -0.01410675048828125, 0.00849151611328125, 0.0309906005859375, -0.0185089111328125, 0.01255035400390625, 0.04827880859375, -0.0021762847900390625, 0.0595703125, 0.026214599609375, -0.06005859375, -0.041168212890625, -0.041900634765625, 0...
yzhuang/autotree_automl_10000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0
2023-09-07T04:10:11.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T04:10:04
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 122258450 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_default-of-credit-card-clients_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
868
[ [ -0.0293121337890625, -0.01284027099609375, 0.01401519775390625, 0.0128326416015625, -0.01861572265625, 0.009979248046875, 0.039825439453125, -0.004306793212890625, 0.046173095703125, 0.032470703125, -0.051483154296875, -0.048187255859375, -0.037933349609375, ...
yzhuang/autotree_pmlb_10000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0
2023-09-07T04:14:27.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T04:14:20
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 172085873 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_Hill_Valley_with_noise_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
858
[ [ -0.0232391357421875, -0.0279998779296875, 0.007381439208984375, 0.0237884521484375, -0.01308441162109375, 0.00435638427734375, 0.051361083984375, -0.0030078887939453125, 0.04241943359375, 0.026336669921875, -0.07427978515625, -0.045928955078125, -0.0470886230468...
yzhuang/autotree_pmlb_10000_clean2_sgosdt_l256_dim10_d3_sd0
2023-09-07T04:54:58.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T04:54:53
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 111490531 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_clean2_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
842
[ [ -0.0276336669921875, -0.01396942138671875, 0.00803375244140625, 0.01702880859375, -0.0183868408203125, 0.01059722900390625, 0.04705810546875, -0.0022258758544921875, 0.057342529296875, 0.0313720703125, -0.060455322265625, -0.038360595703125, -0.041595458984375, ...
yzhuang/autotree_automl_10000_house_16H_sgosdt_l256_dim10_d3_sd0
2023-09-07T05:11:36.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T05:11:28
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 168523499 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_house_16H_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
847
[ [ -0.034149169921875, -0.022186279296875, 0.0140380859375, 0.01995849609375, -0.008270263671875, 0.016326904296875, 0.040618896484375, -0.0020999908447265625, 0.051422119140625, 0.025390625, -0.052764892578125, -0.0467529296875, -0.047088623046875, 0.006427764...
yzhuang/autotree_pmlb_10000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0
2023-09-07T05:25:19.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T05:25:11
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 179483399 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_Hill_Valley_without_noise_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
861
[ [ -0.02447509765625, -0.0276031494140625, 0.009490966796875, 0.0225067138671875, -0.0149688720703125, 0.0035076141357421875, 0.051971435546875, -0.00420379638671875, 0.044830322265625, 0.030120849609375, -0.076171875, -0.049224853515625, -0.046844482421875, -0...
yzhuang/autotree_pmlb_10000_magic_sgosdt_l256_dim10_d3_sd0
2023-09-07T05:44:01.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T05:43:51
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 188904676 dataset_size: 472880000 --- # Dataset Card for "autotree_pmlb_10000_magic_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
841
[ [ -0.0288848876953125, -0.01396942138671875, 0.0019664764404296875, 0.027069091796875, -0.01531219482421875, 0.01305389404296875, 0.042022705078125, 0.001987457275390625, 0.060394287109375, 0.0303802490234375, -0.064208984375, -0.03546142578125, -0.047027587890625...
yzhuang/autotree_automl_10000_MagicTelescope_sgosdt_l256_dim10_d3_sd0
2023-09-07T05:48:36.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T05:48:29
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 186721409 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_MagicTelescope_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
852
[ [ -0.035247802734375, -0.004497528076171875, 0.01544952392578125, 0.0161590576171875, -0.017608642578125, 0.0021190643310546875, 0.038116455078125, -0.002574920654296875, 0.052734375, 0.0272979736328125, -0.0653076171875, -0.03955078125, -0.05084228515625, 0.0...
yzhuang/autotree_automl_10000_MiniBooNE_sgosdt_l256_dim10_d3_sd0
2023-09-07T06:03:38.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T06:03:29
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 293033260 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_MiniBooNE_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
847
[ [ -0.036773681640625, -0.0179901123046875, 0.01654052734375, 0.0195465087890625, -0.015228271484375, 0.0076141357421875, 0.042266845703125, -0.0031261444091796875, 0.058013916015625, 0.0243682861328125, -0.06610107421875, -0.035308837890625, -0.046539306640625, ...
yzhuang/autotree_automl_10000_jannis_sgosdt_l256_dim10_d3_sd0
2023-09-07T06:07:04.000Z
[ "region:us" ]
yzhuang
null
null
0
89
2023-09-07T06:06:55
--- dataset_info: features: - name: id dtype: int64 - name: input_x sequence: sequence: float32 - name: input_y sequence: sequence: float32 - name: input_y_clean sequence: sequence: float32 - name: rtg sequence: float64 - name: status sequence: sequence: float32 - name: split_threshold sequence: sequence: float32 - name: split_dimension sequence: int64 splits: - name: train num_bytes: 236440000 num_examples: 10000 - name: validation num_bytes: 236440000 num_examples: 10000 download_size: 292435235 dataset_size: 472880000 --- # Dataset Card for "autotree_automl_10000_jannis_sgosdt_l256_dim10_d3_sd0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
844
[ [ -0.0361328125, -0.00948333740234375, 0.0206146240234375, 0.0177001953125, -0.01788330078125, 0.007617950439453125, 0.0382080078125, -0.006130218505859375, 0.0595703125, 0.030975341796875, -0.058349609375, -0.04742431640625, -0.050506591796875, 0.013259887695...