author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ydshieh | null | @article{DBLP:journals/corr/LinMBHPRDZ14,
author = {Tsung{-}Yi Lin and
Michael Maire and
Serge J. Belongie and
Lubomir D. Bourdev and
Ross B. Girshick and
James Hays and
Pietro Perona and
Deva Ramanan and
Piotr Doll{\'{a}}r and
C. Lawrence Zitnick},
title = {Microsoft {COCO:} Common Objects in Context},
journal = {CoRR},
volume = {abs/1405.0312},
year = {2014},
url = {http://arxiv.org/abs/1405.0312},
archivePrefix = {arXiv},
eprint = {1405.0312},
timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | COCO is a large-scale object detection, segmentation, and captioning dataset. | false | 937 | false | ydshieh/coco_dataset_script | 2022-02-14T17:32:43.000Z | null | false | 6414bae7a39b5f41feab2fd6a1cb773033254c93 | [] | [] | https://huggingface.co/datasets/ydshieh/coco_dataset_script/resolve/main/README.md | ## Usage
For testing purposes, you can use the hosted dummy dataset (`dummy_data`) as follows:
```python
import datasets
ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir="./dummy_data/")
```
To use the COCO dataset (2017), you need to download it manually first:
```bash
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/zips/test2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
```
Then to load the dataset:
```python
COCO_DIR = ...(path to the downloaded dataset directory)...
ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR)
``` |
yharyarias | null | null | null | false | 1 | false | yharyarias/tirads_tiroides | 2022-01-24T01:53:21.000Z | null | false | 3673fb0d96829eb005d6d0816ed0be21bbac249f | [] | [] | https://huggingface.co/datasets/yharyarias/tirads_tiroides/resolve/main/README.md | Thyroid ultrasound images, classified into 5 classes that correspond to the European EU-TIRADS scale, which consists of:
EU-TIRADS 1: no nodule
EU-TIRADS 2: benign
EU-TIRADS 3: low risk (oval, smooth margin, iso / hyperechoic, no high risk features)
EU-TIRADS 4: intermediate risk (oval, smooth margin, mildly hypoechoic, no high risk features)
EU-TIRADS 5: any high risk features (non-oval, irregular margin, microcalcifications, marked hypoechogenicity)
Ultrasound images of the thyroid that were taken from the ultrasound scanners of the FOSCAL/FOSUNAB clinic, as a final master's project for the Polytechnic University of Valencia, in collaboration with doctors Federico Lubinus and Boris Marconi, who together with Yhary Arias worked on the classification of these ultrasounds. The images are saved in DICOM format and then converted to PNG to make processing lighter.
The strategy for collecting and labeling the images was as follows: for each examination performed on patients with or without a possible diagnosis, only the images without personal or sensitive information were kept, all stored on a hard drive. The images were then pre-processed and their format changed, and finally they were mounted on a web page with a single view to facilitate classification by the doctors in charge of this arduous task.
The ultrasounds were classified into the 5 classes of the European EU-TIRADS scale listed above.
Risk of malignancy
EU-TIRADS 1: n/a
EU-TIRADS 2: 0%
EU-TIRADS 3: low risk (2-4%)
EU-TIRADS 4: intermediate risk (6-17%)
EU-TIRADS 5: high risk (26-87%)
References
1. Gilles Russ, Steen J. Bonnema, Murat Faik Erdogan, Cosimo Durante, Rose Ngu, Laurence Leenhardt. European Thyroid Association Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules in Adults: The EU-TIRADS. (2019) European Thyroid Journal. 6 (5): 225. doi:10.1159/000478927 - Pubmed
2. Gilles Russ, Bénédicte Royer, Claude Bigorgne, Agnès Rouxel, Marie Bienvenu-Perrard, Laurence Leenhardt. Prospective evaluation of thyroid imaging reporting and data system on 4550 nodules with and without elastography. (2013) European Journal of Endocrinology. 168 (5): 649. doi:10.1530/EJE-12-0936 - Pubmed
3. Jung Hyun Yoon, Kyunghwa Han, Eun-Kyung Kim, Hee Jung Moon, Jin Young Kwak. Diagnosis and Management of Small Thyroid Nodules: A Comparative Study with Six Guidelines for Thyroid Nodules. (2016) Radiology. 283 (2): 560-569. doi:10.1148/radiol.2016160641 - Pubmed
4. Ting Xu, Ya Wu, Run-Xin Wu, Yu-Zhi Zhang, Jing-Yu Gu, Xin-Hua Ye, Wei Tang, Shu-Hang Xu, Chao Liu, Xiao-Hong Wu. Validation and comparison of three newly-released Thyroid Imaging Reporting and Data Systems for cancer risk determination. (2019). Endocrine. 64 (2): 299. doi:10.1007/s12020-018-1817-8 - Pubmed
5. Grani, Giorgio, Lamartina, Livia, Ascoli, Valeria, Bosco, Daniela, Biffoni, Marco, Giacomelli, Laura, Maranghi, Marianna, Falcone, Rosa, Ramundo, Valeria, Cantisani, Vito, Filetti, Sebastiano, Durante, Cosimo. Reducing the Number of Unnecessary Thyroid Biopsies While Improving Diagnostic Accuracy: Toward the “Right” TIRADS. (2019) The Journal of Clinical Endocrinology & Metabolism. 104 (1): 95. doi:10.1210/jc.2018-01674 - Pubmed
6. Giorgio Grani, Livia Lamartina, Vito Cantisani, Marianna Maranghi, Piernatale Lucia, Cosimo Durante. Interobserver agreement of various thyroid imaging reporting and data systems. (2018) Endocrine Connections. 7 (1): 1. doi:10.1530/EC-17-0336 - Pubmed
Taken from: https://radiopaedia.org/articles/european-thyroid-association-tirads
*Citation Information*
@misc{tirads_tiroides:2022,
author = {Yhary Arias and Federico Lubinus and Boris Marconi},
title = {Thyroid Ultrasound Imaging Dataset},
thesistitle = {Sistema para la clasificación y reconocimiento de imágenes de ultrasonido en
tiroides, basado en técnicas de aprendizaje profundo para el apoyo en el proceso
de diagnóstico según la escala EU-TIRADS},
year = 2022
}
Bucaramanga, Santander, 2022 |
yhavinga | null | @article{JMLR:v21:20-074,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
} | A thoroughly cleaned version of the Dutch portion of the multilingual
colossal, cleaned version of Common Crawl's web crawl corpus (mC4) by AllenAI.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's mC4 dataset by AllenAI, with further cleaning
detailed in the repository README file. | false | 255 | false | yhavinga/mc4_nl_cleaned | 2022-10-25T07:28:22.000Z | mc4 | false | 8e6113cc20fe8ef7c4bc02a2b166fbb88f536a69 | [] | [
"arxiv:1910.10683",
"annotations_creators:no-annotation",
"language_creators:found",
"language:nl",
"language:en",
"license:odc-by",
"multilinguality:monolingual",
"multilinguality:en-nl",
"size_categories:120k",
"size_categories:1M<n<10M",
"size_categories:10M<n<100M",
"size_categories:100M<n... | https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- nl
- en
license:
- odc-by
multilinguality:
- monolingual
- en-nl
size_categories:
micro:
- 120k
tiny:
- 1M<n<10M
small:
- 10M<n<100M
medium:
- 10M<n<100M
large:
- 10M<n<100M
full:
- 100M<n<1B
source_datasets:
- extended
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4_nl_cleaned
---
# Dataset Card for Clean Dutch mC4
## Table of Contents
- [Dataset Card for Clean Dutch mC4](#dataset-card-for-clean-dutch-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Preprocessing](#preprocessing)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4).
While this dataset is monolingual, it is possible to download `en-nl` interleaved data, see the Dataset Config section below.
Based on the [Common Crawl dataset](https://commoncrawl.org).
The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).
### Preprocessing
The Dutch portion of mC4 was cleaned in a similar fashion as the English cleaned C4 version.
See [GitLab](https://gitlab.com/yhavinga/c4nlpreproc) for details.
In summary, the preprocessing procedure includes:
- Removing documents containing words from a selection of the [Dutch and English List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words).
- Removing sentences containing:
- Less than 3 words.
- A word longer than 250 characters.
- An end symbol not matching end-of-sentence punctuation.
- Strings associated with JavaScript code (e.g. `{`), lorem ipsum, or policy information in Dutch or English.
- Removing documents (after sentence filtering):
- Containing less than 5 sentences.
- Containing less than 500 or more than 50'000 characters.
- Not identified as prevalently Dutch by the `LangDetect` package.
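For illustration, the sentence- and document-level filters above can be sketched as follows. This is a rough sketch with our own function names, thresholds taken from the list above, and simplified checks; it is not the repository's actual implementation (see the GitLab link for that):

```python
def keep_sentence(sentence):
    """Apply the sentence-level filters described above (illustrative only)."""
    words = sentence.split()
    if len(words) < 3:                                   # fewer than 3 words
        return False
    if any(len(w) > 250 for w in words):                 # a word longer than 250 characters
        return False
    if not sentence.rstrip().endswith((".", "!", "?")):  # no end-of-sentence punctuation
        return False
    if "{" in sentence or "lorem ipsum" in sentence.lower():  # code / boilerplate markers
        return False
    return True

def keep_document(sentences):
    """Apply the document-level filters after sentence filtering; return the cleaned text or None."""
    kept = [s for s in sentences if keep_sentence(s)]
    if len(kept) < 5:                                    # fewer than 5 sentences
        return None
    text = " ".join(kept)
    if not (500 <= len(text) <= 50_000):                 # outside the character bounds
        return None
    return text
```

A real pipeline would add language detection (e.g. with `LangDetect`) and the bad-words list on top of these checks.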
Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch
shards of mC4 (1024 of ~220Mb train, 4 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence
tokenization and language detection. The total size of compressed `.json.gz` files is roughly halved after the procedure.
## Dataset Structure
### Data Instances
An example from the dataset:
```
{
'timestamp': '2019-02-22T15:37:25Z',
'url': 'https://ondernemingen.bnpparibasfortis.be/nl/artikel?n=vijf-gouden-tips-voor-succesvol-zaken-doen-met-japan',
'text': 'Japanse bedrijven zijn niet alleen hondstrouw aan hun leveranciers , ze betalen ook nog eens erg stipt. Alleen is het niet zo makkelijk er een voet tussen de deur te krijgen. Met de volgende tips hebt u alvast een streepje voor.\nIn Japan draait alles om vertrouwen. Neem voldoende tijd om een relatie op te bouwen.Aarzel niet om tijdig een lokale vertrouwenspersoon in te schakelen.\nJapan is een erg competitieve markt.Kwaliteit en prijs zijn erg belangrijk, u zult dus het beste van uzelf moeten geven. Gelukkig is de beloning groot. Japanse zakenlui zijn loyaal en betalen stipt!\nJapanners houden er eigenzinnige eisen op na. Kom dus niet aanzetten met uw standaardproducten voor de Europese markt. Zo moet een producent van diepvriesfrieten bijvoorbeeld perfect identieke frietjes kunnen leveren in mini- verpakkingen. Het goede nieuws is dat Japanners voor kwaliteit graag diep in hun buidel tasten.\nEn u dacht dat Europa lijdt aan reglementitis? Japanners kennen er ook wat van. Tal van voorschriften zeggen wat je wel en niet mag doen. Gelukkig zijn de regels helder geformuleerd.\nHet gebruik van het Engels is niet echt ingeburgerd in Japan. Betrek een tolk bij uw onderhandelingen en zorg voor correcte vertalingen van handleidingen of softwareprogramma’s.'
}
```
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
### Data Configs
To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages.
For Dutch, the whole corpus of scraped text was divided in `1032` jsonl files, `1024` for training following
the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz` and 4 for validation following the
naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS.
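Following this naming style, the shard file lists can be reproduced with a short sketch (file names only, for reference; the actual download happens through `datasets` or Git LFS):

```python
# Reconstruct the shard file names from the naming style described above.
train_shards = [f"c4-nl-cleaned.tfrecord-{i:05d}-of-01024.json.gz" for i in range(1024)]
validation_shards = [f"c4-nl-cleaned.tfrecord-{i:05d}-of-00004.json.gz" for i in range(4)]
```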
For ease of use under different storage capacities, the following incremental configs are available: (note: files on disk are compressed)
| config | train size (docs, words, download + preproc disk space) | validation size |
|:-------|--------------------------------------------------------:|----------------:|
| micro | 125k docs, 23M words (<1GB) | 16k docs |
| tiny | 6M docs, 2B words (6 GB + 15 GB) | 16k docs |
| small | 15M docs, 6B words (14 GB + 36 GB) | 16k docs |
| medium | 31M docs, 12B words (28 GB + 72 GB) | 32k docs |
| large | 47M docs, 19B words (42 GB + 108 GB) | 48k docs |
| full | 64M docs, 25B words (58 GB + 148 GB) | 64k docs |
For each config above there also exists a config `<name>_en_nl` that interleaves `nl` and `en` examples from the cleaned
`en` variant of C4.
You can load any config like this:
```python
from datasets import load_dataset
datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny', streaming=True)
print(datasets)
```
This will print
```
DatasetDict({
train: Dataset({
features: ['text', 'timestamp', 'url'],
num_rows: 6303893
})
validation: Dataset({
features: ['text', 'timestamp', 'url'],
num_rows: 16189
})
})
```
Since the configs are quite large, you may want to traverse them using the streaming mode available since Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_nl_full_stream = load_dataset('yhavinga/mc4_nl_cleaned', "full", split='train', streaming=True)
print(next(iter(mc4_nl_full_stream))) # Prints the example presented above
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Social Impact of Dataset
With more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language.
The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 39GB in size for its deduplicated variant, and contains vulgarity.
Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performance observed for the English language.
This can in turn have important repercussions for the development of commercial language technology applications for the Dutch language.
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will
inevitably reflect biases present in blog articles and comments on the Internet.
This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.
## Additional Information
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com), [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for
providing the `cleaned_it_mc4` example that shows how to upload a dataset to the Hugging Face Hub.
|
yluisfern | null | null | null | false | 1 | false | yluisfern/PBU | 2021-04-02T16:39:30.000Z | null | false | 9111d6987c89a76a1a640bfc661ccdb712e9e4cd | [] | [] | https://huggingface.co/datasets/yluisfern/PBU/resolve/main/README.md | https://www.geogebra.org/m/cwcveget
https://www.geogebra.org/m/b8dzxk6z
https://www.geogebra.org/m/nqanttum
https://www.geogebra.org/m/pd3g8a4u
https://www.geogebra.org/m/jw8324jz
https://www.geogebra.org/m/wjbpvz5q
https://www.geogebra.org/m/qm3g3ma6
https://www.geogebra.org/m/sdajgph8
https://www.geogebra.org/m/e3ghhcbf
https://www.geogebra.org/m/msne4bfm
https://www.geogebra.org/m/nmcv2te5
https://www.geogebra.org/m/hguqx6cn
https://www.geogebra.org/m/jnyvpgqu
https://www.geogebra.org/m/syctd97g
https://www.geogebra.org/m/nq9erdby
https://www.geogebra.org/m/au4har8c
https://network.aza.org/network/members/profile?UserKey=811de229-7f08-4360-863c-ac04181ba9c0
https://network.aza.org/network/members/profile?UserKey=31b495a0-36f7-4a50-ba3e-d76e3487278c
https://network.aza.org/network/members/profile?UserKey=753c0ddd-bded-4b03-8c68-11dacdd1f676
https://network.aza.org/network/members/profile?UserKey=db9d0a25-1615-4e39-b61f-ad68766095b3
https://network.aza.org/network/members/profile?UserKey=59279f52-50cf-4686-9fb0-9ab613211ead
https://network.aza.org/network/members/profile?UserKey=67b3ce20-cc3a-420f-8933-10796f301060
https://network.aza.org/network/members/profile?UserKey=f5e610c3-6400-4429-b42b-97eeeeb284a9
https://network.aza.org/network/members/profile?UserKey=ccda0739-f5f5-4ecc-a729-77c9a6825897
https://network.aza.org/network/members/profile?UserKey=3983471f-cf43-4a4a-90d3-148040f92dd9
https://network.aza.org/network/members/profile?UserKey=9f16d7a8-3502-4904-a99a-38362de78973
https://network.aza.org/network/members/profile?UserKey=961981d5-9743-44ac-8525-d4c8b708eb5a
https://network.aza.org/network/members/profile?UserKey=178276d7-c64d-408e-af52-96d1ebd549fc |
yonesuke | null | null | null | false | 1 | false | yonesuke/Ising2D | 2022-01-18T11:50:23.000Z | null | false | 06ee53dad2bab38ab0c45f13cd6d3c1c85d640ee | [] | [] | https://huggingface.co/datasets/yonesuke/Ising2D/resolve/main/README.md | - hoge
- fuga |
yonesuke | null | null | null | false | 1 | false | yonesuke/Vicsek | 2022-02-17T05:34:34.000Z | null | false | e5a3648ec4ec400d298640b5ee252ee82dc5eebe | [] | [
"license:mit"
] | https://huggingface.co/datasets/yonesuke/Vicsek/resolve/main/README.md | ---
license: mit
---
|
ysharma | null | null | null | false | 1 | false | ysharma/rickandmorty | 2022-01-02T00:45:54.000Z | null | false | 3368ab40c719d3fc556a2d11b8c1d32fac9278be | [] | [] | https://huggingface.co/datasets/ysharma/rickandmorty/resolve/main/README.md | This dataset contains scripts for all episodes of Rick and Morty seasons 1, 2, and 3.
Columns: index, season no., episode no., episode name, (character) name, line (dialogue) |
yuanchuan | null | @techreport{kee2021,
author = {Yuan Chuan Kee},
title = {Synthesis of a large dataset of annotated reference strings for developing citation parsers},
institution = {National University of Singapore},
year = {2021}
} | A repository of reference strings annotated using CSL processor using citations obtained from various sources. | false | 1 | false | yuanchuan/annotated_reference_strings | 2022-10-26T14:53:23.000Z | null | false | 86de7d45936fe0885b6783dff6bdd6e6eca8eff0 | [] | [
"annotations_creators:other",
"language_creators:found",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:parsing"
] | https://huggingface.co/datasets/yuanchuan/annotated_reference_strings/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- parsing
pretty_name: Annotated Reference Strings
---
# Dataset Card for annotated_reference_strings
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.github.com/kylase](https://www.github.com/kylase)
- **Repository:** [https://www.github.com/kylase](https://www.github.com/kylase)
- **Point of Contact:** [Yuan Chuan Kee](https://www.github.com/kylase)
### Dataset Summary
The `annotated_reference_strings` dataset comprises millions of annotated reference strings, i.e. each token of a string has an associated label such as author, title, year, etc.
These strings are synthesized by running a citation processor on millions of citations obtained from various sources, spanning different scientific domains.
### Supported Tasks
This dataset can be used for structure prediction.
### Languages
The dataset is composed of reference strings that are in English.
## Dataset Structure
### Data Instances
```json
{
"source": "pubmed",
"lang": "en",
"entry_type": "article",
"doi_prefix": "pubmed19n0001",
"csl_style": "annual-reviews",
"content": "<citation-number>8.</citation-number> <author>Mohr W.</author> <year>1977.</year> <title>[Morphology of bone tumors. 2. Morphology of benign bone tumors].</title> <container-title>Aktuelle Probleme in Chirurgie und Orthopadie.</container-title> <volume>5:</volume> <page>29–42</page>"
}
```
#### Important Note
1. Each citation is rendered to _at most_ **17** CSL styles. Therefore, there will be near duplicates.
2. All characters (including punctuation) of a segment (**a segment consists of 1 or more tokens**) are enclosed by tag(s).
1. Only tokens that act as "conjunctions" are not enclosed in tags. These tokens will be labelled as `other`.
3. There will be instances in which a segment is enclosed by more than one tag, e.g. `<issued><year>2021</year></issued>`. This depends on how the style's author(s) defined the rendering.
### Data Fields
- `source`: Describe the source of the citation. `{pubmed, jstor, crossref}`
- `lang`: Describe the language of the citation. `{en}`
- `entry_type`: Describe the BibTeX entry type. `{article, book, inbook, misc, techreport, phdthesis, incollection, inproceedings}`
- `doi_prefix`: For JSTOR and CrossRef, it is the prefix of the DOI. For PubMed, it is the directory (e.g. `pubmed19nXXXX` where `XXXX` is 4 digits) of which the citation is generated from.
- `csl_style`: The CSL style which the citation is rendered as.
- `content`: The rendered citation of a specific style with each segment enclosed by tags named after the CSL variables
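Because each segment of `content` is enclosed in tags named after CSL variables, a string can be unrolled into (token, label) pairs for token classification. A minimal sketch (the regex and function name are ours; nested tags, which the notes above mention, are not handled here):

```python
import re

# One tagged segment, or a bare token outside any tag (labelled 'other').
TAG_RE = re.compile(r"<(?P<tag>[\w-]+)>(?P<seg>.*?)</(?P=tag)>|(?P<other>\S+)")

def to_token_labels(content):
    """Unroll an annotated reference string into (token, label) pairs.

    Nested tags (e.g. <issued><year>...</year></issued>) are not handled.
    """
    pairs = []
    for m in TAG_RE.finditer(content):
        if m.group("tag"):
            pairs.extend((tok, m.group("tag")) for tok in m.group("seg").split())
        else:
            pairs.append((m.group("other"), "other"))
    return pairs
```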
### Data Splits
Data splits are not available yet.
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The citations that are used to generate these reference strings are obtained from 3 main sources:
- [PubMed](https://www.nlm.nih.gov/databases/download/pubmed_medline.html) (2019 Baseline)
- CrossRef via [Open Academic Graph v2](https://www.microsoft.com/en-us/research/project/open-academic-graph/)
- JSTOR Sample Datasets (not available online as of publication date)
If the citation is not in BibTeX format, [bibutils](https://sourceforge.net/p/bibutils/home/Bibutils/) is used to convert it to BibTeX.
#### Who are the source language producers?
The manner in which the citations are rendered as reference strings is based on rules/specifications dictated by the publisher.
[Citation Style Language](https://citationstyles.org/) (CSL) is an established standard which such specifications are prescribed.
Thousands of citation styles are available.
### Annotations
#### Annotation process
The annotation process involves 2 main interventions:
1. Modification of the styles' CSL specification to inject the CSL variable names as part of the render process
2. Sanitization of the rendered strings using regular expressions to ensure all tokens and characters are enclosed in the tags
#### Who are the annotators?
The original CSL specification are available on [GitHub](https://github.com/citation-style-language/styles).
The modification of the styles and the sanitization process are done by the author of this work.
## Additional Information
### Licensing Information
This dataset is licensed under [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
This dataset is a product of a Master Project done in the National University of Singapore.
If you are using it, please cite the following:
```bibtex
@techreport{kee2021,
author = {Yuan Chuan Kee},
title = {Synthesis of a large dataset of annotated reference strings for developing citation parsers},
institution = {National University of Singapore},
year = {2021}
}
```
### Contributions
Thanks to [@kylase](https://github.com/kylase) for adding this dataset.
|
z-uo | null | null | null | false | 1 | false | z-uo/female-LJSpeech-italian | 2022-10-23T04:56:44.000Z | null | false | 14ab48911e45af72b8aec9f6eda9906694c3f094 | [] | [
"task_ids:tts",
"language:it",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/z-uo/female-LJSpeech-italian/resolve/main/README.md | ---
task_ids:
- tts
language:
- it
task_categories:
- tts
multilinguality:
- monolingual
---
# Italian Female Voice
This dataset is an Italian version of [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) that merges all female audio of the same speaker found in the [M-AILABS Speech Dataset](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/).
This dataset contains 8h 23m of a single speaker recorded at 16000 Hz. This is a valid choice for training an Italian TTS deep model with a female voice. |
z-uo | null | null | null | false | 1 | false | z-uo/male-LJSpeech-italian | 2022-10-23T04:57:26.000Z | null | false | ac9f1f8c8831eb367b460ff1c87b991ad1996519 | [] | [
"task_ids:tts",
"language:it",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/z-uo/male-LJSpeech-italian/resolve/main/README.md | ---
task_ids:
- tts
language:
- it
task_categories:
- tts
multilinguality:
- monolingual
---
# Italian Male Voice
This dataset is an Italian version of [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) that merges all male audio of the same speaker found in the [M-AILABS Speech Dataset](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/).
This dataset contains 31h 45m of a single speaker recorded at 16000 Hz. This is a valid choice for training an Italian TTS deep model with a male voice. |
z-uo | null | null | null | false | 2 | false | z-uo/squad-it | 2022-10-25T10:01:57.000Z | null | false | d73d22a877588114280072b6639292f9c3a99e5b | [] | [
"language:it",
"multilinguality:monolingual",
"size_categories:8k<n<10k",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/z-uo/squad-it/resolve/main/README.md | ---
language:
- it
multilinguality:
- monolingual
size_categories:
- 8k<n<10k
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Squad-it
This dataset is an adapted version of [squad-it](https://github.com/crux82/squad-it), prepared for training HuggingFace models.
It contains:
- train samples: 87599
- test samples : 10570
This dataset is for question answering and its format is the following:
```
[
{
"answers": [
{
"answer_start": [1],
"text": ["Questo è un testo"]
}
],
"context": "Questo è un testo relativo al contesto.",
"id": "1",
"question": "Questo è un testo?",
"title": "train test"
}
]
```
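In this SQuAD-style format, `answer_start` is a character offset into `context`, so the answer span can be recovered directly. A small illustrative check (the sample below is made up; the offsets in the dummy record above are placeholders):

```python
def extract_answer(sample):
    """Recover the answer text from the context via its character offset (SQuAD v1.1 style)."""
    context = sample["context"]
    answer = sample["answers"][0]
    start = answer["answer_start"][0]
    text = answer["text"][0]
    # The annotated span must match the substring at the given offset.
    return context[start:start + len(text)]

sample = {
    "answers": [{"answer_start": [23], "text": ["Roma"]}],
    "context": "Il Colosseo si trova a Roma.",
    "id": "42",
    "question": "Dove si trova il Colosseo?",
    "title": "esempio",
}
```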
It can be used to train many models like T5, BERT, DistilBERT... |
zhoujun | null | null | null | false | 1 | false | zhoujun/hitab | 2022-02-08T08:35:57.000Z | null | false | beefaac934f54882041d2840222dbd0b7f48ea34 | [] | [] | https://huggingface.co/datasets/zhoujun/hitab/resolve/main/README.md | annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- tableqa, data2text
task_ids:
- tableqa |
zhufy | null | @article{Artetxe:etal:2019,
author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
title = {On the cross-lingual transferability of monolingual representations},
journal = {CoRR},
volume = {abs/1910.11856},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.11856}
} | XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering
performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set
of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into eleven languages: Spanish, German,
Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi and Romanian. Consequently, the dataset is entirely parallel
across 12 languages. | false | 1 | false | zhufy/xquad_split | 2022-02-24T02:29:43.000Z | null | false | b37680e9413ca148de6f60b3c4b9c956a11974c4 | [] | [] | https://huggingface.co/datasets/zhufy/xquad_split/resolve/main/README.md |
# Dataset Card
## Dataset Summary
We split [the original XQuAD dataset](https://github.com/deepmind/xquad) into subsets.
We keep the original data format.
## Supported Tasks
extractive question answering
## Language
Thai
## Dataset Split
There are 876/161/153 question-answer pairs from 34/7/7 articles for train/validation/test, respectively.
|
zwang199 | null | null | null | false | 1 | false | zwang199/autonlp-data-traffic_nlp_binary | 2022-10-25T10:02:03.000Z | null | false | c574d814c1502e2cdbe22ad61ae0e56013f08a9a | [] | [
"language:en",
"task_categories:text-classification"
] | https://huggingface.co/datasets/zwang199/autonlp-data-traffic_nlp_binary/resolve/main/README.md | ---
language:
- en
task_categories:
- text-classification
---
# AutoNLP Dataset for project: traffic_nlp_binary
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project traffic_nlp_binary.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "1 train is still delayed in both directions",
"target": 1
},
{
"text": "maybe there was no train traffic ????. i know the feeling.",
"target": 1
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=2, names=['0', '1'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Data Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2195 |
| valid | 549 |
|
fancyerii | null | null | null | false | 2 | false | fancyerii/test | 2022-10-25T10:02:14.000Z | null | false | ad25d57e9499f8417e25ac06dd57f6010786aa65 | [] | [
"size_categories:10K<n<100K",
"task_categories:text-classification",
"task_ids:semantic-similarity-classification"
] | https://huggingface.co/datasets/fancyerii/test/resolve/main/README.md | ---
annotations_creators: []
language_creators: []
language: []
license: []
multilinguality: []
pretty_name: demo
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: [HomePage](https://fancyerii.github.io)**
- **Repository: fancyerii**
- **Paper: No Paper**
- **Leaderboard: No**
- **Point of Contact:**
### Dataset Summary
A test dataset (测试数据集).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Chinese (中文)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fancyerii](https://github.com/fancyerii) for adding this dataset.
|
huggan | null | null | null | false | 58 | false | huggan/anime-faces | 2022-03-22T10:01:22.000Z | null | false | 67ebcf8c69b45feb3883d695f04227078a6c9da9 | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/huggan/anime-faces/resolve/main/README.md | ---
license: cc0-1.0
---
# Dataset Card for anime-faces
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Repository:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** https://github.com/Mckinsey666
### Dataset Summary
This is a dataset consisting of 21551 anime faces scraped from www.getchu.com, which are then cropped using the anime face detection algorithm in https://github.com/nagadomi/lbpcascade_animeface. All images are resized to 64 * 64 for the sake of convenience. Please also cite the two sources when using this dataset.
Some outliers are still present in the dataset:
- Bad cropping results
- Some non-human faces
Feel free to contribute to this dataset by adding images of similar quality or adding image labels.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
The dataset has a `data` folder with PNG files inside.
### Data Splits
Only a training set is provided.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
---
annotations_creators:
- found
language_creators:
- found
languages:
- unknown
licenses:
- unknown
multilinguality:
- unknown
pretty_name: anime-faces
size_categories:
- unknown
source_datasets:
- original
task_categories:
- image-classification
task_ids: []
--- |
GEM-submissions | null | null | null | false | 2 | false | GEM-submissions/lewtun__this-is-a-test__1646314818 | 2022-03-03T13:40:29.000Z | null | false | f0f49db9aeb2fe8e7640ae7ee10da1582ecd9569 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test__1646314818/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test__1646316929 | 2022-03-03T14:15:35.000Z | null | false | 2a1eb941a4459be7ac03c51e4c2875d938aee9bf | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test__1646316929/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test
|
firzens | null | null | null | false | 3 | false | firzens/authors | 2022-03-04T07:48:26.000Z | null | false | fa900453f521486ba24c32a3045e2ee7ccd2a40f | [] | [] | https://huggingface.co/datasets/firzens/authors/resolve/main/README.md | |
NLPC-UOM | null | null | null | false | 1 | false | NLPC-UOM/Sinhala-Tamil-Aligned-Parallel-Corpus | 2022-10-25T10:02:16.000Z | null | false | fdf66398fed02051156c3b34d80b2f4fbe5f01f4 | [] | [
"language:si",
"license:mit"
] | https://huggingface.co/datasets/NLPC-UOM/Sinhala-Tamil-Aligned-Parallel-Corpus/resolve/main/README.md | ---
annotations_creators: []
language:
- si
license:
- mit
--- |
NLPC-UOM | null | null | null | false | 2 | false | NLPC-UOM/AnanyaSinhalaNERDataset | 2022-10-25T10:02:18.000Z | null | false | d8ff10fc5ffd05877bf61ea19f0833565c5a6fd8 | [] | [
"language:si",
"license:mit"
] | https://huggingface.co/datasets/NLPC-UOM/AnanyaSinhalaNERDataset/resolve/main/README.md | # AnanyaSinhalaNERDataset
---
annotations_creators: []
language:
- si
license:
- mit
---
This is part of the dataset used in the paper: Manamini, S.A.P.M., Ahamed, A.F., Rajapakshe, R.A.E.C., Reemal, G.H.A., Jayasena, S., Dias, G.V. and Ranathunga, S., 2016, April. Ananya-a Named-Entity-Recognition (NER) system for Sinhala language. In 2016 Moratuwa Engineering Research Conference (MERCon) (pp. 30-35). IEEE.
|
openclimatefix | null | @InProceedings{ocf:gfs,
title = {GFS Forecast Dataset},
author={Jacob Bieker},
year={2022}
} | This dataset consists of various NOAA datasets related to operational forecasts, including FNL Analysis files,
GFS operational forecasts, and the raw observations used to initialize the grid. | false | 5 | false | openclimatefix/gfs-reforecast | 2022-10-28T10:25:32.000Z | null | false | 8596eadefb500d1943e7b5e04a78a88ab065eacc | [] | [] | https://huggingface.co/datasets/openclimatefix/gfs-reforecast/resolve/main/README.md | [Needs More Information]
# Dataset Card for GFS-Reforecast
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org)
### Dataset Summary
This dataset consists of various sets of historical operational GFS forecasts and analysis files from 2016-2022. The analysis files and forecasts are initialized at 00, 06, 12, and 18 UTC every day and run for multiple hours. Additionally, the raw observations used to initialize the analysis and forecasts are also included. The dataset is being expanded over time as more historical data and observations are processed.
The `data/forecasts/GFSv16/` folder holds the historical operational forecasts out to 48 hours from initialization, on all pressure levels, and for all variables that are present in every timestep (so not any accumulated values). The data is all stored as zipped Zarr stores, openable by xarray.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This dataset was constructed to help create a dataset similar to, and expanded from, the one used in the Keisler (2022) paper, where graph neural networks were used for weather forecasting.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
US Government License, no restrictions
### Citation Information
@article{gfs,
  author = {Jacob Bieker},
  title = {GFS NWP Weather Dataset},
  year = {2022}
}
nlpaueb | null | @inproceedings{loukas-etal-2022-finer,
title = "{FiNER: Financial Numeric Entity Recognition for XBRL Tagging}",
author = "Loukas, Lefteris and
Fergadiotis, Manos and
Chalkidis, Ilias and
Spyropoulou, Eirini and
Malakasiotis, Prodromos and
Androutsopoulos, Ion and
Paliouras George",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
month = "may",
year = "2022",
publisher = "Association for Computational Linguistics",
} | FiNER-139 is a named entity recognition dataset consisting of 10K annual
and quarterly English reports (filings) of publicly traded companies
downloaded from the U.S. Securities and Exchange Commission (SEC)
annotated with 139 XBRL tags in the IOB2 format. | false | 392 | false | nlpaueb/finer-139 | 2022-10-23T05:05:03.000Z | null | false | 080f677a026e304c38666d759ef625d621dc8cb9 | [] | [
"arxiv:2203.06482",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/nlpaueb/finer-139/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: FiNER-139
size_categories:
- 1M<n<10M
source_datasets: []
task_categories:
- structure-prediction
- named-entity-recognition
- entity-extraction
task_ids:
- named-entity-recognition
---
# Dataset Card for FiNER-139
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [SEC-BERT](#sec-bert)
- [About Us](#about-us)
## Dataset Description
- **Homepage:** [FiNER](https://github.com/nlpaueb/finer)
- **Repository:** [FiNER](https://github.com/nlpaueb/finer)
- **Paper:** [FiNER, Loukas et al. (2022)](https://arxiv.org/abs/2203.06482)
- **Point of Contact:** [Manos Fergadiotis](mailto:fergadiotis@aueb.gr)
### Dataset Summary
<div style="text-align: justify">
<strong>FiNER-139</strong> is comprised of 1.1M sentences annotated with <strong>eXtensive Business Reporting Language (XBRL)</strong> tags extracted from annual and quarterly reports of publicly-traded companies in the US.
Unlike other entity extraction tasks, like named entity recognition (NER) or contract element extraction, which typically require identifying entities of a small set of common types (e.g., persons, organizations), FiNER-139 uses a much larger label set of <strong>139 entity types</strong>.
Another important difference from typical entity extraction is that FiNER focuses on numeric tokens, with the correct tag depending mostly on context, not the token itself.
</div>
### Supported Tasks
<div style="text-align: justify">
To promote transparency among shareholders and potential investors, publicly traded companies are required to file periodic financial reports annotated with tags from the eXtensive Business Reporting Language (XBRL), an XML-based language, to facilitate the processing of financial information.
However, manually tagging reports with XBRL tags is tedious and resource-intensive.
We, therefore, introduce <strong>XBRL tagging</strong> as a <strong>new entity extraction task</strong> for the <strong>financial domain</strong> and study how financial reports can be automatically enriched with XBRL tags.
To facilitate research towards automated XBRL tagging we release FiNER-139.
</div>
### Languages
**FiNER-139** is compiled from approximately 10k annual and quarterly **English** reports.
## Dataset Structure
### Data Instances
This is a "train" split example:
```python
{
'id': 40
'tokens': ['In', 'March', '2014', ',', 'the', 'Rialto', 'segment', 'issued', 'an', 'additional', '$', '100', 'million', 'of', 'the', '7.00', '%', 'Senior', 'Notes', ',', 'at', 'a', 'price', 'of', '102.25', '%', 'of', 'their', 'face', 'value', 'in', 'a', 'private', 'placement', '.']
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 0, 0, 0, 41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
**id**: ID of the example <br>
**tokens**: List of tokens for the specific example. <br>
**ner_tags**: List of tags for each token in the example. Tags are provided as integer classes.<br>
If you want to use the class names you can access them as follows:
```python
import datasets
finer_train = datasets.load_dataset("nlpaueb/finer-139", split="train")
finer_tag_names = finer_train.features["ner_tags"].feature.names
```
**finer_tag_names** contains a list of class names corresponding to the integer classes e.g.
```
0 -> "O"
1 -> "B-AccrualForEnvironmentalLossContingencies"
```
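Since the tags follow the IOB2 scheme (each entity starts with a `B-` tag and continues with `I-` tags; see the Dataset Creation section), the integer tags can be mapped to names and then grouped into entity spans. Below is a minimal sketch in plain Python; the tag names in the example are illustrative, not necessarily exact FiNER-139 labels, and real integer tags should first be mapped through **finer_tag_names** as shown above:

```python
# Minimal IOB2 decoder: groups (token, tag name) pairs into labeled entity spans.
# The tag names used in the example below are illustrative.
def iob2_spans(tokens, tag_names):
    spans, current = [], None
    for token, tag in zip(tokens, tag_names):
        if tag.startswith("B-"):          # a new entity starts here
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)      # the open entity continues
        else:                             # "O", or an I- tag that breaks the span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(words)) for label, words in spans]

tokens = ["issued", "$", "100", "million", "of", "7.00", "%", "Senior", "Notes"]
tags = ["O", "O", "B-DebtInstrumentFaceAmount", "O", "O",
        "B-DebtInstrumentInterestRateStatedPercentage", "O", "O", "O"]
print(iob2_spans(tokens, tags))
# [('DebtInstrumentFaceAmount', '100'), ('DebtInstrumentInterestRateStatedPercentage', '7.00')]
```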
### Data Splits
| Training | Validation | Test
| -------- | ---------- | -------
| 900,384 | 112,494 | 108,378
## Dataset Creation
### Curation Rationale
The dataset was curated by [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482) <br>
### Source Data
#### Initial Data Collection and Normalization
<div style="text-align: justify">
FiNER-139 is compiled from approximately 10k annual and quarterly English reports (filings) of publicly traded companies downloaded from the [US Securities
and Exchange Commission's (SEC)](https://www.sec.gov/) [Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/edgar.shtml) system.
The reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approximately 6k entity types from the US-GAAP taxonomy. FiNER-139 is annotated with the 139 most frequent XBRL entity types with at least 1,000 appearances.
We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. We used the <strong>IOB2</strong> annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.
</div>
### Annotations
#### Annotation process
<div style="text-align: justify">
All the examples were annotated by professional auditors as required by the Securities & Exchange Commission (SEC) legislation.
Even though the gold XBRL tags come from professional auditors there are still some discrepancies. Consult [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482), (Section 9.4) for more details
</div>
#### Who are the annotators?
Professional auditors
### Personal and Sensitive Information
The dataset contains publicly available annual and quarterly reports (filings)
## Additional Information
### Dataset Curators
[Loukas et al. (2022)](https://arxiv.org/abs/2203.06482)
### Licensing Information
<div style="text-align: justify">
Access to SEC's EDGAR public database is free, allowing research of public companies' financial information and operations by reviewing the filings the companies makes with the SEC.
</div>
### Citation Information
If you use this dataset cite the following
```
@inproceedings{loukas-etal-2022-finer,
title = {FiNER: Financial Numeric Entity Recognition for XBRL Tagging},
author = {Loukas, Lefteris and
Fergadiotis, Manos and
Chalkidis, Ilias and
Spyropoulou, Eirini and
Malakasiotis, Prodromos and
Androutsopoulos, Ion and
Paliouras George},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)},
publisher = {Association for Computational Linguistics},
location = {Dublin, Republic of Ireland},
year = {2022},
url = {https://arxiv.org/abs/2203.06482}
}
```
## SEC-BERT
<img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="SEC-BERT" width="400"/>
<div style="text-align: justify">
We also pre-train our own BERT models (<strong>SEC-BERT</strong>) for the financial domain, intended to assist financial NLP research and FinTech applications. <br>
<strong>SEC-BERT</strong> consists of the following models:
* [**SEC-BERT-BASE**](https://huggingface.co/nlpaueb/sec-bert-base): Same architecture as BERT-BASE trained on financial documents.
* [**SEC-BERT-NUM**](https://huggingface.co/nlpaueb/sec-bert-num): Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token handling all numeric expressions in a uniform manner, disallowing their fragmentation
* [**SEC-BERT-SHAPE**](https://huggingface.co/nlpaueb/sec-bert-shape): Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.
These models were pre-trained on 260,773 10-K filings (annual reports) from 1993-2019, publicly available at [U.S. Securities and Exchange Commission (SEC)](https://www.sec.gov/)
</div>
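As a rough illustration of the `[NUM]` and shape pseudo-token preprocessing described above, the following is an assumed reimplementation (not the exact code used to pre-train SEC-BERT):

```python
import re

# Matches plain numeric tokens such as "100", "53.2" or "40,200.5".
# This pattern is an assumption for illustration, not SEC-BERT's actual rule.
_NUMERIC = re.compile(r"[\d,]*\d(\.\d+)?")

def to_num_token(token: str) -> str:
    """Replace any numeric token with the [NUM] pseudo-token (SEC-BERT-NUM style)."""
    return "[NUM]" if _NUMERIC.fullmatch(token) else token

def to_shape_token(token: str) -> str:
    """Replace a numeric token with its shape, e.g. '53.2' -> '[XX.X]' (SEC-BERT-SHAPE style)."""
    if _NUMERIC.fullmatch(token):
        return "[" + re.sub(r"\d", "X", token) + "]"
    return token

print(to_shape_token("53.2"))      # [XX.X]
print(to_shape_token("40,200.5"))  # [XX,XXX.X]
print(to_num_token("40,200.5"))    # [NUM]
print(to_shape_token("Senior"))    # Senior
```

Applying such a mapping before tokenization keeps numeric expressions of known shapes as single tokens instead of letting the subword tokenizer fragment them.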
## About Us
<div style="text-align: justify">
[**AUEB's Natural Language Processing Group**](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
</div>
[Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) |
GEM-submissions | null | null | null | false | 3 | false | GEM-submissions/ratishsp__seqplan__1646397329 | 2022-03-04T12:35:32.000Z | null | false | 9283dd0d667c67679d54ae59bf871e765e81a8d7 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:SeqPlan",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/ratishsp__seqplan__1646397329/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: SeqPlan
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: SeqPlan
|
GEM-submissions | null | null | null | false | 3 | false | GEM-submissions/ratishsp__seqplan__1646397829 | 2022-03-14T09:21:16.000Z | null | false | 376f8f130939ea4c01e718c71e2cf8f88577e5ef | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:SeqPlan - RotoWire",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/ratishsp__seqplan__1646397829/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: SeqPlan - RotoWire
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: SeqPlan - RotoWire
|
Alvenir | null | null | Dataset of a little bit more than 5hours primarily intended as an evaluation dataset for Danish. | false | 45 | false | Alvenir/alvenir_asr_da_eval | 2022-06-16T09:13:33.000Z | null | false | 4bbf7c8537c8d75ea9b57ec23b4e33505d365cce | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval/resolve/main/README.md | ---
license: cc-by-4.0
---
# Dataset Card alvenir_asr_da_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Prompts/sentence selection](#prompts/sentence-selection)
- [Recording](#recording)
- [Evaluation](#evaluation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://alvenir.ai
- **Repository:** https://github.com/danspeech/alvenir-asr-da-eval/
### Dataset Summary
This dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training but the amount is very limited.
The dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours, spread across 50 speakers aged 20-60 years. The data was collected by a third-party vendor through their software and people. All recordings have been validated.
## Dataset Structure
### Data Instances
A data point consists of a path to the audio file, called path and its sentence. Additional fields will eventually be added such as age and gender.
```python
{'audio': {'path': 'some_path.wav',
           'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32),
           'sampling_rate': 16000}}
```
### Data Fields
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence the user was prompted to speak
### Data Splits
Since the idea behind the dataset is for it to be used as a test/eval ASR dataset for Danish, there is only a test split.
## Dataset Creation
### Prompts/sentence selection
The sentences used for prompts were gathered from the Danish part of OpenSubtitles (OSS) (need reference) and Wikipedia (WIKI). The OSS prompts were sampled randomly across the dataset, making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on Wikipedia and then randomly sampling an equal number of unique sentences from each topic. All sentences were manually inspected.
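The per-topic sampling described above can be sketched roughly as follows; this is an illustrative reconstruction using stdlib Python, not the actual selection code:

```python
import random

def sample_prompts_per_topic(sentences_by_topic, n_per_topic, seed=0):
    """Sample an equal number of unique sentences from each topic."""
    rng = random.Random(seed)
    selected = []
    for topic, sentences in sorted(sentences_by_topic.items()):
        unique = sorted(set(sentences))       # enforce prompt uniqueness within a topic
        selected.extend(rng.sample(unique, n_per_topic))
    return selected

# Hypothetical toy topics standing in for the 30 Wikipedia topics.
topics = {
    "history": ["sentence a", "sentence b", "sentence c"],
    "science": ["sentence d", "sentence e", "sentence f"],
}
prompts = sample_prompts_per_topic(topics, n_per_topic=2)
assert len(prompts) == 4 and len(set(prompts)) == 4
```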
### Recording
50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.
### Evaluation
All recordings were evaluated by a third party to confirm alignment between audio and text.
### Personal and Sensitive Information
The dataset consists of people who have given their voice to the dataset for ASR purposes. You agree to not attempt to determine the identity of any of the speakers in the dataset.
### Licensing Information
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/)
|
google | null | @article{conneau2022xtreme,
title={XTREME-S: Evaluating Cross-lingual Speech Representations},
author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others},
journal={arXiv preprint arXiv:2203.10752},
year={2022}
} | XTREME-S covers four task families: speech recognition, classification, speech-to-text translation and retrieval. Covering 102
languages from 10+ language families, 3 different domains and 4
task families, XTREME-S aims to simplify multilingual speech
representation evaluation, as well as catalyze research in “universal” speech representation learning. | false | 613 | false | google/xtreme_s | 2022-07-28T12:47:02.000Z | librispeech-1 | false | 3cf59334aa52a74c008a67a3de30f98dd8a28118 | [] | [
"arxiv:2203.10752",
"arxiv:2205.12446",
"arxiv:2007.10310",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:afr",
"language:amh",
"language:ar... | https://huggingface.co/datasets/google/xtreme_s/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- cat
- ceb
- cmn
- ces
- cym
- dan
- deu
- ell
- eng
- spa
- est
- fas
- ful
- fin
- tgl
- fra
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ind
- ibo
- isl
- ita
- jpn
- jav
- kat
- kam
- kea
- kaz
- khm
- kan
- kor
- ckb
- kir
- ltz
- lug
- lin
- lao
- lit
- luo
- lav
- mri
- mkd
- mal
- mon
- mar
- msa
- mlt
- mya
- nob
- npi
- nld
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- pus
- por
- ron
- rus
- bul
- snd
- slk
- slv
- sna
- som
- srp
- swe
- swh
- tam
- tel
- tgk
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zul
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: librispeech-1
pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
(XTREME-S) benchmark is a benchmark designed to evaluate speech representations
across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.'
size_categories:
- 10K<n<100K
source_datasets:
- extended|multilingual_librispeech
- extended|covost2
task_categories:
- automatic-speech-recognition
- speech-processing
task_ids:
- speech-recognition
---
# XTREME-S
## Dataset Description
- **Fine-Tuning script:** [research-projects/xtreme-s](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s)
- **Paper:** [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752)
- **Leaderboard:** [TODO(PVP)]()
- **FLEURS amount of disk used:** 350 GB
- **Multilingual Librispeech amount of disk used:** 2700 GB
- **Voxpopuli amount of disk used:** 400 GB
- **Covost2 amount of disk used:** 70 GB
- **Minds14 amount of disk used:** 5 GB
- **Total amount of disk used:** ca. 3500 GB
The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.
***TL;DR: XTREME-S is the first speech benchmark that is diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code.
An easy-to-use and flexible fine-tuning script is provided and actively maintained.***
XTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
## Design principles
### Diversity
XTREME-S aims for task, domain and language
diversity. Tasks should be diverse and cover several domains to
provide a reliable evaluation of model generalization and
robustness to noisy naturally-occurring speech in different
environments. Languages should be diverse to ensure that
models can adapt to a wide range of linguistic and phonological
phenomena.
### Accessibility
The sub-dataset for each task can be downloaded
with a **single line of code** as shown in [Supported Tasks](#supported-tasks).
Each task is available under a permissive license that allows the use and redistribution
of the data for research purposes. Tasks have been selected based on their usage by
pre-existing multilingual pre-trained models, for simplicity.
### Reproducibility
We produce fully **open-sourced, maintained and easy-to-use** fine-tuning scripts
for each task as shown under [Fine-tuning Example](#fine-tuning-and-evaluation-example).
XTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use.
In general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.
## Fine-tuning and Evaluation Example
We provide a fine-tuning script under [**research-projects/xtreme-s**](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s).
The fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any [Hugging Face model](https://huggingface.co/models) on XTREME-S.
The example script is actively maintained by [@anton-l](https://github.com/anton-l) and [@patrickvonplaten](https://github.com/patrickvonplaten). Feel free
to reach out via issues or pull requests on GitHub if you have any questions.
## Leaderboards
The leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().
## Supported Tasks
Note that the supported tasks focus on the linguistic aspects of speech,
while non-linguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are **not** evaluated.
<p align="center">
<img src="https://github.com/patrickvonplaten/scientific_images/raw/master/xtreme_s.png" alt="Datasets used in XTREME"/>
</p>
### 1. Speech Recognition (ASR)
We include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.
#### FLEURS-ASR
*FLEURS-ASR* is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
#### Multilingual LibriSpeech (MLS)
*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge, the training data is limited to 10-hour splits.
```py
from datasets import load_dataset
mls = load_dataset("google/xtreme_s", "mls.pl") # for Polish
# to download all data for multi-lingual fine-tuning uncomment following line
# mls = load_dataset("google/xtreme_s", "mls.all")
# see structure
print(mls)
# load audio sample on the fly
audio_input = mls["train"][0]["audio"] # first decoded audio sample
transcription = mls["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### VoxPopuli
*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.
**Note: loading any VoxPopuli configuration downloads the whole ~100 GB dataset, since the languages are entangled with each other. Given the size, it may not be worth testing it here.**
```py
from datasets import load_dataset
voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.ro") # for Romanian
# to download all data for multi-lingual fine-tuning uncomment following line
# voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.all")
# see structure
print(voxpopuli)
# load audio sample on the fly
audio_input = voxpopuli["train"][0]["audio"] # first decoded audio sample
transcription = voxpopuli["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### (Optionally) BABEL
*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark that is not freely accessible, so you will need to sign in on LDC to get access to it. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as")
```
**The above command is expected to fail with a nice error message,
explaining how to download BABEL**
The following should work:
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as", data_dir="/path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip")
# see structure
print(babel)
# load audio sample on the fly
audio_input = babel["train"][0]["audio"] # first decoded audio sample
transcription = babel["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
### 2. Speech Translation (ST)
We include the CoVoST-2 dataset for automatic speech translation.
#### CoVoST-2
The *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the "X->En" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multilingual fine-tuning for simplicity. Results are split into high/med/low-resource language pairs as explained in the [paper (TODO(PVP))].
```py
from datasets import load_dataset
covost_2 = load_dataset("google/xtreme_s", "covost2.id.en") # for Indonesian to English
# to download all data for multi-lingual fine-tuning uncomment following line
# covost_2 = load_dataset("google/xtreme_s", "covost2.all")
# see structure
print(covost_2)
# load audio sample on the fly
audio_input = covost_2["train"][0]["audio"] # first decoded audio sample
transcription = covost_2["train"][0]["transcription"] # first transcription
translation = covost_2["train"][0]["translation"] # first translation
# use audio_input and translation to fine-tune your model for AST
```
### 3. Speech Classification
We include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.
#### Language Identification - FLEURS-LangID
LangID can often reduce to domain classification, but in the case of FLEURS-LangID, recordings are made in a similar setting across languages and the utterances correspond to n-way parallel sentences in exactly the same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language, and we create a single train/valid/test set for LangID by merging all languages.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/xtreme_s", "fleurs.all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first language class id
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
#### Intent classification - Minds-14
Minds-14 is an intent classification dataset built from e-banking speech data in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning setup to increase the size of the train and test sets and to reduce the variance associated with the small per-language dataset size.
```py
from datasets import load_dataset
minds_14 = load_dataset("google/xtreme_s", "minds14.fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("google/xtreme_s", "minds14.all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"] # first intent class id
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and intent_class to fine-tune your model for audio classification
```
### 4. (Optionally) Speech Retrieval
We optionally include one speech retrieval dataset: FLEURS-Retrieval as explained in the [FLEURS paper](https://arxiv.org/abs/2205.12446).
#### FLEURS-Retrieval
FLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
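As a rough illustration (not part of XTREME-S itself), a hinge-style ranking loss over fixed-size utterance embeddings could look like the following sketch; the `cosine` helper, the margin value, and the toy vectors are assumptions made for the example:

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def ranking_loss(anchor, positive, negatives, margin=0.2):
    # hinge-style ranking loss: the positive pair should score higher
    # than every negative pair by at least `margin`
    pos_sim = cosine(anchor, positive)
    return sum(
        max(0.0, margin - pos_sim + cosine(anchor, neg))
        for neg in negatives
    )
```

In practice the vectors would be pooled encoder outputs for a query utterance, its English key, and sampled negative keys; any differentiable framework (e.g. PyTorch) would replace the pure-Python math above.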
## Dataset Structure
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-structure)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-structure)
Note that for MLS, XTREME-S uses `path` instead of `file` and `transcription` instead of `text`.
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-structure)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-structure)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-structure)
Note that for Covost2, XTREME-S uses `path` instead of `file` and `transcription` instead of `sentence`.
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-structure)
Please click on the link of the dataset cards to get more information about its dataset structure.
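To make the column-name notes above concrete, here is a hypothetical helper (not part of the XTREME-S loading script) that maps the upstream column names of a single example dict to the XTREME-S convention:

```python
# XTREME-S exposes `path`/`transcription` where the upstream cards
# use `file`/`text` (MLS) or `file`/`sentence` (CoVoST-2)
RENAMES = {
    "mls": {"file": "path", "text": "transcription"},
    "covost2": {"file": "path", "sentence": "transcription"},
}

def to_xtreme_s_columns(example, source):
    # rename the keys of one example dict to the XTREME-S convention;
    # sources without a rename table pass through unchanged
    mapping = RENAMES.get(source, {})
    return {mapping.get(key, key): value for key, value in example.items()}
```

For example, `to_xtreme_s_columns({"file": "a.wav", "text": "hi"}, "mls")` yields `{"path": "a.wav", "transcription": "hi"}`.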
## Dataset Creation
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-creation)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-creation)
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-creation)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-creation)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-creation)
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-creation)
Please visit the corresponding dataset cards to get more information about the source data.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition and speech translation, meaning better dubbing and better access to internet content (such as podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.
### Other Known Limitations
The benchmark has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
#### XTREME-S
```
@article{conneau2022xtreme,
title={XTREME-S: Evaluating Cross-lingual Speech Representations},
author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others},
journal={arXiv preprint arXiv:2203.10752},
year={2022}
}
```
#### MLS
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
#### VoxPopuli
```
@article{wang2021voxpopuli,
title={Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation},
author={Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel},
journal={arXiv preprint arXiv:2101.00390},
year={2021}
}
```
#### CoVoST 2
```
@article{DBLP:journals/corr/abs-2007-10310,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino},
title = {CoVoST 2: {A} Massively Multilingual Speech-to-Text Translation Corpus},
journal = {CoRR},
volume = {abs/2007.10310},
year = {2020},
url = {https://arxiv.org/abs/2007.10310},
eprinttype = {arXiv},
eprint = {2007.10310},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-10310.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Minds14
```
@article{gerz2021multilingual,
title={Multilingual and cross-lingual intent detection from spoken data},
author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Micha{\l} and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2104.08524},
year={2021}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@anton-l](https://github.com/anton-l), [@aconneau](https://github.com/aconneau) for adding this dataset
|
anjandash | null | null | null | false | 2 | false | anjandash/java-8m-methods-v1 | 2022-07-01T20:32:32.000Z | null | false | 4d770e93b949baa821a5a6603039849e590cb260 | [] | [
"language:java",
"license:mit",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/anjandash/java-8m-methods-v1/resolve/main/README.md | ---
language:
- java
license:
- mit
multilinguality:
- monolingual
pretty_name:
- java-8m-methods-v1
--- |
null | null | @inproceedings{otegi-etal-2020-conversational,
title = "{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for {B}asque}",
author = "Otegi, Arantxa and
Agirre, Aitor and
Campos, Jon Ander and
Soroa, Aitor and
Agirre, Eneko",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
year = "2020",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.55",
pages = "436--442",
ISBN = "979-10-95546-34-4",
} | ElkarHizketak is a low resource conversational Question Answering
(QA) dataset in Basque created by Basque speaker volunteers. The
dataset contains close to 400 dialogues and more than 1600 question
and answers, and its small size presents a realistic low-resource
scenario for conversational QA systems. The dataset is built on top of
Wikipedia sections about popular people and organizations. The
dialogues involve two crowd workers: (1) a student asks questions after
reading a small introduction about the person, but without seeing the
section text; and (2) a teacher answers the questions selecting a span
of text of the section. | false | 16 | false | elkarhizketak | 2022-11-03T15:51:00.000Z | null | false | 203e799e8154b06c56de04dac7c29ae9f01dbf0f | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:eu",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:extractive-qa",
"tags:dialogue-qa"
] | https://huggingface.co/datasets/elkarhizketak/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- eu
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
pretty_name: ElkarHizketak
tags:
- dialogue-qa
dataset_info:
features:
- name: dialogue_id
dtype: string
- name: wikipedia_page_title
dtype: string
- name: background
dtype: string
- name: section_title
dtype: string
- name: context
dtype: string
- name: turn_ids
sequence: string
- name: questions
sequence: string
- name: yesnos
sequence:
class_label:
names:
0: y
1: n
2: x
- name: answers
sequence:
- name: texts
sequence: string
- name: answer_starts
sequence: int32
- name: input_texts
sequence: string
- name: orig_answers
struct:
- name: texts
sequence: string
- name: answer_starts
sequence: int32
config_name: plain_text
splits:
- name: test
num_bytes: 127640
num_examples: 38
- name: train
num_bytes: 1024378
num_examples: 301
- name: validation
num_bytes: 125667
num_examples: 38
download_size: 1927474
dataset_size: 1277685
---
# Dataset Card for ElkarHizketak
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ElkarHizketak homepage](http://ixa.si.ehu.es/node/12934)
- **Paper:** [Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque](https://aclanthology.org/2020.lrec-1.55/)
- **Point of Contact:** [Arantxa Otegi](mailto:arantza.otegi@ehu.eus)
### Dataset Summary
ElkarHizketak is a low resource conversational Question Answering (QA) dataset in Basque created by Basque-speaking volunteers. The dataset contains close to 400 dialogues and more than 1600 questions and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text of the section.
### Supported Tasks and Leaderboards
- `extractive-qa`: The dataset can be used to train a model for Conversational Question Answering.
### Languages
The text in the dataset is in Basque.
## Dataset Structure
### Data Instances
An example from the train split:
```
{'dialogue_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d',
'wikipedia_page_title': 'Howard Becker',
'background': 'Howard Saul Becker (Chicago,Illinois, 1928ko apirilaren 18an) Estatu Batuetako soziologoa bat da. Bere ekarpen handienak desbiderakuntzaren soziologian, artearen soziologian eta musikaren soziologian egin ditu. "Outsiders" (1963) bere lanik garrantzitsuetako da eta bertan garatu zuen bere etiketatze-teoria. Nahiz eta elkarrekintza sinbolikoaren edo gizarte-konstruktibismoaren korronteen barruan sartu izan, berak ez du bere burua inongo paradigman kokatzen. Chicagoko Unibertsitatean graduatua, Becker Chicagoko Soziologia Eskolako bigarren belaunaldiaren barruan kokatu ohi da, Erving Goffman eta Anselm Strauss-ekin batera.',
'section_title': 'Hastapenak eta hezkuntza.',
'context': 'Howard Saul Becker Chicagon jaio zen 1928ko apirilaren 18an. Oso gazte zelarik piano jotzen asi zen eta 15 urte zituenean dagoeneko tabernetan aritzen zen pianoa jotzen. Beranduago Northwestern Unibertsitateko banda batean jo zuen. Beckerren arabera, erdi-profesional gisa aritu ahal izan zen Bigarren Mundu Gerra tokatu eta musikari gehienak soldadugai zeudelako. Musikari bezala egin zuen lan horretan egin zuen lehen aldiz drogaren kulturaren ezagutza, aurrerago ikerketa-gai hartuko zuena. 1946an bere graduazpiko soziologia titulua lortu zuen Chicagoko Unibertsitatean. Ikasten ari zen bitartean, pianoa jotzen jarraitu zuen modu erdi-profesionalean. Hala ere, soziologiako masterra eta doktoretza eskuratu zituen Chicagoko Unibertsitatean. Unibertsitate horretan Chicagoko Soziologia Eskolaren jatorrizko tradizioaren barruan hezia izan zen. Chicagoko Soziologia Eskolak garrantzi berezia ematen zion datu kualitatiboen analisiari eta Chicagoko hiria hartzen zuen ikerketa eremu bezala. Beckerren hasierako lan askok eskola honen tradizioaren eragina dute, bereziko Everett C. Hughes-en eragina, bere tutore eta gidari izan zena. Askotan elkarrekintzaile sinboliko bezala izendatua izan da, nahiz eta Beckerek berak ez duen gogoko izendapen hori. Haren arabera, bere leinu akademikoa Georg Simmel, Robert E. Park eta Everett Hughes dira. Doktoretza lortu ostean, 23 urterekin, Beckerrek marihuanaren erabilpena ikertu zuen "Institut for Juvenil Reseac"h-en. Ondoren Illinoisko Unibertsitatean eta Standfor Unibertsitateko ikerketa institutu batean aritu zen bere irakasle karrera hasi aurretik. CANNOTANSWER',
'turn_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d_q#0',
'question': 'Zer da desbiderakuntzaren soziologia?',
'yesno': 2,
'answers': {'text': ['CANNOTANSWER'],
'answer_start': [1601],
'input_text': ['CANNOTANSWER']},
'orig_answer': {'text': 'CANNOTANSWER', 'answer_start': 1601}}
```
### Data Fields
The different fields are:
- `dialogue_id`: string,
- `wikipedia_page_title`: title of the wikipedia page as a string,
- `background`: string,
- `section_title`: title of the section as a string,
- `context`: context of the question as a string,
- `turn_id`: string,
- `question`: question as a string,
- `yesno`: Class label that represents if the question is a yes/no question. Possible values are "y" (0), "n" (1), "x" (2),
- `answers`: a dictionary with three fields:
  - `text`: list of answer texts as strings,
  - `answer_start`: list of answer start positions in the context as int32,
  - `input_text`: list of strings,
- `orig_answer`: a dictionary with two fields:
  - `text`: original answer text as a string,
  - `answer_start`: original answer start position as an int32.
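For illustration, the integer `yesno` class label can be decoded back to its string name. The hard-coded name list below mirrors the label order declared in the dataset features and is an assumption made for the example; with a loaded dataset you would normally read the names from the `yesno` feature itself rather than hard-coding them:

```python
# label names in the order declared by the dataset features: y / n / x
YESNO_NAMES = ["y", "n", "x"]

def decode_yesno(label_id):
    # map an integer class id back to its string name
    return YESNO_NAMES[label_id]
```

So the instance shown above, with `'yesno': 2`, decodes to `"x"` (not a yes/no question).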
### Data Splits
The data is split into a training, development and test set. The split sizes are as follow:
- train: 1,306 questions / 301 dialogues
- development: 161 questions / 38 dialogues
- test: 167 questions / 38 dialogues
## Dataset Creation
### Curation Rationale
This is the first non-English conversational QA dataset and the first conversational dataset for Basque. Its small size presents a realistic low-resource scenario for conversational QA systems.
### Source Data
#### Initial Data Collection and Normalization
First we selected sections of Wikipedia articles about people, as less specialized knowledge is required to converse about people than about other categories. In order to retrieve articles we selected the following categories in Basque Wikipedia: Biografia (Biography is the equivalent category in English Wikipedia), Biografiak (People) and Gizabanako biziak (Living people). We applied this category filter and downloaded the articles using a querying tool provided by the Wikimedia foundation. Once we retrieved the articles, we selected sections from them that contained between 175 and 300 words. These filters and thresholds were set after pilot studies where we checked the adequacy of the people covered by the selected articles and the length of the passages, in order to have enough, but not too much, information to hold a conversation.
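The 175-300 word section filter described above can be sketched as follows; the function name and the whitespace tokenization are assumptions made for the example:

```python
def keep_section(section_text, lo=175, hi=300):
    # keep Wikipedia sections whose whitespace-token count falls
    # within the [lo, hi] word range used for ElkarHizketak
    n_words = len(section_text.split())
    return lo <= n_words <= hi
```

A section with 200 words passes the filter, while one with 100 or 400 words is discarded.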
Then, dialogues were collected during online sessions that we arranged with Basque-speaking volunteers. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text of the section.
#### Who are the source language producers?
The language producers are Basque-speaking volunteers who hold conversations using a text-based chat interface developed for this purpose.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Arantxa Otegi, Jon Ander Campos, Aitor Soroa and Eneko Agirre from the [HiTZ Basque Center for Language Technologies](https://www.hitz.eus/) and [Ixa NLP Group](https://www.ixa.eus/) at the University of the Basque Country (UPV/EHU).
### Licensing Information
Copyright (C) by Ixa Taldea, University of the Basque Country UPV/EHU.
This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0).
To view a copy of this license, visit [https://creativecommons.org/licenses/by-sa/4.0/legalcode](https://creativecommons.org/licenses/by-sa/4.0/legalcode).
### Citation Information
If you are using this dataset in your work, please cite this publication:
```bibtex
@inproceedings{otegi-etal-2020-conversational,
title = "{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque}",
author = "Otegi, Arantxa and
Agirre, Aitor and
Campos, Jon Ander and
Soroa, Aitor and
Agirre, Eneko",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.55",
pages = "436--442"
}
```
### Contributions
Thanks to [@antxa](https://github.com/antxa) for adding this dataset. |
ruanchaves | null | @article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
} | Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.
HashSet Distant Sampled is a sample of 20,000 camel cased hashtags from the HashSet Distant dataset. | false | 3 | false | ruanchaves/hashset_distant_sampled | 2022-10-20T19:13:24.000Z | null | false | fb8b329c87153970e0d65e79f8b50220cc2b5ed9 | [] | [
"arxiv:2201.06741",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language:hi",
"language:en",
"license:unknown",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/hashset_distant_sampled/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- hi
- en
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: HashSet Distant Sampled
tags:
- word-segmentation
---
# Dataset Card for HashSet Distant Sampled
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.
HashSet Distant Sampled is a sample of 20,000 camel cased hashtags from the HashSet Distant dataset.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
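The whitespace-only invariant between `hashtag` and `segmentation` described above can be verified with a short check (a sketch; the field names follow the data instance shown earlier on this card):

```python
def is_valid_pair(hashtag: str, segmentation: str) -> bool:
    """A segmentation is valid if removing its spaces recovers the hashtag exactly."""
    return segmentation.replace(" ", "") == hashtag

example = {"hashtag": "Youth4Nation", "segmentation": "Youth 4 Nation"}
assert is_valid_pair(example["hashtag"], example["segmentation"])
```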
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
} | Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation. | false | 2 | false | ruanchaves/hashset_distant | 2022-10-20T19:13:21.000Z | null | false | 0df29003f66c0cb4e17e908cb42e3843d4bd6b11 | [] | [
"arxiv:2201.06741",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language:hi",
"language:en",
"license:unknown",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/hashset_distant/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- hi
- en
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: HashSet Distant
tags:
- word-segmentation
---
# Dataset Card for HashSet Distant
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel cased hashtags containing hashtag and their segmentation.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
} | Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Manual: contains 1.9k manually annotated hashtags. Each row consists of the hashtag, the segmented
hashtag, named entity annotations, and a list storing whether the hashtag contains a mix of Hindi and English
tokens and/or contains non-English tokens. | false | 1 | false | ruanchaves/hashset_manual | 2022-10-20T19:13:18.000Z | null | false | d5aeed029db258e17d93b7e2bf0d1a84ff4f56e5 | [] | [
"arxiv:2201.06741",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:hi",
"language:en",
"license:unknown",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"task_ids:named-entity-recognition",
"tags:word-segmentation... | https://huggingface.co/datasets/ruanchaves/hashset_manual/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- hi
- en
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
pretty_name: HashSet Manual
tags:
- word-segmentation
---
# Dataset Card for HashSet Manual
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
Hashset is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare State of The Art Hashtag Segmentation models on Hashset and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Manual: contains 1.9k manually annotated hashtags. Each row consists of the hashtag, the segmented hashtag, named entity annotations, and flags indicating whether the hashtag contains a mix of Hindi and English tokens and/or non-English tokens.
### Languages
Mostly Hindi and English.
## Dataset Structure
### Data Instances
```
{
"index": 10,
"hashtag": "goodnewsmegan",
"segmentation": "good news megan",
"spans": {
"start": [
8
],
"end": [
13
],
"text": [
"megan"
]
},
"source": "roman",
"gold_position": null,
"mix": false,
"other": false,
"ner": true,
"annotator_id": 1,
"annotation_id": 2088,
"created_at": "2021-12-30 17:10:33.800607",
"updated_at": "2021-12-30 17:10:59.714840",
"lead_time": 3896.182,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
"candidate": [
"goodnewsmegan",
"goodnewsmeg an",
"goodnews megan",
"goodnewsmega n",
"go odnewsmegan",
"good news megan",
"good newsmegan",
"g oodnewsmegan",
"goodnewsme gan",
"goodnewsm egan"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `spans`: named entity spans.
- `source`: data source.
- `gold_position`: position of the gold segmentation on the `segmentation` field inside the `rank`.
- `mix`: The hashtag has a mix of English and Hindi tokens.
- `other`: The hashtag has non-English tokens.
- `ner`: The hashtag has named entities.
- `annotator_id`: annotator ID.
- `annotation_id`: annotation ID.
- `created_at`: Creation date timestamp.
- `updated_at`: Update date timestamp.
- `lead_time`: lead time field annotated by Kodali et al.
- `rank`: rank of each candidate selected by a baseline word segmenter (WordBreaker).
- `candidate`: candidates selected by a baseline word segmenter (WordBreaker), stored inside the `rank` field.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
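Judging from the data instance above (where `spans.text` is `["megan"]` and the hashtag is `goodnewsmegan`), the `spans` offsets appear to index into the unsegmented `hashtag`, with `start`/`end` behaving as Python-style slice bounds. A hypothetical sketch under that assumption:

```python
def extract_spans(hashtag: str, spans: dict) -> list:
    """Recover each annotated entity by slicing the hashtag with its span offsets
    (assumed here to be 0-based, end-exclusive indices into the hashtag)."""
    return [hashtag[s:e] for s, e in zip(spans["start"], spans["end"])]

row = {
    "hashtag": "goodnewsmegan",
    "spans": {"start": [8], "end": [13], "text": ["megan"]},
}
assert extract_spans(row["hashtag"], row["spans"]) == row["spans"]["text"]
```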
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @inproceedings{maddela-etal-2019-multi,
title = "Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
author = "Maddela, Mounica and
Xu, Wei and
Preo{\c{t}}iuc-Pietro, Daniel",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1242",
doi = "10.18653/v1/P19-1242",
pages = "2538--2549",
abstract = "Hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability, aiding search, or providing additional semantics. However, the semantic content of hashtags is not straightforward to infer as these represent ad-hoc conventions which frequently include multiple words joined together and can include abbreviations and unorthodox spellings. We build a dataset of 12,594 hashtags split into individual segments and propose a set of approaches for hashtag segmentation by framing it as a pairwise ranking problem between candidate segmentations. Our novel neural approaches demonstrate 24.6{\%} error reduction in hashtag segmentation accuracy compared to the current state-of-the-art method. Finally, we demonstrate that a deeper understanding of hashtag semantics obtained through segmentation is useful for downstream applications such as sentiment analysis, for which we achieved a 2.6{\%} increase in average recall on the SemEval 2017 sentiment analysis dataset.",
} | The description below was taken from the paper "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"
by Maddela et al.
"STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their
associated tweets from the same Stanford dataset.
STAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation
errors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the art
models is only around 10%. Most of the errors were related to named entities. For example, #lionhead,
which refers to the “Lionhead” video game company, was labeled as “lion head”.
We therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations." | false | 2 | false | ruanchaves/stan_large | 2022-10-20T19:13:15.000Z | null | false | 926842c8fbeadabe99a88d30d4b7ce06a42fb64c | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:en",
"license:agpl-3.0",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/stan_large/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- agpl-3.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: STAN Large
tags:
- word-segmentation
---
# Dataset Card for STAN Large
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [mounicam/hashtag_master](https://github.com/mounicam/hashtag_master)
- **Paper:** [Multi-task Pairwise Neural Ranking for Hashtag Segmentation](https://aclanthology.org/P19-1242/)
### Dataset Summary
The description below was taken from the paper "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"
by Maddela et al.
"STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their
associated tweets from the same Stanford dataset.
STAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation
errors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the art
models is only around 10%. Most of the errors were related to named entities. For example, #lionhead,
which refers to the “Lionhead” video game company, was labeled as “lion head”.
We therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations."
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "PokemonPlatinum",
"segmentation": "Pokemon Platinum",
"alternatives": {
"segmentation": [
"Pokemon platinum"
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `alternatives`: other segmentations that are also accepted as a gold segmentation.
Although `segmentation` has exactly the same characters as `hashtag` except for the spaces, the segmentations inside `alternatives` may have characters corrected to uppercase.
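Since the `alternatives` field lists additional accepted gold segmentations, scoring a model against this dataset can check predictions against the union of gold and alternatives. A minimal sketch (the scoring convention is an assumption, not the paper's official protocol):

```python
def is_correct(prediction: str, gold: str, alternatives: list) -> bool:
    """A prediction counts as correct if it matches the gold segmentation
    or any accepted alternative (alternatives may differ in casing)."""
    return prediction in [gold] + alternatives

row = {
    "hashtag": "PokemonPlatinum",
    "segmentation": "Pokemon Platinum",
    "alternatives": {"segmentation": ["Pokemon platinum"]},
}
assert is_correct("Pokemon platinum", row["segmentation"], row["alternatives"]["segmentation"])
assert not is_correct("Pok emon Platinum", row["segmentation"], row["alternatives"]["segmentation"])
```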
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{maddela-etal-2019-multi,
title = "Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
author = "Maddela, Mounica and
Xu, Wei and
Preo{\c{t}}iuc-Pietro, Daniel",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1242",
doi = "10.18653/v1/P19-1242",
pages = "2538--2549",
abstract = "Hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability, aiding search, or providing additional semantics. However, the semantic content of hashtags is not straightforward to infer as these represent ad-hoc conventions which frequently include multiple words joined together and can include abbreviations and unorthodox spellings. We build a dataset of 12,594 hashtags split into individual segments and propose a set of approaches for hashtag segmentation by framing it as a pairwise ranking problem between candidate segmentations. Our novel neural approaches demonstrate 24.6{\%} error reduction in hashtag segmentation accuracy compared to the current state-of-the-art method. Finally, we demonstrate that a deeper understanding of hashtag semantics obtained through segmentation is useful for downstream applications such as sentiment analysis, for which we achieved a 2.6{\%} increase in average recall on the SemEval 2017 sentiment analysis dataset.",
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
} | Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.. | false | 3 | false | ruanchaves/stan_small | 2022-10-20T19:13:12.000Z | null | false | af6d38e28c5033a1f89b50b9e26950fe73550e29 | [] | [
"arxiv:1501.03210",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/stan_small/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
- conditional-text-generation
task_ids: []
pretty_name: STAN Small
tags:
- word-segmentation
---
# Dataset Card for STAN Small
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [mounicam/hashtag_master](https://github.com/mounicam/hashtag_master)
- **Paper:** [Multi-task Pairwise Neural Ranking for Hashtag Segmentation](https://aclanthology.org/P19-1242/)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 300,
"hashtag": "microsoftfail",
"segmentation": "microsoft fail",
"alternatives": {
"segmentation": [
"Microsoft fail"
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `alternatives`: other segmentations that are also accepted as a gold segmentation.
Although `segmentation` has exactly the same characters as `hashtag` except for the spaces, the segmentations inside `alternatives` may have characters corrected to uppercase.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
} | Dev-BOUN: a development set that includes 500 manually segmented hashtags. These are selected from tweets about movies,
tv shows, popular people, sports teams etc. Test-BOUN: a test set that includes 500 manually segmented hashtags.
These are selected from tweets about movies, tv shows, popular people, sports teams etc. | false | 3 | false | ruanchaves/boun | 2022-10-20T19:13:09.000Z | null | false | 27f9f67d4662570c17e251438164c3508643c32d | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/boun/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: BOUN
tags:
- word-segmentation
---
# Dataset Card for BOUN
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
Dev-BOUN is a development set that includes 500 manually segmented hashtags. These are selected from tweets about movies,
tv shows, popular people, sports teams etc.
Test-BOUN is a test set that includes 500 manually segmented hashtags. These are selected from tweets about movies, tv shows, popular people, sports teams etc.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "tryingtosleep",
"segmentation": "trying to sleep"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
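The rule above about whitespace around runs of special characters can be sketched as a normalization step (a hypothetical sketch; the exact set of "special characters" beyond the examples `_`, `:`, `~` is an assumption):

```python
import re

def space_special_runs(text: str) -> str:
    """Insert whitespace between alphanumeric characters and runs of
    special characters, per the convention described above."""
    # space before a run of specials that follows an alphanumeric character
    text = re.sub(r"(?<=[A-Za-z0-9])([_:~]+)", r" \1", text)
    # space after a run of specials that precedes an alphanumeric character
    text = re.sub(r"([_:~]+)(?=[A-Za-z0-9])", r"\1 ", text)
    return text

assert space_special_runs("good_morning") == "good _ morning"
```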
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
} | 1000 hashtags manually segmented by Çelebi et al. for development purposes,
randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140. | false | 3 | false | ruanchaves/dev_stanford | 2022-10-20T19:13:37.000Z | null | false | 292e00146ecc1be6feefdb52362eace417791f4f | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/dev_stanford/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Dev-Stanford
tags:
- word-segmentation
---
# Dataset Card for Dev-Stanford
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
1000 hashtags manually segmented by Çelebi et al. for development purposes,
randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "marathonmonday",
"segmentation": "marathon monday"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
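With only these two fields, exact-match accuracy on a development split like this one can be computed directly (a sketch; dataset-loading code is omitted and the trivial baseline below is for illustration only):

```python
def accuracy(rows: list, predict) -> float:
    """Fraction of rows whose predicted segmentation exactly matches gold."""
    correct = sum(predict(r["hashtag"]) == r["segmentation"] for r in rows)
    return correct / len(rows)

rows = [
    {"hashtag": "marathonmonday", "segmentation": "marathon monday"},
    {"hashtag": "tryingtosleep", "segmentation": "trying to sleep"},
]
identity = lambda h: h  # a trivial baseline that never inserts spaces
assert accuracy(rows, identity) == 0.0
```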
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
} | Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.. | false | 3 | false | ruanchaves/test_stanford | 2022-10-20T19:13:07.000Z | null | false | 48f64996c295b22e76cec4454362babfad31f581 | [] | [
"arxiv:1501.03210",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/test_stanford/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Test-Stanford
tags:
- word-segmentation
---
# Dataset Card for Test-Stanford
## Dataset Description
- **Paper:** [Towards Deep Semantic Analysis Of Hashtags](https://arxiv.org/abs/1501.03210)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 1467856821,
"hashtag": "therapyfail",
"segmentation": "therapy fail",
"gold_position": 8,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20
],
"candidate": [
"therap y fail",
"the rap y fail",
"t her apy fail",
"the rap yfail",
"t he rap y fail",
"thera py fail",
"ther apy fail",
"th era py fail",
"therapy fail",
"therapy fai l",
"the r apy fail",
"the rapyfa il",
"the rapy fail",
"t herapy fail",
"the rapyfail",
"therapy f ai l",
"therapy fa il",
"the rapyf a il",
"therapy f ail",
"the ra py fail"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `gold_position`: position of the gold segmentation on the `segmentation` field inside the `rank`.
- `rank`: the rank of each candidate selected by a baseline word segmenter (the Segmentations Seeder Module).
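In the sample instance above, `gold_position` appears to index the `candidate` list zero-based: candidate 8 is the gold segmentation. A minimal sketch checking that reading against the data shown (candidate list truncated for brevity):

```python
instance = {
    "segmentation": "therapy fail",
    "gold_position": 8,
    "rank": {
        "candidate": [
            "therap y fail", "the rap y fail", "t her apy fail",
            "the rap yfail", "t he rap y fail", "thera py fail",
            "ther apy fail", "th era py fail", "therapy fail",
        ],  # truncated: the full instance lists 20 candidates
    },
}

# The candidate at index `gold_position` matches the gold segmentation.
gold = instance["rank"]["candidate"][instance["gold_position"]]
assert gold == instance["segmentation"]
```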
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Corrections such as spell checking, expanding abbreviations, or restoring uppercase characters go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
batterydata | null | null | null | false | 2 | false | batterydata/paper-abstracts | 2022-09-05T15:54:02.000Z | null | false | 2d33f11d465c83eb043544177daceb8f4d508343 | [] | [
"language:en",
"license:apache-2.0",
"task_categories:text-classification"
] | https://huggingface.co/datasets/batterydata/paper-abstracts/resolve/main/README.md | ---
language:
- en
license:
- apache-2.0
task_categories:
- text-classification
pretty_name: 'Battery Abstracts Dataset'
---
# Battery Abstracts Dataset
This dataset includes 29,472 battery papers and 17,191 non-battery papers, a total of 46,663 papers. These papers are manually labelled according to the journals to which they belong. 14 battery journals and 1,044 non-battery journals were selected to form this database.
- training_data.csv: Battery papers: 20,629, Non-battery papers: 12,034. Total: 32,663.
- val_data.csv: Battery papers: 5,895, Non-battery papers: 3,438. Total: 9,333.
- test_data.csv: Battery papers: 2,948, Non-battery papers: 1,719. Total: 4,667.
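The split sizes quoted above can be sanity-checked with a few lines of arithmetic:

```python
# Split sizes as (battery, non-battery) pairs, taken from the list above.
splits = {"train": (20629, 12034), "val": (5895, 3438), "test": (2948, 1719)}

battery = sum(b for b, _ in splits.values())
non_battery = sum(n for _, n in splits.values())

assert battery == 29472                 # total battery papers
assert non_battery == 17191             # total non-battery papers
assert battery + non_battery == 46663   # total papers
```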
# Usage
```
from datasets import load_dataset
dataset = load_dataset("batterydata/paper-abstracts")
```
# Citation
```
@article{huang2022batterybert,
title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
author={Huang, Shu and Cole, Jacqueline M},
journal={J. Chem. Inf. Model.},
year={2022},
doi={10.1021/acs.jcim.2c00035},
url={DOI:10.1021/acs.jcim.2c00035},
pages={DOI: 10.1021/acs.jcim.2c00035},
publisher={ACS Publications}
}
``` |
Davis | null | null | null | false | 7 | false | Davis/Swahili-tweet-sentiment | 2022-03-05T17:58:17.000Z | null | false | 586ba42e6c8a76b305b4e27fc20ce99226a2c1d4 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Davis/Swahili-tweet-sentiment/resolve/main/README.md | ---
license: mit
---
A new Swahili tweet dataset for sentiment analysis.
## Issues ⚠️
In case you have any difficulties or issues while trying to run the script,
you can raise them in the Issues section.
## Pull Requests 🔧
If you have something to add or a new idea to implement, you are welcome to create a pull request with your improvement.
## Give it a Like 👍
If you find this dataset useful, give it a like so that more people can get to know it.
## Credits
All credits to [Davis David](https://twitter.com/Davis_McDavid), [Zephania Reuben](https://twitter.com/nsomazr) & [Eliya Masesa](https://twitter.com/eliya_masesa) |
ruanchaves | null | @article{glushkova2019char,
title={Char-RNN and Active Learning for Hashtag Segmentation},
author={Glushkova, Taisiya and Artemova, Ekaterina},
journal={arXiv preprint arXiv:1911.03270},
year={2019}
} | 2000 real hashtags collected from several pages about civil services on vk.com (a Russian social network)
and then segmented manually. | false | 3 | false | ruanchaves/nru_hse | 2022-10-20T19:12:59.000Z | null | false | 4fb954beab9774a12cac3a13ee08616d5e10df6d | [] | [
"arxiv:1911.03270",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:ru",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/nru_hse/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- ru
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: NRU-HSE
tags:
- word-segmentation
---
# Dataset Card for NRU-HSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [glushkovato/hashtag_segmentation](https://github.com/glushkovato/hashtag_segmentation/)
- **Paper:** [Char-RNN and Active Learning for Hashtag Segmentation](https://arxiv.org/abs/1911.03270)
### Dataset Summary
Real hashtags collected from several pages about civil services on vk.com (a Russian social network) and then segmented manually.
### Languages
Russian
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "ЁлкаВЗазеркалье",
"segmentation": "Ёлка В Зазеркалье"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Corrections such as spell checking, expanding abbreviations, or restoring uppercase characters go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{glushkova2019char,
title={Char-RNN and Active Learning for Hashtag Segmentation},
author={Glushkova, Taisiya and Artemova, Ekaterina},
journal={arXiv preprint arXiv:1911.03270},
year={2019}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @article{hill2014empirical,
title={An empirical study of identifier splitting techniques},
author={Hill, Emily and Binkley, David and Lawrie, Dawn and Pollock, Lori and Vijay-Shanker, K},
journal={Empirical Software Engineering},
volume={19},
number={6},
pages={1754--1780},
year={2014},
publisher={Springer}
} | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words on a identifier. | false | 1 | false | ruanchaves/loyola | 2022-10-20T19:13:04.000Z | null | false | e51544fd07e72dfa6bf830b56e417adba8dc50ba | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:code",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/loyola/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: The Loyola University of Delaware Identifier Splitting Oracle
tags:
- word-segmentation
---
# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Loyola University of Delaware Identifier Splitting Oracle](http://www.cs.loyola.edu/~binkley/ludiso/)
- **Paper:** [An empirical study of identifier splitting techniques](https://dl.acm.org/doi/10.1007/s10664-013-9261-0)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
- C
- C++
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "::CreateProcess",
"segmentation": ":: Create Process",
"language": "cpp",
"source": "mozilla-source-1.1"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
- `language`: the programming language of the source.
- `source`: the source of the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Corrections such as spell checking, expanding abbreviations, or restoring uppercase characters go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
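The whitespace rules above, including the space inserted after a run of special characters such as `::`, can be verified on the sample instance from this card:

```python
instance = {
    "identifier": "::CreateProcess",
    "segmentation": ":: Create Process",
}

# Removing the inserted spaces recovers the original identifier,
# including the space between "::" and the first word.
assert instance["segmentation"].replace(" ", "") == instance["identifier"]
```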
### Citation Information
```
@article{hill2014empirical,
title={An empirical study of identifier splitting techniques},
author={Hill, Emily and Binkley, David and Lawrie, Dawn and Pollock, Lori and Vijay-Shanker, K},
journal={Empirical Software Engineering},
volume={19},
number={6},
pages={1754--1780},
year={2014},
publisher={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
AhmedSSoliman | null | null | null | false | 2 | false | AhmedSSoliman/QRCD | 2022-03-06T18:58:06.000Z | null | false | f47b2a116e3e6ad75fc4dbf17a4c8527d0fb0126 | [] | [] | https://huggingface.co/datasets/AhmedSSoliman/QRCD/resolve/main/README.md | This dataset is presented for the task of Answering Questions on the Holy Qur'an.
https://sites.google.com/view/quran-qa-2022
QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are coupled with their extracted answers to constitute 1,337 question-passage-answer triplets. It is split into training (65%), development (10%), and test (25%) sets.
QRCD is a JSON Lines (JSONL) file; each line is a JSON object that comprises a question-passage pair, along with its answers extracted from the accompanying passage. |
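Since each line of a JSONL file is an independent JSON object, the file can be read with the Python standard library alone. A minimal sketch; the field names below are illustrative placeholders, not the dataset's actual schema:

```python
import json

# Two stand-in JSONL records; real QRCD lines hold question-passage
# pairs with their extracted answers.
sample_jsonl = "\n".join([
    '{"question": "q1", "passage": "p1", "answers": ["a1"]}',
    '{"question": "q2", "passage": "p2", "answers": ["a2", "a3"]}',
])

# Parse one JSON object per non-empty line.
records = [json.loads(line) for line in sample_jsonl.splitlines() if line.strip()]
assert len(records) == 2
assert records[1]["answers"] == ["a2", "a3"]
```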
mbartolo | null | @inproceedings{bartolo-etal-2021-improving,
title = "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation",
author = "Bartolo, Max and
Thrush, Tristan and
Jia, Robin and
Riedel, Sebastian and
Stenetorp, Pontus and
Kiela, Douwe",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.696",
doi = "10.18653/v1/2021.emnlp-main.696",
pages = "8830--8848",
abstract = "Despite recent progress, state-of-the-art question answering models remain vulnerable to a variety of adversarial attacks. While dynamic adversarial data collection, in which a human annotator tries to write examples that fool a model-in-the-loop, can improve model robustness, this process is expensive which limits the scale of the collected data. In this work, we are the first to use synthetic adversarial data generation to make question answering models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation and show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8{\%} of the time on average, compared to 17.6{\%} for a model trained without synthetic data.",
} | SynQA is a Reading Comprehension dataset created in the work "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (https://aclanthology.org/2021.emnlp-main.696/).
It consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (https://arxiv.org/abs/1606.05250) training set.
In this work, we use a synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (https://adversarialqa.github.io/) dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
For full details on how the dataset was created, kindly refer to the paper. | false | 16 | false | mbartolo/synQA | 2022-10-25T10:02:24.000Z | null | false | f60c3e93c0985c90741d15948afc694f9460b3d9 | [] | [
"arxiv:1606.05250",
"annotations_creators:generated",
"language_creators:found",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa"
] | https://huggingface.co/datasets/mbartolo/synQA/resolve/main/README.md | ---
annotations_creators:
- generated
language_creators:
- found
language:
- en
license: mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
pretty_name: synQA
---
# Dataset Card for synQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [synQA homepage](https://github.com/maxbartolo/improving-qa-model-robustness)
- **Paper:** [Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation](https://aclanthology.org/2021.emnlp-main.696/)
- **Point of Contact:** [Max Bartolo](max.bartolo@ucl.ac.uk)
### Dataset Summary
SynQA is a Reading Comprehension dataset created in the work "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (https://aclanthology.org/2021.emnlp-main.696/).
It consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (https://arxiv.org/abs/1606.05250) training set.
In this work, we use a synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (https://adversarialqa.github.io/) dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
For full details on how the dataset was created, kindly refer to the paper.
### Supported Tasks
`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists of selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). The task is available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall), which ranks models based on F1 score.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "None",
"paragraphs": [
{
"context": "Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.",
"qas": [
{
"id": "689f275aacba6c43ff112b2c7cb16129bfa934fa",
"question": "What material is the statue of Christ made of?",
"answers": [
{
"answer_start": 190,
"text": "organic copper"
}
]
},
{
"id": "73bd3f52f5934e02332787898f6e568d04bc5403",
"question": "Who is on the Main Building's gold dome?",
"answers": [
{
"answer_start": 111,
"text": "the Virgin Mary."
}
]
},
{
"id": "4d459d5b75fd8a6623446290c542f99f1538cf84",
"question": "What kind of statue is at the end of the main drive?",
"answers": [
{
"answer_start": 667,
"text": "modern stone"
}
]
},
{
"id": "987a1e469c5b360f142b0a171e15cef17cd68ea6",
"question": "What type of dome is on the Main Building at Notre Dame?",
"answers": [
{
"answer_start": 79,
"text": "gold"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: all "None" in this dataset
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text.
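Because `answer_start` is a character offset into `context`, each answer can be recovered by slicing. A minimal sketch using values from the sample instance above (context truncated):

```python
# First two sentences of the sample passage shown above.
context = (
    "Architecturally, the school has a Catholic character. Atop the Main "
    "Building's gold dome is a golden statue of the Virgin Mary."
)
answer = {"answer_start": 79, "text": "gold"}

# Slicing the context at the character offset recovers the answer text.
span = context[answer["answer_start"]:answer["answer_start"] + len(answer["text"])]
assert span == answer["text"]
```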
### Data Splits
The dataset is composed of a single split of 314,811 examples that we used in a two-stage fine-tuning process (refer to the paper for further details).
## Dataset Creation
### Curation Rationale
This dataset was created to investigate the effects of using synthetic adversarial data generation to improve robustness of state-of-the-art QA models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and a BART-Large generative model for the questions.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a support resource for improving the ability of systems to handle questions that contemporary state-of-the-art models struggle to answer correctly, questions that often require more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that the provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, selected candidate answers, generated questions, quality re-labelling process, as well as any algorithmic biases that may be exacerbated from the adversarial annotation process used to collect the SQuAD and AdversarialQA data on which the generators were trained.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela during work carried out at University College London (UCL) and Facebook AI Research (FAIR).
### Licensing Information
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```
@inproceedings{bartolo-etal-2021-improving,
title = "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation",
author = "Bartolo, Max and
Thrush, Tristan and
Jia, Robin and
Riedel, Sebastian and
Stenetorp, Pontus and
Kiela, Douwe",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.696",
doi = "10.18653/v1/2021.emnlp-main.696",
pages = "8830--8848",
abstract = "Despite recent progress, state-of-the-art question answering models remain vulnerable to a variety of adversarial attacks. While dynamic adversarial data collection, in which a human annotator tries to write examples that fool a model-in-the-loop, can improve model robustness, this process is expensive which limits the scale of the collected data. In this work, we are the first to use synthetic adversarial data generation to make question answering models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation and show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8{\%} of the time on average, compared to 17.6{\%} for a model trained without synthetic data.",
}
```
### Contributions
Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset.
|
Paulosdeanllons | null | null | null | false | 3 | false | Paulosdeanllons/sedar | 2022-03-05T22:38:44.000Z | null | false | 3a424cd1ff2d75a58e267c7f897e1f7d6ae121d4 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Paulosdeanllons/sedar/resolve/main/README.md | ---
license: afl-3.0
---
|
ruanchaves | null | @inproceedings{li2018helpful,
title={Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF.},
author={Li, Jiechu and Du, Qingfeng and Shi, Kun and He, Yu and Wang, Xin and Xu, Jincheng},
booktitle={SEKE},
pages={175--174},
year={2018}
} | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
BT11 is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words on a identifier. | false | 3 | false | ruanchaves/bt11 | 2022-10-20T19:13:02.000Z | null | false | 1877395c47bcf77735761c694234dd55d3598bc5 | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:code",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/bt11/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: BT11
tags:
- word-segmentation
---
# Dataset Card for BT11
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
BT11 is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 20170,
"identifier": "currentLineHighlight",
"segmentation": "current Line Highlight"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
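As an illustration (a naive heuristic, not the CNN-BiLSTM-CRF method from the paper), a simple camel-case splitter already recovers the gold segmentation for the sample instance above:

```python
import re

def naive_camel_split(identifier: str) -> str:
    # Insert a space at every lowercase/digit-to-uppercase boundary.
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", identifier)

assert naive_camel_split("currentLineHighlight") == "current Line Highlight"
```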
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Corrections such as spell checking, expanding abbreviations, or restoring uppercase characters go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{butler2011improving,
title={Improving the tokenisation of identifier names},
author={Butler, Simon and Wermelinger, Michel and Yu, Yijun and Sharp, Helen},
booktitle={European Conference on Object-Oriented Programming},
pages={130--154},
year={2011},
organization={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @inproceedings{inproceedings,
author = {Lawrie, Dawn and Binkley, David and Morrell, Christopher},
year = {2010},
month = {11},
pages = {3 - 12},
title = {Normalizing Source Code Vocabulary},
journal = {Proceedings - Working Conference on Reverse Engineering, WCRE},
doi = {10.1109/WCRE.2010.10}
} | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Binkley is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier. | false | 3 | false | ruanchaves/binkley | 2022-10-20T19:12:56.000Z | null | false | 5ccd62cfd185abd77dffc846d2cd3499e0c286c9 | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:code",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/binkley/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Binkley
tags:
- word-segmentation
---
# Dataset Card for Binkley
## Dataset Description
- **Paper:** [Normalizing Source Code Vocabulary](https://www.researchgate.net/publication/224198190_Normalizing_Source_Code_Vocabulary)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Binkley is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- C
- C++
- Java
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "init_g16_i",
"segmentation": "init _ g 16 _ i"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
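The whitespace convention above, plus letter-digit boundaries, can be sketched as a small regex baseline (illustrative only; it assumes the gold segmentations follow exactly these two rules):

```python
import re

def apply_whitespace_convention(identifier: str) -> str:
    # Separate alphanumeric runs from runs of special characters ...
    s = re.sub(r"(?<=[A-Za-z0-9])(?=[^A-Za-z0-9 ])", " ", identifier)
    s = re.sub(r"(?<=[^A-Za-z0-9 ])(?=[A-Za-z0-9])", " ", s)
    # ... and letters from digits.
    s = re.sub(r"(?<=[A-Za-z])(?=[0-9])", " ", s)
    s = re.sub(r"(?<=[0-9])(?=[A-Za-z])", " ", s)
    return s

print(apply_whitespace_convention("init_g16_i"))  # init _ g 16 _ i
```

This reproduces the gold segmentation for the data instance shown above.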
## Additional Information
### Citation Information
```
@inproceedings{inproceedings,
author = {Lawrie, Dawn and Binkley, David and Morrell, Christopher},
year = {2010},
month = {11},
pages = {3 - 12},
title = {Normalizing Source Code Vocabulary},
journal = {Proceedings - Working Conference on Reverse Engineering, WCRE},
doi = {10.1109/WCRE.2010.10}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @inproceedings{li2018helpful,
title={Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF.},
author={Li, Jiechu and Du, Qingfeng and Shi, Kun and He, Yu and Wang, Xin and Xu, Jincheng},
booktitle={SEKE},
pages={175--174},
year={2018}
} | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Jhotdraw is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier. | false | 3 | false | ruanchaves/jhotdraw | 2022-10-20T19:12:53.000Z | null | false | df859ecce54578af17e873cf79438b082632de1d | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:code",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/jhotdraw/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: Jhotdraw
tags:
- word-segmentation
---
# Dataset Card for Jhotdraw
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Jhotdraw is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "abstractconnectorserializeddataversion",
"segmentation": "abstract connector serialized data version"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
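Since identifiers like the one above are all-lowercase, segmentation here cannot rely on case or separators; a tiny dictionary-based greedy baseline illustrates the idea (the word list below is an assumption for this example, not part of the dataset):

```python
def greedy_segment(identifier: str, vocab: set) -> str:
    words, i = [], 0
    while i < len(identifier):
        # Take the longest vocabulary word starting at i; fall back to one character.
        for j in range(len(identifier), i, -1):
            if identifier[i:j] in vocab or j == i + 1:
                words.append(identifier[i:j])
                i = j
                break
    return " ".join(words)

vocab = {"abstract", "connector", "serialized", "data", "version"}
print(greedy_segment("abstractconnectorserializeddataversion", vocab))
# abstract connector serialized data version
```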
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{madani2010recognizing,
title={Recognizing words from source code identifiers using speech recognition techniques},
author={Madani, Nioosha and Guerrouj, Latifa and Di Penta, Massimiliano and Gueheneuc, Yann-Gael and Antoniol, Giuliano},
booktitle={2010 14th European Conference on Software Maintenance and Reengineering},
pages={68--77},
year={2010},
organization={IEEE}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @inproceedings{li2018helpful,
title={Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF.},
author={Li, Jiechu and Du, Qingfeng and Shi, Kun and He, Yu and Wang, Xin and Xu, Jincheng},
booktitle={SEKE},
pages={175--174},
year={2018}
} | In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Lynx is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words in an identifier. | false | 2 | false | ruanchaves/lynx | 2022-10-20T19:12:51.000Z | null | false | 9046da8c9a595ead11d7d243780db677f2ce9618 | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:code",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/lynx/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- code
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
- code-generation
- conditional-text-generation
task_ids: []
pretty_name: Lynx
tags:
- word-segmentation
---
# Dataset Card for Lynx
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Lynx is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
Besides identifier segmentation, the gold labels for this dataset also include abbreviation expansion.
### Languages
- C
## Dataset Structure
### Data Instances
```
{
"index": 3,
"identifier": "abspath",
"segmentation": "abs path",
"expansion": "absolute path",
"spans": {
"text": [
"abs"
],
"expansion": [
"absolute"
],
"start": [
0
],
"end": [
4
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier, without abbreviation expansion.
- `expansion`: the gold segmentation for the identifier, with abbreviation expansion.
- `spans`: the start and end index of each abbreviation, the text of the abbreviation and its corresponding expansion.
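A token-level sketch of applying the `spans` field to recover the `expansion` from the `segmentation` (this sketch ignores the character offsets and simply maps abbreviation tokens to their expansions):

```python
def expand(segmentation: str, spans: dict) -> str:
    mapping = dict(zip(spans["text"], spans["expansion"]))
    # Replace each abbreviated token with its expansion; leave other tokens alone.
    return " ".join(mapping.get(tok, tok) for tok in segmentation.split())

spans = {"text": ["abs"], "expansion": ["absolute"], "start": [0], "end": [4]}
print(expand("abs path", spans))  # absolute path
```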
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
### Citation Information
```
@inproceedings{madani2010recognizing,
title={Recognizing words from source code identifiers using speech recognition techniques},
author={Madani, Nioosha and Guerrouj, Latifa and Di Penta, Massimiliano and Gueheneuc, Yann-Gael and Antoniol, Giuliano},
booktitle={2010 14th European Conference on Software Maintenance and Reengineering},
pages={68--77},
year={2010},
organization={IEEE}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
ruanchaves | null | @inproceedings{celebi2016segmenting,
title={Segmenting hashtags using automatically created training data},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
pages={2981--2985},
year={2016}
} | Automatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper "Segmenting hashtags using automatically created training data". | false | 2 | false | ruanchaves/snap | 2022-10-20T19:12:47.000Z | null | false | dec0e19ff4bab5b5b1a972909b2ea38118644d0f | [] | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"tags:word-segmentation"
] | https://huggingface.co/datasets/ruanchaves/snap/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: SNAP
tags:
- word-segmentation
---
# Dataset Card for SNAP
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting hashtags using automatically created training data](http://www.lrec-conf.org/proceedings/lrec2016/pdf/708_Paper.pdf)
### Dataset Summary
Automatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper "Segmenting hashtags using automatically created training data".
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "BrandThunder",
"segmentation": "Brand Thunder"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
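The paper's heuristic creates training pairs automatically from hashtags whose internal capitalization reveals the segmentation; a rough sketch of that idea (illustrative only, not the authors' exact rules):

```python
import re

def auto_label(hashtag: str):
    # Only mixed-case hashtags like "BrandThunder" self-reveal their segmentation.
    if not re.search(r"[a-z][A-Z]", hashtag):
        return None
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", hashtag)

print(auto_label("BrandThunder"))  # Brand Thunder
print(auto_label("brandthunder"))  # None
```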
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{celebi2016segmenting,
title={Segmenting hashtags using automatically created training data},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
pages={2981--2985},
year={2016}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.
|
rocca | null | null | null | false | 6 | false | rocca/emojis | 2022-04-29T09:37:55.000Z | null | false | 0a295fc67ae9892cf83d9f585fbd5f29330bf502 | [] | [] | https://huggingface.co/datasets/rocca/emojis/resolve/main/README.md | A collection of 38,176 emoji images from Facebook, Google, Apple, WhatsApp, Samsung, [JoyPixels](https://www.joypixels.com/), Twitter, [emojidex](https://www.emojidex.com/), LG, [OpenMoji](https://openmoji.org/), and Microsoft. It includes all the emojis for these apps/platforms as of early 2022.
* Counts: Facebook=3664, Google=3664, Apple=3961, WhatsApp=3519, Samsung=3752, JoyPixels=3538, Twitter=3544, emojidex=2040, LG=3051, OpenMoji=3512, Microsoft=3931.
* Sizes: Facebook=144x144, Google=144x144, Apple=144x144, WhatsApp=144x144, Samsung=108x108, JoyPixels=144x144, Twitter=144x144, emojidex=144x144, LG=136x128, OpenMoji=144x144, Microsoft=144x144.
* The tar files directly contain the image files (they're not inside a parent folder).
* The emoji code points are at the end of the filename, but there are some adjustments needed to parse them into the Unicode character consistently across all sets of emojis in this dataset. Here's some JavaScript code to convert the file name of an emoji image into the actual Unicode emoji character:
```js
let filename = ...;
// Drop any skin-tone modifier from the filename, then collapse the doubled separators left behind
let fixedFilename = filename.replace(/(no|light|medium|medium-light|medium-dark|dark)-skin-tone/, "").replace(/__/, "_").replace(/--/, "-");
// The segment after the first "_" and before the extension is a "-"-separated list of hex code points
let emoji = String.fromCodePoint(...fixedFilename.split("_")[1].split(".")[0].split("-").map(hex => parseInt(hex, 16)));
```
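For Python users, here is a sketch of the same conversion; the filename `apple_1f600.png` below is a hypothetical example of the naming convention described above, not necessarily a real file in the archives:

```python
import re

def filename_to_emoji(filename: str) -> str:
    # Mirror the JavaScript snippet above: strip skin-tone modifiers,
    # collapse doubled separators, then decode the hex code points.
    fixed = re.sub(r"(no|light|medium|medium-light|medium-dark|dark)-skin-tone", "", filename, count=1)
    fixed = fixed.replace("__", "_", 1).replace("--", "-", 1)
    codepoints = fixed.split("_")[1].split(".")[0].split("-")
    return "".join(chr(int(h, 16)) for h in codepoints)

print(filename_to_emoji("apple_1f600.png"))  # grinning face emoji
```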
## Facebook examples:

## Google examples:

## Apple examples:

## WhatsApp examples:

## Samsung examples:

## JoyPixels examples:

## Twitter examples:

## emojidex examples:

## LG examples:

## OpenMoji examples:

## Microsoft examples:
 |
Carlisle | null | null | null | false | 3 | false | Carlisle/msmarco-passage-non-abs | 2022-03-06T18:40:15.000Z | null | false | b6ac7236577e02ea792277816649217bd6068381 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Carlisle/msmarco-passage-non-abs/resolve/main/README.md | ---
license: mit
---
|
Carlisle | null | null | null | false | 3 | false | Carlisle/msmarco-passage-abs | 2022-03-06T20:04:45.000Z | null | false | 207e3206c2b03cfd98e167d1f2588c7412e37f6b | [] | [
"license:mit"
] | https://huggingface.co/datasets/Carlisle/msmarco-passage-abs/resolve/main/README.md | ---
license: mit
---
|
gustavecortal | null | null | null | false | 18 | false | gustavecortal/fr_covid_news | 2022-10-20T19:01:24.000Z | null | false | 72047fee5890ca82c752902aedb138cc72c6fb96 | [] | [
"annotations_creators:machine-generated",
"language_creators:found",
"language:fr",
"language_bcp47:fr-FR",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:topic-classification",
"task_id... | https://huggingface.co/datasets/gustavecortal/fr_covid_news/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- fr
language_bcp47:
- fr-FR
license:
- unknown
multilinguality:
- monolingual
pretty_name: COVID-19 French News dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
- sequence-modeling
- conditional-text-generation
task_ids:
- topic-classification
- multi-label-classification
- multi-class-classification
- language-modeling
- summarization
- other-stuctured-to-text
---
# Dataset Card for COVID-19 French News dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The COVID-19 French News dataset is a French-language dataset containing just over 40k unique news articles from more than 50 different French-speaking online newspapers. The dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an integrated web crawler and information extractor for news. The current version supports abstractive summarization and topic classification. This dataset card is not finished yet.
### Languages
The text in the dataset is in French.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `title`: title of the article
- `description`: description or a summary of the article
- `text`: the actual article text in raw form
- `domain`: source domain of the article (i.e. lemonde.fr)
- `url`: article URL, the original URL where it was scraped
- `labels`: classification labels
### Data Splits
The COVID-19 French News dataset has only a training set, i.e. it has to be loaded with the train split specified: `fr_covid_news = load_dataset('gustavecortal/fr_covid_news', split="train")`
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
### Annotations
#### Annotation process
[More Information Needed]
### Personal and Sensitive Information
As one can imagine, the data contains mentions of contemporary public figures and individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help researchers develop better French topic classification and abstractive summarization models for news related to COVID-19.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The data was originally collected by Gustave Cortal (gustavecortal@gmail.com)
### Licensing Information
Usage of the dataset is restricted to non-commercial research purposes only.
### Citation Information
```
@dataset{fr_covid_news,
author = {Gustave Cortal},
year = {2022},
title = {COVID-19 - French News Dataset},
url = {https://www.gustavecortal.com}
}
```
### Contributions
[@gustavecortal](https://github.com/gustavecortal) |
FinScience | null | null | null | false | 3 | false | FinScience/FS-distilroberta-fine-tuned | 2022-10-25T10:02:42.000Z | null | false | e5322fec79e6702f69d79829efdc7853f1853802 | [] | [
"language:en"
] | https://huggingface.co/datasets/FinScience/FS-distilroberta-fine-tuned/resolve/main/README.md | ---
language:
- en
---
---
annotations_creators:
- crowdsourced
languages:
- en
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
--- |
Carlisle | null | null | null | false | 3 | false | Carlisle/msmacro-test | 2022-03-11T00:19:32.000Z | null | false | d2ae9ace717cb0ac375fb3b2c14d2bb5205da8a8 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Carlisle/msmacro-test/resolve/main/README.md | ---
license: mit
---
|
Carlisle | null | null | null | false | 2 | false | Carlisle/msmacro-passage-non-abs-small | 2022-03-07T18:19:10.000Z | null | false | 8b0ee369302c23871e42335fe72e76622f486fdf | [] | [
"license:mit"
] | https://huggingface.co/datasets/Carlisle/msmacro-passage-non-abs-small/resolve/main/README.md | ---
license: mit
---
|
Carlisle | null | null | null | false | 3 | false | Carlisle/msmacro-test-corpus | 2022-03-11T00:13:14.000Z | null | false | 18ce5e787650a1f682fec9588df0cc463a984f0e | [] | [
"license:mit"
] | https://huggingface.co/datasets/Carlisle/msmacro-test-corpus/resolve/main/README.md | ---
license: mit
---
|
pensieves | null | @inproceedings{khetan-etal-2022-mimicause,
title={MIMICause: Representation and automatic extraction of causal relation types from clinical notes},
author={Vivek Khetan and Md Imbesat Hassan Rizvi and Jessica Huber and Paige Bartusiak and Bogdan Sacaleanu and Andrew Fano},
booktitle ={Findings of the Association for Computational Linguistics: ACL 2022},
month={may},
year={2022},
publisher={Association for Computational Linguistics},
address={Dublin, The Republic of Ireland},
url={},
doi={},
pages={},
} | MIMICause Dataset: A dataset for representation and automatic extraction of causal relation types from clinical notes.
The dataset has 2714 samples having both explicit and implicit causality in which entities are in the same sentence or different sentences.
The dataset has the following nine semantic causal relations (with directionality) between entities E1 and E2 in a text snippet:
(1) Cause(E1,E2)
(2) Cause(E2,E1)
(3) Enable(E1,E2)
(4) Enable(E2,E1)
(5) Prevent(E1,E2)
(6) Prevent(E2,E1)
(7) Hinder(E1,E2)
(8) Hinder(E2,E1)
(9) Other | false | 3 | false | pensieves/mimicause | 2022-03-29T14:54:48.000Z | null | false | 87615eac7add0a10355c50b25b5cff17e782cad3 | [] | [
"arxiv:2110.07090",
"license:apache-2.0"
] | https://huggingface.co/datasets/pensieves/mimicause/resolve/main/README.md | ---
license: apache-2.0
pretty_name: MIMICause
---
# Dataset Card for "MIMICause"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additinal-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/)
- **Paper:** [MIMICause: Representation and automatic extraction of causal relation types from clinical notes](https://arxiv.org/abs/2110.07090)
- **Size of downloaded dataset files:** 333.4 KB
- **Size of the generated dataset:** 491.2 KB
- **Total amount of disk used:** 668.2 KB
### Dataset Summary
MIMICause Dataset is a dataset for representation and automatic extraction of causal relation types from clinical notes. The MIMICause dataset requires manual download of the mimicause.zip file from the **Community Annotations Downloads** section of the n2c2 dataset on the [Harvard's DBMI Data Portal](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/) after signing their agreement forms, which is a quick and easy procedure.
The dataset has 2714 samples having both explicit and implicit causality in which entities are in the same sentence or different sentences. The nine semantic causal relations (with directionality) between entities E1 and E2 in a text snippet are -- (1) Cause(E1,E2) (2) Cause(E2,E1) (3) Enable(E1,E2) (4) Enable(E2,E1) (5) Prevent(E1,E2) (6) Prevent(E2,E1) (7) Hinder(E1,E2) (8) Hinder(E2,E1) (9) Other.
### Supported Tasks
Causal relation extraction between entities expressed implicitly or explicitly, in single or across multiple sentences.
## Dataset Structure
### Data Instances
An example of a data sample looks as follows:
```
{
"E1": "Florinef",
"E2": "fluid retention",
"Text": "Treated with <e1>Florinef</e1> in the past, was d/c'd due to <e2>fluid retention</e2>.",
"Label": 0
}
```
### Data Fields
The data fields are the same among all the splits.
- `E1`: a `string` value.
- `E2`: a `string` value.
- `Text`: a `large_string` value.
- `Label`: a `ClassLabel` categorical value.
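The entity markers in `Text` can be pulled out with a small regex helper (a sketch; it assumes exactly one `<e1>` and one `<e2>` pair per sample, as in the example above):

```python
import re

def extract_entities(text: str):
    e1 = re.search(r"<e1>(.*?)</e1>", text).group(1)
    e2 = re.search(r"<e2>(.*?)</e2>", text).group(1)
    return e1, e2

sample = "Treated with <e1>Florinef</e1> in the past, was d/c'd due to <e2>fluid retention</e2>."
print(extract_entities(sample))  # ('Florinef', 'fluid retention')
```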
### Data Splits
The original dataset that gets downloaded from the [Harvard's DBMI Data Portal](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/) has all the data in a single split. The dataset loading script provided here through Hugging Face Datasets splits the data into the following train, validation and test splits for convenience.
| name |train|validation|test|
|---------|----:|---------:|---:|
|mimicause| 1953| 489 | 272|
## Additional Information
### Citation Information
```
@inproceedings{khetan-etal-2022-mimicause,
title={MIMICause: Representation and automatic extraction of causal relation types from clinical notes},
author={Vivek Khetan and Md Imbesat Hassan Rizvi and Jessica Huber and Paige Bartusiak and Bogdan Sacaleanu and Andrew Fano},
booktitle ={Findings of the Association for Computational Linguistics: ACL 2022},
month={may},
year={2022},
publisher={Association for Computational Linguistics},
address={Dublin, The Republic of Ireland},
url={},
doi={},
pages={},
}
``` |
z-uo | null | null | null | false | 2 | false | z-uo/qasper-squad | 2022-10-25T10:02:49.000Z | null | false | 86d2ca7da33fbef822c6a0786c12eaa8cb3772fa | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"language_bcp47:en-US"
] | https://huggingface.co/datasets/z-uo/qasper-squad/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- question-answering
task_ids:
- closed-domain-qa
pretty_name: qasper-squad
language_bcp47:
- en-US
---
# Qasper in SQuAD format
This dataset is a conversion of the [qasper](https://huggingface.co/datasets/qasper) dataset into SQuAD format.
shpotes | null | @inproceedings{BehrendtNovak2017ICRA,
title={A Deep Learning Approach to Traffic Lights: Detection, Tracking, and Classification},
author={Behrendt, Karsten and Novak, Libor},
booktitle={Robotics and Automation (ICRA), 2017 IEEE International Conference on},
organization={IEEE}
} | This dataset contains 13427 camera images at a resolution of 1280x720 pixels and about
24000 annotated traffic lights. The annotations include bounding boxes of traffic lights as well
as the current state (active light) of each traffic light. The camera images are provided as raw
12-bit HDR images taken with a red-clear-clear-blue filter and as reconstructed 8-bit RGB color
images. The RGB images are provided for debugging and can also be used for training. However, the
RGB conversion process has some drawbacks. Some of the converted images may contain artifacts and
the color distribution may seem unusual. | false | 2 | false | shpotes/bosch-small-traffic-lights-dataset | 2022-03-10T20:00:45.000Z | null | false | b333b72d400f6b4a23fd33524065cb732b372c8a | [] | [
"license:other"
] | https://huggingface.co/datasets/shpotes/bosch-small-traffic-lights-dataset/resolve/main/README.md | ---
license: other
---
|
Carlosholivan | null | null | null | false | 3 | false | Carlosholivan/base | 2022-03-08T18:14:11.000Z | null | false | abab96a91ef584e7da293226844f0eaafb9498b7 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/Carlosholivan/base/resolve/main/README.md | ---
license: apache-2.0
---
|
SocialGrep | null | null | This dataset follows the notorious subreddit /r/Antiwork, a place for many Redditors to share resources and discuss grievances with the current labour market. | false | 2 | false | SocialGrep/the-antiwork-subreddit-dataset | 2022-07-01T17:57:34.000Z | null | false | 4a906f0b97bc7341bfc5d4453ae23a78edefc0b3 | [] | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original"
] | https://huggingface.co/datasets/SocialGrep/the-antiwork-subreddit-dataset/resolve/main/README.md | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-antiwork-subreddit-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-antiwork-subreddit-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theantiworksubredditdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theantiworksubredditdataset)
### Dataset Summary
This corpus contains the complete activity data of the /r/Antiwork subreddit up to 2022-02-18.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Because the two have different schemas, they are stored in two separate files, although many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
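Since posts and comments live in separate files but share most fields, a common first step is to normalize both record types into one shape. The sketch below is not part of the official dataset tooling; the sample records and the `normalize` helper are hypothetical illustrations of the schema described above.

```python
# Minimal sketch: merge post and comment records that share common fields
# but carry type-specific extras. Sample records are hypothetical.
POST_ONLY = {"domain", "url", "selftext", "title"}
COMMENT_ONLY = {"body", "sentiment"}

def normalize(record):
    """Keep shared fields at the top level, move type-specific ones under 'extra'."""
    extra_keys = POST_ONLY if record["type"] == "post" else COMMENT_ONLY
    shared = {k: v for k, v in record.items() if k not in POST_ONLY | COMMENT_ONLY}
    shared["extra"] = {k: record[k] for k in extra_keys if k in record}
    return shared

records = [
    {"type": "post", "id": "abc123", "score": 42, "title": "Example post"},
    {"type": "comment", "id": "def456", "score": 7, "body": "Example comment",
     "sentiment": 0.5},
]

merged = [normalize(r) for r in records]
```

This keeps the shared fields (`type`, `id`, `score`, …) queryable uniformly while preserving the type-specific ones.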
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] |
laion | null | null | null | false | 177 | false | laion/laion2B-en | 2022-03-09T00:25:22.000Z | null | false | 9d1b74d39b6517383b2a2152ae2772888b594e45 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-en/resolve/main/README.md | ---
license: cc-by-4.0
---
|
christianloyal | null | null | null | false | 2 | false | christianloyal/loyal_clinc_MLE | 2022-03-10T17:50:54.000Z | null | false | 90b930b5609f5f668c765a5d23f9610d5d0dbcf1 | [] | [
"license:mit"
] | https://huggingface.co/datasets/christianloyal/loyal_clinc_MLE/resolve/main/README.md | ---
license: mit
---
Dataset for Loyal Health Inc Software Engineer Machine Learning Interview |
laion | null | null | null | false | 74 | false | laion/laion2B-multi | 2022-03-09T03:46:58.000Z | null | false | fc4613eeec55c60d113ac9cab58dca7c3e12523e | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-multi/resolve/main/README.md | ---
license: cc-by-4.0
---
|
hadehuang | null | null | null | false | 1 | false | hadehuang/testdataset | 2022-03-09T08:24:49.000Z | null | false | 1b9776677fd2d5b21056e200089942709d0c3206 | [] | [] | https://huggingface.co/datasets/hadehuang/testdataset/resolve/main/README.md | This is my first dataset |
khcy82dyc | null | null | null | false | 2 | false | khcy82dyc/zzzz | 2022-03-09T11:03:58.000Z | null | false | 59566ca6c10db39a863bef6d894e095e85e5c930 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/khcy82dyc/zzzz/resolve/main/README.md | ---
license: apache-2.0
---
|
ai4bharat | null | @inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
} | This is the paraphrasing dataset released as part of IndicNLG Suite. Each
input is paired with up to 5 references. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 5.57M. | false | 3 | false | ai4bharat/IndicParaphrase | 2022-10-13T06:08:55.000Z | null | false | d74c67aec2ac5a2f561bcb30aa8e1fc7d7d88b92 | [] | [
"arxiv:2203.05437",
"annotations_creators:no-annotation",
"language_creators:found",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"multilingua... | https://huggingface.co/datasets/ai4bharat/IndicParaphrase/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicParaphrase
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-paraphrase-generation
---
# Dataset Card for "IndicParaphrase"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicParaphrase is the paraphrasing dataset released as part of IndicNLG Suite. Each
input is paired with up to 5 references. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, and te. The total
size of the dataset is 5.57M.
### Supported Tasks and Leaderboards
**Tasks:** Paraphrase generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One example from the `hi` dataset is given below in JSON format.
```
{
'id': '1',
'input': 'निजी क्षेत्र में प्रदेश की 75 प्रतिशत नौकरियां हरियाणा के युवाओं के लिए आरक्षित की जाएगी।',
'references': ['प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।',
'युवाओं के लिए हरियाणा की सभी प्राइवेट नौकरियों में 75 प्रतिशत आरक्षण लागू किया जाएगा।',
'निजी क्षेत्र में 75 प्रतिशत आरक्षित लागू कर प्रदेश के युवाओं का रोजगार सुनिश्चत किया जाएगा।',
'प्राईवेट कम्पनियों में हरियाणा के नौजवानों को 75 प्रतिशत नौकरियां में आरक्षित की जाएगी।',
'प्रदेश की प्राइवेट फैक्टरियों में 75 फीसदी रोजगार हरियाणा के युवाओं के लिए आरक्षित किए जाएंगे।'],
'target': 'प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `pivot (string)`: English sentence used as the pivot
- `input (string)`: Input sentence
- `references (list of strings)`: Paraphrases of `input`, ordered by increasing n-gram overlap with the input (the most dissimilar paraphrase comes first)
- `target (string)`: The first reference (most dissimilar paraphrase)
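The ordering criterion for `references` can be sketched as follows. This is an illustrative re-implementation, not the paper's exact pipeline; the function names and the bigram-overlap measure are our own assumptions.

```python
# Illustrative sketch: rank candidate paraphrases by n-gram overlap with the
# input, least overlap first, so that references[0] (the `target`) is the
# most dissimilar paraphrase. Not the paper's exact method.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap(source, candidate, n=2):
    """Fraction of the source's n-grams that also occur in the candidate."""
    src, cand = ngrams(source.split(), n), ngrams(candidate.split(), n)
    return len(src & cand) / len(src) if src else 0.0

def order_references(input_text, candidates):
    # Ascending overlap: most dissimilar paraphrase first.
    return sorted(candidates, key=lambda c: overlap(input_text, c))

refs = order_references(
    "the cat sat on the mat",
    ["the cat sat on a rug",
     "a feline rested on the mat",
     "the cat sat on the mat today"],
)
```

Here the near-copy of the input sorts last, while the most reworded candidate sorts first and would serve as the `target`.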
### Data Splits
We first select 10K instances each for the validation and test sets and put the remaining instances in the training set. `Assamese (as)`, due to its low-resource nature, could only be split into validation and test sets with 4,420 examples each.
Individual dataset with train-dev-test example counts are given below:
Language | ISO 639-1 Code | Train | Dev | Test |
----------|----------------|---------|--------|--------|
Assamese | as | - | 4,420 | 4,420 |
Bengali | bn | 890,445 | 10,000 | 10,000 |
Gujarati | gu | 379,202 | 10,000 | 10,000 |
Hindi | hi | 929,507 | 10,000 | 10,000 |
Kannada | kn | 522,148 | 10,000 | 10,000 |
Malayalam | ml | 761,933 | 10,000 | 10,000 |
Marathi | mr | 406,003 | 10,000 | 10,000 |
Oriya | or | 105,970 | 10,000 | 10,000 |
Punjabi | pa | 266,704 | 10,000 | 10,000 |
Tamil | ta | 497,798 | 10,000 | 10,000 |
Telugu | te | 596,283 | 10,000 | 10,000 |
## Dataset Creation
### Curation Rationale
[More information needed]
### Source Data
[Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
|
rubrix | null | null | null | false | 3 | false | rubrix/sst2_with_predictions | 2022-09-16T13:23:05.000Z | null | false | 03d5016d18872b209e80fd9eb913225c096defd0 | [] | [] | https://huggingface.co/datasets/rubrix/sst2_with_predictions/resolve/main/README.md | # Comparing model predictions and ground truth labels with Rubrix and Hugging Face
## Build dataset
You can skip this step if you run:
```python
from datasets import load_dataset
import rubrix as rb
ds = rb.DatasetForTextClassification.from_datasets(load_dataset("rubrix/sst2_with_predictions", split="train"))
```
Otherwise, the following cell will run the pipeline over the training set and store labels and predictions.
```python
from datasets import load_dataset
from transformers import pipeline, AutoModelForSequenceClassification
import rubrix as rb
name = "distilbert-base-uncased-finetuned-sst-2-english"
# Need to define id2label because surprisingly the pipeline has uppercase label names
model = AutoModelForSequenceClassification.from_pretrained(name, id2label={0: 'negative', 1: 'positive'})
nlp = pipeline("sentiment-analysis", model=model, tokenizer=name, return_all_scores=True)
dataset = load_dataset("glue", "sst2", split="train")
# batch predict
def predict(example):
return {"prediction": nlp(example["sentence"])}
# add predictions to the dataset
dataset = dataset.map(predict, batched=True).rename_column("sentence", "text")
# build rubrix dataset from hf dataset
ds = rb.DatasetForTextClassification.from_datasets(dataset, annotation="label")
```
```python
# Install Rubrix and start exploring and sharing URLs with interesting subsets, etc.
rb.log(ds, "sst2")
```
```python
ds.to_datasets().push_to_hub("rubrix/sst2_with_predictions")
```
## Analyze mispredictions and ambiguous labels
### With the UI
With Rubrix's UI you can:
- Combine filters and full-text/DSL queries to quickly find important samples
- All URLs contain the app state, so you can share specific dataset regions with collaborators and annotators to work on.
- Sort examples by score, as well as custom metadata fields.

### Programmatically
Let's find all the wrong predictions from Python. This is useful for bulk operations (relabelling, discarding, etc.) as well as for quantitative error analysis.
```python
import pandas as pd
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
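Once a slice like the one above is loaded, a quick per-label error summary needs nothing beyond the standard library. The sketch below assumes records shaped like the rows shown here (a `prediction` list of `(label, score)` pairs sorted by score, plus a gold `annotation` label); the sample records themselves are hypothetical.

```python
from collections import Counter

# Sketch: count mispredictions per gold label. Each record mirrors the slice
# structure above; the top-scoring pair is taken as the model's prediction.
# Sample records are hypothetical.
records = [
    {"prediction": [("negative", 0.94), ("positive", 0.06)], "annotation": "positive"},
    {"prediction": [("positive", 0.75), ("negative", 0.25)], "annotation": "negative"},
    {"prediction": [("negative", 0.66), ("positive", 0.34)], "annotation": "positive"},
]

errors = Counter(
    r["annotation"] for r in records if r["prediction"][0][0] != r["annotation"]
)
```

A skew in these counts (e.g. far more errors on one gold label) is a good signal for which query filters to explore next in the UI.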
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>this particular , anciently demanding métier</td>
<td>[(negative, 0.9386059045791626), (positive, 0.06139408051967621)]</td>
<td>positive</td>
</tr>
<tr>
<th>1</th>
<td>under our skin</td>
<td>[(positive, 0.7508484721183777), (negative, 0.24915160238742828)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>evokes a palpable sense of disconnection , made all the more poignant by the incessant use of cell phones .</td>
<td>[(negative, 0.6634528636932373), (positive, 0.3365470767021179)]</td>
<td>positive</td>
</tr>
<tr>
<th>3</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>4</th>
<td>into a pulpy concept that , in many other hands would be completely forgettable</td>
<td>[(positive, 0.6178210377693176), (negative, 0.3821789622306824)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>transcends ethnic lines .</td>
<td>[(positive, 0.9758220314979553), (negative, 0.024177948012948036)]</td>
<td>negative</td>
</tr>
<tr>
<th>6</th>
<td>is barely</td>
<td>[(negative, 0.9922297596931458), (positive, 0.00777028314769268)]</td>
<td>positive</td>
</tr>
<tr>
<th>7</th>
<td>a pulpy concept that , in many other hands would be completely forgettable</td>
<td>[(negative, 0.9738760590553284), (positive, 0.026123959571123123)]</td>
<td>positive</td>
</tr>
<tr>
<th>8</th>
<td>of hollywood heart-string plucking</td>
<td>[(positive, 0.9889695644378662), (negative, 0.011030420660972595)]</td>
<td>negative</td>
</tr>
<tr>
<th>9</th>
<td>a minimalist beauty and the beast</td>
<td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>the intimate , unguarded moments of folks who live in unusual homes --</td>
<td>[(positive, 0.9967381358146667), (negative, 0.0032618637196719646)]</td>
<td>negative</td>
</tr>
<tr>
<th>11</th>
<td>steals the show</td>
<td>[(negative, 0.8031412363052368), (positive, 0.1968587338924408)]</td>
<td>positive</td>
</tr>
<tr>
<th>12</th>
<td>enough</td>
<td>[(positive, 0.7941301465034485), (negative, 0.2058698982000351)]</td>
<td>negative</td>
</tr>
<tr>
<th>13</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>this is the kind of movie that you only need to watch for about thirty seconds before you say to yourself , ` ah , yes ,</td>
<td>[(negative, 0.7889454960823059), (positive, 0.21105451881885529)]</td>
<td>positive</td>
</tr>
<tr>
<th>15</th>
<td>plunges you into a reality that is , more often then not , difficult and sad ,</td>
<td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>troubled and determined homicide cop</td>
<td>[(negative, 0.6632784008979797), (positive, 0.33672159910202026)]</td>
<td>positive</td>
</tr>
<tr>
<th>18</th>
<td>human nature is a goofball movie , in the way that malkovich was , but it tries too hard</td>
<td>[(positive, 0.5959018468856812), (negative, 0.40409812331199646)]</td>
<td>negative</td>
</tr>
<tr>
<th>19</th>
<td>to watch too many barney videos</td>
<td>[(negative, 0.9909896850585938), (positive, 0.00901023019105196)]</td>
<td>positive</td>
</tr>
</tbody>
</table>
</div>
```python
df.annotation.hist()
```

```python
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko and annotated_as:negative").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>a minimalist beauty and the beast</td>
<td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>3</th>
<td>plunges you into a reality that is , more often then not , difficult and sad ,</td>
<td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>
<td>negative</td>
</tr>
<tr>
<th>4</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>and social commentary</td>
<td>[(positive, 0.7863275408744812), (negative, 0.2136724889278412)]</td>
<td>negative</td>
</tr>
<tr>
<th>6</th>
<td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>
<td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>before pulling the plug on the conspirators and averting an american-russian armageddon</td>
<td>[(positive, 0.6992855072021484), (negative, 0.30071452260017395)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>in tight pants and big tits</td>
<td>[(positive, 0.7850217819213867), (negative, 0.2149781733751297)]</td>
<td>negative</td>
</tr>
<tr>
<th>9</th>
<td>that it certainly does n't feel like a film that strays past the two and a half mark</td>
<td>[(positive, 0.6591460108757019), (negative, 0.3408539891242981)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>actress-producer and writer</td>
<td>[(positive, 0.8167378306388855), (negative, 0.1832621842622757)]</td>
<td>negative</td>
</tr>
<tr>
<th>11</th>
<td>gives devastating testimony to both people 's capacity for evil and their heroic capacity for good .</td>
<td>[(positive, 0.8960123062133789), (negative, 0.10398765653371811)]</td>
<td>negative</td>
</tr>
<tr>
<th>12</th>
<td>deep into the girls ' confusion and pain as they struggle tragically to comprehend the chasm of knowledge that 's opened between them</td>
<td>[(positive, 0.9729612469673157), (negative, 0.027038726955652237)]</td>
<td>negative</td>
</tr>
<tr>
<th>13</th>
<td>a younger lad in zen and the art of getting laid in this prickly indie comedy of manners and misanthropy</td>
<td>[(positive, 0.9875985980033875), (negative, 0.012401451356709003)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>get on a board and , uh , shred ,</td>
<td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>
<td>negative</td>
</tr>
<tr>
<th>15</th>
<td>so preachy-keen and</td>
<td>[(positive, 0.9644021391868591), (negative, 0.035597823560237885)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>
<td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>` christian bale 's quinn ( is ) a leather clad grunge-pirate with a hairdo like gandalf in a wind-tunnel and a simply astounding cor-blimey-luv-a-duck cockney accent . '</td>
<td>[(positive, 0.9713286757469177), (negative, 0.028671346604824066)]</td>
<td>negative</td>
</tr>
<tr>
<th>18</th>
<td>passion , grief and fear</td>
<td>[(positive, 0.9849751591682434), (negative, 0.015024829655885696)]</td>
<td>negative</td>
</tr>
<tr>
<th>19</th>
<td>to keep the extremes of screwball farce and blood-curdling family intensity on one continuum</td>
<td>[(positive, 0.8838250637054443), (negative, 0.11617499589920044)]</td>
<td>negative</td>
</tr>
</tbody>
</table>
</div>
```python
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko and score:{0.99 TO *}").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>3</th>
<td>will no doubt rally to its cause , trotting out threadbare standbys like ` masterpiece ' and ` triumph ' and all that malarkey ,</td>
<td>[(negative, 0.9936562180519104), (positive, 0.006343740504235029)]</td>
<td>positive</td>
</tr>
<tr>
<th>4</th>
<td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>
<td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>somehow manages to bring together kevin pollak , former wrestler chyna and dolly parton</td>
<td>[(negative, 0.9979034662246704), (positive, 0.002096540294587612)]</td>
<td>positive</td>
</tr>
<tr>
<th>6</th>
<td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>
<td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>the bottom line with nemesis is the same as it has been with all the films in the series : fans will undoubtedly enjoy it , and the uncommitted need n't waste their time on it</td>
<td>[(positive, 0.995850682258606), (negative, 0.004149340093135834)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>is genial but never inspired , and little</td>
<td>[(negative, 0.9921030402183533), (positive, 0.007896988652646542)]</td>
<td>positive</td>
</tr>
<tr>
<th>9</th>
<td>heaped upon a project of such vast proportions need to reap more rewards than spiffy bluescreen technique and stylish weaponry .</td>
<td>[(negative, 0.9958089590072632), (positive, 0.004191054962575436)]</td>
<td>positive</td>
</tr>
<tr>
<th>10</th>
<td>than recommended -- as visually bland as a dentist 's waiting room , complete with soothing muzak and a cushion of predictable narrative rhythms</td>
<td>[(negative, 0.9988711476325989), (positive, 0.0011287889210507274)]</td>
<td>positive</td>
</tr>
<tr>
<th>11</th>
<td>spectacle and</td>
<td>[(positive, 0.9941601753234863), (negative, 0.005839805118739605)]</td>
<td>negative</td>
</tr>
<tr>
<th>12</th>
<td>groan and</td>
<td>[(negative, 0.9987359642982483), (positive, 0.0012639997294172645)]</td>
<td>positive</td>
</tr>
<tr>
<th>13</th>
<td>'re not likely to have seen before , but beneath the exotic surface ( and exotic dancing ) it 's surprisingly old-fashioned .</td>
<td>[(positive, 0.9908103942871094), (negative, 0.009189637377858162)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>its metaphors are opaque enough to avoid didacticism , and</td>
<td>[(negative, 0.990602970123291), (positive, 0.00939704105257988)]</td>
<td>positive</td>
</tr>
<tr>
<th>15</th>
<td>by kevin bray , whose crisp framing , edgy camera work , and wholesale ineptitude with acting , tone and pace very obviously mark him as a video helmer making his feature debut</td>
<td>[(positive, 0.9973387122154236), (negative, 0.0026612314395606518)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>evokes the frustration , the awkwardness and the euphoria of growing up , without relying on the usual tropes .</td>
<td>[(positive, 0.9989104270935059), (negative, 0.0010896018939092755)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>, incoherence and sub-sophomoric</td>
<td>[(negative, 0.9962475895881653), (positive, 0.003752368036657572)]</td>
<td>positive</td>
</tr>
<tr>
<th>18</th>
<td>seems intimidated by both her subject matter and the period trappings of this debut venture into the heritage business .</td>
<td>[(negative, 0.9923072457313538), (positive, 0.007692818529903889)]</td>
<td>positive</td>
</tr>
<tr>
<th>19</th>
<td>despite downplaying her good looks , carries a little too much ai n't - she-cute baggage into her lead role as a troubled and determined homicide cop to quite pull off the heavy stuff .</td>
<td>[(negative, 0.9948075413703918), (positive, 0.005192441400140524)]</td>
<td>positive</td>
</tr>
</tbody>
</table>
</div>
```python
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko and score:{* TO 0.6}").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>get on a board and , uh , shred ,</td>
<td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>is , truly and thankfully , a one-of-a-kind work</td>
<td>[(positive, 0.5819814801216125), (negative, 0.41801854968070984)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>starts as a tart little lemon drop of a movie and</td>
<td>[(negative, 0.5641832947731018), (positive, 0.4358167052268982)]</td>
<td>positive</td>
</tr>
<tr>
<th>3</th>
<td>between flaccid satire and what</td>
<td>[(negative, 0.5532692074775696), (positive, 0.44673076272010803)]</td>
<td>positive</td>
</tr>
<tr>
<th>4</th>
<td>it certainly does n't feel like a film that strays past the two and a half mark</td>
<td>[(negative, 0.5386656522750854), (positive, 0.46133431792259216)]</td>
<td>positive</td>
</tr>
<tr>
<th>5</th>
<td>who liked there 's something about mary and both american pie movies</td>
<td>[(negative, 0.5086333751678467), (positive, 0.4913666248321533)]</td>
<td>positive</td>
</tr>
<tr>
<th>6</th>
<td>many good ideas as bad is the cold comfort that chin 's film serves up with style and empathy</td>
<td>[(positive, 0.557632327079773), (negative, 0.44236767292022705)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>about its ideas and</td>
<td>[(positive, 0.518638551235199), (negative, 0.48136141896247864)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>of a sick and evil woman</td>
<td>[(negative, 0.5554516315460205), (positive, 0.4445483684539795)]</td>
<td>positive</td>
</tr>
<tr>
<th>9</th>
<td>though this rude and crude film does deliver a few gut-busting laughs</td>
<td>[(positive, 0.5045541524887085), (negative, 0.4954459071159363)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>to squeeze the action and our emotions into the all-too-familiar dramatic arc of the holocaust escape story</td>
<td>[(negative, 0.5050069093704224), (positive, 0.49499306082725525)]</td>
<td>positive</td>
</tr>
<tr>
<th>11</th>
<td>that throws a bunch of hot-button items in the viewer 's face and asks to be seen as hip , winking social commentary</td>
<td>[(negative, 0.5873904228210449), (positive, 0.41260960698127747)]</td>
<td>positive</td>
</tr>
<tr>
<th>12</th>
<td>'s soulful and unslick</td>
<td>[(positive, 0.5931627750396729), (negative, 0.40683719515800476)]</td>
<td>negative</td>
</tr>
</tbody>
</table>
</div>
```python
from rubrix.metrics.commons import *
```
```python
text_length("sst2", query="predicted:ko").visualize()
```
 |
nthngdy | null | @inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
} | The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\ | false | 57 | false | nthngdy/oscar-mini | 2022-10-25T08:56:37.000Z | oscar | false | e41c9a32ae582f42bbb1fa2858e850f75bb7e9fe | [] | [
"arxiv:2010.14571",
"annotations_creators:no-annotation",
"language_creators:found",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"l... | https://huggingface.co/datasets/nthngdy/oscar-mini/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- arz
- as
- az
- azb
- ba
- be
- bg
- bn
- bo
- br
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mhr
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nds
- ne
- nl
- nn
- 'no'
- or
- os
- pa
- pl
- pnb
- ps
- pt
- ro
- ru
- sa
- sah
- sd
- sh
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- yi
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- oscar
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
pretty_name: OSCAR
---
## WARNING: this dataset is an extract of the OSCAR dataset, published here to simulate the use of the full dataset in low-resource contexts and to debug codebases that would eventually use the original OSCAR dataset.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended to be used to pre-train language models and word representations.
### Languages
All the data is distributed by language; both the original and the deduplicated versions are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from the [fastText one](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, with goclassy's pipeline, one does not have to wait for a whole WET file to download, decompress and classify in order to start downloading and processing the next one: a new file starts downloading and being processed as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
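As an illustration only (goclassy itself is written in Go and does considerably more; this hypothetical Python helper just mirrors the two line-level rules above):

```python
def keep_line(raw: bytes, min_chars: int = 100) -> bool:
    """Pre-classification filter sketch: a line must decode as valid
    UTF-8 and be at least `min_chars` characters long (characters,
    not bytes). Lines failing either test are never classified."""
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False  # invalid UTF-8: discard without classifying
    return len(text) >= min_chars

samples = [
    b"Too short to classify.",      # < 100 characters: discarded
    ("a" * 120).encode("utf-8"),    # long enough and valid: kept
    b"\xff\xfe broken bytes",       # invalid UTF-8: discarded
]
print([keep_line(s) for s in samples])  # [False, True, False]
```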
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
    author = "Ortiz Su{\'a}rez, Pedro Javier  and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
laion | null | null | null | false | 3 | false | laion/laion1B-nolang | 2022-03-09T15:04:35.000Z | null | false | 2ecab88787cb57c38f3c2ddf1da94a9351538769 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion1B-nolang/resolve/main/README.md | ---
license: cc-by-4.0
---
|
drAbreu | null | @article{Krallinger2015TheCC,
title={The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author={Martin Krallinger and Obdulia Rabal and Florian Leitner and Miguel Vazquez and David Salgado and Zhiyong Lu and Robert Leaman and Yanan Lu and Dong-Hong Ji and Daniel M. Lowe and Roger A. Sayle and Riza Theresa Batista-Navarro and Rafal Rak and Torsten Huber and Tim Rockt{\"a}schel and S{\'e}rgio Matos and David Campos and Buzhou Tang and Hua Xu and Tsendsuren Munkhdalai and Keun Ho Ryu and S. V. Ramanan and P. Senthil Nathan and Slavko Zitnik and Marko Bajec and Lutz Weber and Matthias Irmer and Saber Ahmad Akhondi and Jan A. Kors and Shuo Xu and Xin An and Utpal Kumar Sikdar and Asif Ekbal and Masaharu Yoshioka and Thaer M. Dieb and Miji Choi and Karin M. Verspoor and Madian Khabsa and C. Lee Giles and Hongfang Liu and K. E. Ravikumar and Andre Lamurias and Francisco M. Couto and Hong-Jie Dai and Richard Tzong-Han Tsai and C Ata and Tolga Can and Anabel Usie and Rui Alves and Isabel Segura-Bedmar and Paloma Mart{\'i}nez and Julen Oyarz{\'a}bal and Alfonso Valencia},
journal={Journal of Cheminformatics},
year={2015},
volume={7},
pages={S2 - S2}
} | The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain specific corpora on chemical and drug entities. 
The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/ | false | 318 | false | drAbreu/bc4chemd_ner | 2022-10-25T10:02:51.000Z | bc4chemd | false | 2615416d7c8cd65fbd6b2b7094f4136d4f8d9515 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:GitHub",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/drAbreu/bc4chemd_ner/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- GitHub
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: bc4chemd
pretty_name: bc4chemd_ner
---
# Dataset Card for bc4chemd_ner
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/)
- **Repository:** [Github](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BC4CHEMD)
- **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4331692/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
* Token Classification
* Named Entity Recognition
### Languages
- English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no chemical mention, `1` signals the first token of a chemical entity and `2` the subsequent chemical entity tokens.
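As a hedged sketch of how these tags can be consumed downstream (the `decode_spans` helper and the example tokens are hypothetical, not part of the dataset or its loader):

```python
def decode_spans(ner_tags):
    """Convert a 0/1/2 tag sequence into (start, end) token spans,
    end-exclusive. 1 opens a new entity, 2 extends the current one,
    0 closes any open entity."""
    spans, start = [], None
    for i, tag in enumerate(ner_tags):
        if tag == 1:                       # B: close any open span, open a new one
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == 2:                     # I: keep extending the open span
            continue
        elif start is not None:            # O: close the open span
            spans.append((start, i))
            start = None
    if start is not None:                  # entity running to the end
        spans.append((start, len(ner_tags)))
    return spans

# Hypothetical example sentence, not taken from the corpus:
tokens = ["Treatment", "with", "sodium", "chloride", "and", "ibuprofen", "."]
tags   = [0,           0,      1,        2,          0,     1,           0]
for s, e in decode_spans(tags):
    print(" ".join(tokens[s:e]))  # "sodium chloride", then "ibuprofen"
```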
### Data Splits
```python
DatasetDict({
train: Dataset({
features: ['id', 'tokens', 'ner_tags'],
num_rows: 30683
})
validation: Dataset({
features: ['id', 'tokens', 'ner_tags'],
num_rows: 30640
})
test: Dataset({
features: ['id', 'tokens', 'ner_tags'],
num_rows: 26365
})
})
```
## Dataset Creation
### Curation Rationale
The automatic extraction of chemical information from text requires the recognition of chemical
entity mentions as one of its key steps. When developing supervised named entity recognition
(NER) systems, the availability of a large, manually annotated text corpus is desirable.
Furthermore, large corpora permit the robust evaluation and comparison of different
approaches that detect chemicals in documents.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
### Annotations
#### Annotation process
We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a
total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators,
following annotation guidelines specifically defined for this task.
#### Who are the annotators?
Expert chemistry literature curators
### Personal and Sensitive Information
The dataset does not contain this kind of information.
The abstracts of the CHEMDNER corpus were selected to be representative for all
major chemical disciplines. Each of the chemical entity mentions was manually
labeled according to its structure-associated chemical entity mention (SACEM)
class: abbreviation, family, formula, identifier, multiple, systematic and
trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study
between annotators, obtaining a percentage agreement of 91.
### Licensing Information
Unknown
### Citation Information
```latex
@article{Krallinger2015TheCC,
title={The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author={Martin Krallinger and Obdulia Rabal and Florian Leitner and Miguel Vazquez and David Salgado and Zhiyong Lu and Robert Leaman and Yanan Lu and Dong-Hong Ji and Daniel M. Lowe and Roger A. Sayle and Riza Theresa Batista-Navarro and Rafal Rak and Torsten Huber and Tim Rockt{\"a}schel and S{\'e}rgio Matos and David Campos and Buzhou Tang and Hua Xu and Tsendsuren Munkhdalai and Keun Ho Ryu and S. V. Ramanan and P. Senthil Nathan and Slavko Zitnik and Marko Bajec and Lutz Weber and Matthias Irmer and Saber Ahmad Akhondi and Jan A. Kors and Shuo Xu and Xin An and Utpal Kumar Sikdar and Asif Ekbal and Masaharu Yoshioka and Thaer M. Dieb and Miji Choi and Karin M. Verspoor and Madian Khabsa and C. Lee Giles and Hongfang Liu and K. E. Ravikumar and Andre Lamurias and Francisco M. Couto and Hong-Jie Dai and Richard Tzong-Han Tsai and C Ata and Tolga Can and Anabel Usie and Rui Alves and Isabel Segura-Bedmar and Paloma Mart{\'i}nez and Julen Oyarz{\'a}bal and Alfonso Valencia},
journal={Journal of Cheminformatics},
year={2015},
volume={7},
pages={S2 - S2}
}
```
### Contributions
Thanks to [@GamalC](https://github.com/GamalC) for uploading this dataset to GitHub.
|
Non-Residual-Prompting | null | TODO | The task of C2Gen is to both generate commonsensical text which include the given words, and also have the generated text adhere to the given context. | false | 28 | false | Non-Residual-Prompting/C2Gen | 2022-10-25T10:02:58.000Z | null | false | f1cb70125a6b1ad5dd0cc97501476309cf540b3d | [] | [
"arxiv:1911.03705",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:<100K",
"task_categories:text-generation"
] | https://huggingface.co/datasets/Non-Residual-Prompting/C2Gen/resolve/main/README.md | ---
language:
- en
license:
- cc-by-sa-4.0
size_categories:
- <100K
task_categories:
- text-generation
---
# Dataset Card for Contextualized CommonGen(C2Gen)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
  - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [Non-Residual Prompting](https://github.com/FreddeFrallan/Non-Residual-Prompting)
- **Paper:** [Fine-Grained Controllable Text Generation Using Non-Residual Prompting](https://aclanthology.org/2022.acl-long.471)
- **Point of Contact:** [Fredrik Carlsson](mailto:Fredrik.Carlsson@ri.se)
### Dataset Summary
CommonGen [Lin et al., 2020](https://arxiv.org/abs/1911.03705) is a dataset for the constrained text generation task of word inclusion, but the task does not allow a context to be included. Therefore, to complement CommonGen, we provide an extended test set C2Gen [Carlsson et al., 2022](https://aclanthology.org/2022.acl-long.471) where an additional context is provided for each set of target words. The task is therefore reformulated to both generate commonsensical text which includes the given words, and also have the generated text adhere to the given context.
### Languages
English
## Dataset Structure
### Data Instances
{"Context": "The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.", "Words": ["follow", "series", "voice"]}
### Data Fields
- context: the generated text by the model should adhere to this text
- words: the words that should be included in the generated continuation
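A minimal sketch of checking the word-inclusion constraint on a generated continuation (a naive exact-token matcher; the helper name and example continuation are hypothetical, and real evaluations typically also accept inflections of the target words):

```python
import re

def covered_words(generated: str, words):
    """Return the subset of target words that appear as whole tokens
    in the generated text (case-insensitive, exact matches only)."""
    tokens = set(re.findall(r"[a-z]+", generated.lower()))
    return {w for w in words if w.lower() in tokens}

words = ["follow", "series", "voice"]
# Hypothetical model continuation for the instance shown above:
continuation = "They decided to follow the series, moved by the singer's voice."
print(covered_words(continuation, words) == set(words))  # True
```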
### Data Splits
Test
## Dataset Creation
### Curation Rationale
C2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers to focus on methods that do not support context. This is orthogonal to their belief that many application areas necessitate the consideration of surrounding context. Therefore, to complement CommonGen, they provide an extended test set where an additional context is provided for each set of target words.
### Initial Data Collection and Normalization
The dataset was constructed with the help of the crowdsourcing platform Mechanical Turk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a recorded high acceptance rate were allowed to participate. Finally, all contexts were manually verified and fixed in terms of typos and poor quality. Furthermore, we want to raise awareness that C2Gen can contain personal data or offensive content. If you encounter such a sample, please reach out to us.
## Licensing Information
license: cc-by-sa-4.0
|
CLUTRR | null | @article{sinha2019clutrr,
Author = {Koustuv Sinha and Shagun Sodhani and Jin Dong and Joelle Pineau and William L. Hamilton},
Title = {CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text},
Year = {2019},
journal = {Empirical Methods of Natural Language Processing (EMNLP)},
arxiv = {1908.06177}
} | CLUTRR (Compositional Language Understanding and Text-based Relational Reasoning),
a diagnostic benchmark suite, is first introduced in (https://arxiv.org/abs/1908.06177)
to test the systematic generalization and inductive reasoning capabilities of NLU systems. | false | 3 | false | CLUTRR/v1 | 2022-10-25T10:03:19.000Z | null | false | a8158d1fac10864c3424d53662fe63bf7d82dd87 | [] | [
"arxiv:1908.06177",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K"
] | https://huggingface.co/datasets/CLUTRR/v1/resolve/main/README.md | ---
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
---
# Dataset Card for CLUTRR
## Table of Contents
## Dataset Description
### Dataset Summary
**CLUTRR** (**C**ompositional **L**anguage **U**nderstanding and **T**ext-based **R**elational **R**easoning), a diagnostic benchmark suite, was first introduced in (https://arxiv.org/abs/1908.06177) to test the systematic generalization and inductive reasoning capabilities of NLU systems.
The CLUTRR benchmark allows us to test a model’s ability for **systematic generalization** by testing on stories that contain unseen combinations of logical rules, and test for the various forms of **model robustness** by adding different kinds of superfluous noise facts to the stories.
### Dataset Task
CLUTRR contains a large set of semi-synthetic stories involving hypothetical families. The task is to infer the relationship between two family members, whose relationship is not explicitly mentioned in the given story.
Join the CLUTRR community at https://www.cs.mcgill.ca/~ksinha4/clutrr/
## Dataset Structure
We show detailed information for all 14 configurations of the dataset.
### configurations:
**id**: a unique series of characters and numbers that identify each instance <br>
**story**: one semi-synthetic story involving hypothetical families<br>
**query**: the target query/relation which contains two names, where the goal is to classify the relation that holds between these two entities<br>
**target**: indicator for the correct relation for the query <br>
**target_text**: text for the correct relation for the query <br>
the indicator follows the mapping below: <br>
"aunt": 0, "son-in-law": 1, "grandfather": 2, "brother": 3, "sister": 4, "father": 5, "mother": 6, "grandmother": 7, "uncle": 8, "daughter-in-law": 9, "grandson": 10, "granddaughter": 11, "father-in-law": 12, "mother-in-law": 13, "nephew": 14, "son": 15, "daughter": 16, "niece": 17, "husband": 18, "wife": 19, "sister-in-law": 20 <br>
**clean\_story**: the story without noise factors<br>
**proof\_state**: the logical rule of the kinship generation <br>
**f\_comb**: the kinships of the query followed by the logical rule<br>
**task\_name**: the task of the sub-dataset, in the form "task_[num1].[num2]"<br>
The first number [num1] indicates the status of noise facts added in the story: 1- no noise facts; 2- Irrelevant facts*; 3- Supporting facts*; 4- Disconnected facts*.<br>
The second number [num2] directly indicates the length of clauses for the task target.<br>
*for example:*<br>
*task_1.2 -- task requiring clauses of length 2 without adding noise facts*<br>
*task_2.3 -- task requiring clauses of length 3 with Irrelevant noise facts added in the story*<br>
**story\_edges**: all the edges in the kinship graph<br>
**edge\_types**: similar to the f\_comb, another form of the query's kinships followed by the logical rule <br>
**query\_edge**: the corresponding edge of the target query in the kinship graph<br>
**genders**: genders of the names appearing in the story<br>
**task\_split**: train, test <br>
*Further explanation of Irrelevant, Supporting and Disconnected facts can be found in Section 3.5 (Robust Reasoning) of https://arxiv.org/abs/1908.06177
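As a quick sanity check, the `target` index and the `task_name` string can be decoded with a few lines of Python (the label order follows the mapping listed above; the helper names are ours, not part of the dataset):

```python
# Relation labels in index order, as listed in the target mapping above.
RELATIONS = [
    "aunt", "son-in-law", "grandfather", "brother", "sister", "father",
    "mother", "grandmother", "uncle", "daughter-in-law", "grandson",
    "granddaughter", "father-in-law", "mother-in-law", "nephew", "son",
    "daughter", "niece", "husband", "wife", "sister-in-law",
]

def decode_target(target):
    """Map an integer `target` back to its relation string."""
    return RELATIONS[target]

def parse_task_name(task_name):
    """Split 'task_[num1].[num2]' into (noise_setting, clause_length)."""
    num1, num2 = task_name.split("_")[1].split(".")
    return int(num1), int(num2)
```

For instance, `decode_target(7)` gives `"grandmother"` and `parse_task_name("task_2.3")` gives `(2, 3)`.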
### Data Instances
An example from the 'train' split of Task 1.2 looks as follows.
```
{
    "id": "b2b9752f-d7fa-46a9-83ae-d474184c35b6",
    "story": "[Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.",
    "query": ("April", "Ashley"),
    "target": 7,
    "target_text": "grandmother",
    "clean_story": "[Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.",
    "proof_state": [{("April", "grandmother", "Ashley"): [("April", "mother", "Lillian"), ("Lillian", "mother", "Ashley")]}],
    "f_comb": "mother-mother",
    "task_name": "task_1.2",
    "story_edges": [(0, 1), (1, 2)],
    "edge_types": ["mother", "mother"],
    "query_edge": (0, 2),
    "genders": "April:female,Lillian:female,Ashley:female",
    "task_split": "train"
}
```
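The `proof_state` above shows how two kinship edges compose into the queried relation. This can be illustrated with a tiny hand-written composition table (a sketch of ours, not the full rule base used to generate CLUTRR, which is described in the paper):

```python
# Tiny illustrative composition table: how two kinship edges chain into one.
# Hand-written sketch only -- NOT the full CLUTRR rule base.
COMPOSE = {
    ("mother", "mother"): "grandmother",
    ("mother", "father"): "grandfather",
    ("father", "mother"): "grandmother",
    ("father", "father"): "grandfather",
}

def resolve(f_comb):
    """Resolve a two-step `f_comb` chain such as 'mother-mother'."""
    first, second = f_comb.split("-")
    return COMPOSE[(first, second)]
```

For the example record, `resolve("mother-mother")` returns `"grandmother"`, matching its `target_text`.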
### Data Splits
#### Data Split Name
(corresponding to the names used in the paper)
| task_split | split name in paper | train & validation tasks | test tasks |
| :---: | :---: | :-: | :-: |
| gen_train23_test2to10 | data_089907f8 | 1.2, 1.3 | 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 |
| gen_train234_test2to10 | data_db9b8f04 | 1.2, 1.3, 1.4| 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 |
| rob_train_clean_23_test_all_23 | data_7c5b0e70 | 1.2,1.3 | 1.2, 1.3, 2.3, 3.3, 4.3 |
| rob_train_sup_23_test_all_23 | data_06b8f2a1 | 2.2, 2.3 | 2.2, 2.3, 1.3, 3.3, 4.3 |
| rob_train_irr_23_test_all_23 | data_523348e6 | 3.2, 3.3 | 3.2, 3.3, 1.3, 2.3, 4.3 |
| rob_train_disc_23_test_all_23 | data_d83ecc3e | 4.2, 4.3 | 4.2, 4.3, 1.3, 2.3, 3.3 |
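The generalization splits train only on short clause lengths and test on longer, unseen ones. A quick way to see which test tasks are out-of-distribution for a split (split definitions copied from the table above; the helper is ours):

```python
# Train/test task definitions for one generalization split, copied from the
# table above; test tasks absent from training probe systematic generalization.
SPLITS = {
    "gen_train23_test2to10": {
        "train": {"1.2", "1.3"},
        "test": {"1.2", "1.3", "1.4", "1.5", "1.6", "1.7",
                 "1.8", "1.9", "1.10"},
    },
}

def unseen_test_tasks(split_name):
    """Return the test tasks whose clause length never appears in training."""
    split = SPLITS[split_name]
    return split["test"] - split["train"]
```

For `gen_train23_test2to10`, this yields the seven tasks with clause lengths 4 through 10.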
#### Data Split Summary
Number of Instances in each split
| task_split | train | validation | test |
| :-: | :---: | :---: | :---: |
| gen_train23_test2to10 | 9074 | 2020 | 1146 |
| gen_train234_test2to10 | 12064 | 3019 | 1048 |
| rob_train_clean_23_test_all_23 | 8098 | 2026 | 447 |
| rob_train_disc_23_test_all_23 | 8080 | 2020 | 445 |
| rob_train_irr_23_test_all_23 | 8079 | 2020 | 444 |
| rob_train_sup_23_test_all_23 | 8123 | 2031 | 447 |
## Citation Information
```
@article{sinha2019clutrr,
Author = {Koustuv Sinha and Shagun Sodhani and Jin Dong and Joelle Pineau and William L. Hamilton},
Title = {CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text},
Year = {2019},
journal = {Empirical Methods in Natural Language Processing (EMNLP)},
arxiv = {1908.06177}
}
``` |
damlab | null | null | null | false | 4 | false | damlab/uniprot | 2022-03-12T12:08:29.000Z | null | false | 095f98c5853b271b00c05bbe4f2167ecdbe8951f | [] | [
"license:mit"
] | https://huggingface.co/datasets/damlab/uniprot/resolve/main/README.md | ---
license: mit
---
# Dataset Description
## Dataset Summary
This dataset is a mirror of the Uniprot/SwissProt database. It contains the names and sequences of >500K proteins.
This dataset was parsed from the FASTA file at https://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.fasta.gz.
Supported Tasks and Leaderboards: None
Languages: English
## Dataset Structure
### Data Instances
Data Fields: id, description, sequence
Data Splits: None
## Dataset Creation
The dataset was downloaded, parsed into a `Dataset` object, and uploaded unchanged.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 03/09/2022.
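The parsing step can be sketched as a minimal FASTA reader (a rough sketch of ours; the actual parsing script may differ). It assumes the `sp|ACCESSION|ENTRY_NAME description` header convention used by SwissProt:

```python
def parse_fasta(text):
    """Yield {id, description, sequence} records from FASTA-formatted text.

    Minimal reader: the accession is taken from the 'sp|ACCESSION|NAME'
    header convention used by UniProt/SwissProt.
    """
    header, seq_lines = None, []
    for line in text.splitlines() + [">"]:   # sentinel flushes the last record
        if line.startswith(">"):
            if header is not None:
                name, _, description = header.partition(" ")
                record_id = name.split("|")[1] if "|" in name else name
                yield {"id": record_id,
                       "description": description,
                       "sequence": "".join(seq_lines)}
            header, seq_lines = line[1:], []
        elif line.strip():
            seq_lines.append(line.strip())
```

Each yielded record carries the same three fields as the dataset: `id`, `description`, and `sequence`.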
## Considerations for Using the Data
Social Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV.
Protease inhibitors are a class of drugs to which HIV is known to develop resistance via mutations.
Thus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset provides a significant collection of data that could be utilized to perform computational analysis of protease resistance mutations.
Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of genes from "well studied" genomes. This may limit the breadth of the genes contained.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA
|
juched | null | null | null | false | 2 | false | juched/spotifinders | 2022-03-10T01:46:51.000Z | null | false | 4887946743ee9325f7597ddadb72ece8b74a8105 | [] | [] | https://huggingface.co/datasets/juched/spotifinders/resolve/main/README.md | annotations_creators:
- Parth Parekh
languages:
- en
licenses:
- MIT
multilinguality:
- monolingual
size_categories:
- 0<n<100
source_datasets:
- original
task_categories:
- sentence-categorization
# Dataset Card for spotifinders
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
[Needs More Information]
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
juched | null | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | false | 2 | false | juched/spotifinders-dataset | 2022-03-29T00:42:18.000Z | null | false | 29429c80610b9f235148694561358a1bd092c927 | [] | [
"license:mit"
] | https://huggingface.co/datasets/juched/spotifinders-dataset/resolve/main/README.md | ---
license: mit
---
|
PaddlePaddle | null | null | DureaderRobust is a chinese reading comprehension dataset, designed to evaluate the MRC models from three aspects: over-sensitivity, over-stability and generalization. | false | 858 | false | PaddlePaddle/dureader_robust | 2022-03-10T05:14:18.000Z | null | false | 142e3e33e59f6c13239b5b743f16e5bfcfbc9abf | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/PaddlePaddle/dureader_robust/resolve/main/README.md | ---
license: apache-2.0
---
|
kyleinincubated | null | null | null | false | 1 | false | kyleinincubated/autonlp-data-cat33 | 2022-10-25T10:03:04.000Z | null | false | 51f31e2aa96a98b68b3595acca660904a3ffca33 | [] | [
"language:zh",
"task_categories:text-classification"
] | https://huggingface.co/datasets/kyleinincubated/autonlp-data-cat33/resolve/main/README.md | ---
language:
- zh
task_categories:
- text-classification
---
# AutoNLP Dataset for project: cat33
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project cat33.
### Languages
The BCP-47 code for the dataset's language is zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\"\u5341\u56db\u4e94\"\u65f6\u671f\uff0c\u4f9d\u6258\u6d77\u5357\u5730\u7406\u533a\u4f4d\u4f18\u52bf\u548c\u6d77\u6d0b\u8d44\u6e90\u4f18\u52bf\uff0c\u52a0\u5feb\u57f9\u80b2\u58ee\u5927\u6d77\u6d0b\u7ecf\u6d4e\uff0c\u62d3\u5c55\u6d77\u5357\u7ecf\u6d4e\u53d1\u5c55\u84dd\u8272\u7a7a\u95f4\uff0c\u5bf9\u670d\u52a1\u6d77\u6d0b\u5f3a\u56fd\u6218\u7565\u3001\u63a8\u52a8\u6d77\u5357\u81ea\u7531\u8d38\u6613\u6e2f\u5efa\u8bbe\u53ca\u5b9e\u73b0\u81ea\u8eab\u53d1\u5c55\u5177\u6709\u91cd\u8981\u610f\u4e49",
"target": 9
},
{
"text": "\u9010\u6b65\u5b9e\u65bd\u533b\u7597\u5668\u68b0\u552f\u4e00\u6807\u8bc6\uff0c\u52a0\u5f3a\u4e0e\u533b\u7597\u7ba1\u7406\u3001\u533b\u4fdd\u7ba1\u7406\u7b49\u8854\u63a5",
"target": 8
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=32, names=['\u4e92\u8054\u7f51\u670d\u52a1', '\u4ea4\u901a\u8fd0\u8f93', '\u4f11\u95f2\u670d\u52a1', '\u4f20\u5a92', '\u4fe1\u606f\u6280\u672f', '\u516c\u7528\u4e8b\u4e1a', '\u519c\u4e1a', '\u5316\u5de5\u5236\u9020', '\u533b\u836f\u751f\u7269', '\u5546\u4e1a\u8d38\u6613', '\u56fd\u9632\u519b\u5de5', '\u5bb6\u7528\u7535\u5668', '\u5efa\u7b51\u4e1a', '\u623f\u5730\u4ea7', '\u6559\u80b2', '\u6587\u5316', '\u6709\u8272\u91d1\u5c5e', '\u673a\u68b0\u88c5\u5907\u5236\u9020', '\u6797\u4e1a', '\u6c7d\u8f66\u5236\u9020', '\u6e14\u4e1a', '\u7535\u5b50\u5236\u9020', '\u7535\u6c14\u8bbe\u5907', '\u755c\u7267\u4e1a', '\u7eba\u7ec7\u670d\u88c5\u5236\u9020', '\u8f7b\u5de5\u5236\u9020', '\u901a\u4fe1', '\u91c7\u77ff\u4e1a', '\u94a2\u94c1', '\u94f6\u884c', '\u975e\u94f6\u91d1\u878d', '\u98df\u54c1\u996e\u6599'], id=None)"
}
```
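The `target` integer indexes into the `ClassLabel` names above, so decoding it is a list lookup. A minimal sketch (the list below is truncated to the first ten of the 32 classes, in the order shown in the field definition):

```python
# First 10 of the 32 industry labels, in ClassLabel order (see the field
# definition above); the full ClassLabel has 32 entries.
NAMES = [
    "互联网服务", "交通运输", "休闲服务", "传媒", "信息技术",
    "公用事业", "农业", "化工制造", "医药生物", "商业贸易",
]

def decode_label(target):
    """Map a target index back to its class name."""
    return NAMES[target]
```

The two samples above decode to `decode_label(9) == "商业贸易"` and `decode_label(8) == "医药生物"`.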
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1836 |
| valid | 460 |
|
Georgii | null | null | null | false | 8 | false | Georgii/poetry-genre | 2022-03-10T08:12:23.000Z | null | false | ad1f65afa83d161c5860ad126ab75c4287fb6cbe | [] | [] | https://huggingface.co/datasets/Georgii/poetry-genre/resolve/main/README.md | en poems and genres test |
ai4bharat | null | @inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
} | This is the new headline generation dataset released as part of IndicNLG Suite. Each
input document is paired with an output title. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 1.43M. | false | 3 | false | ai4bharat/IndicHeadlineGeneration | 2022-10-13T06:08:20.000Z | null | false | d9845634dc0f9cb48d4a26c9f6d8986fb87d2027 | [] | [
"arxiv:2203.05437",
"annotations_creators:no-annotation",
"language_creators:found",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"multilingua... | https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicHeadlineGeneration
size_categories:
- 27K<n<341K
source_datasets:
- original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-headline-generation
---
# Dataset Card for "IndicHeadlineGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicHeadlineGeneration is the news headline generation dataset released as part of IndicNLG Suite. Each
input document is paired with an output title. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 1.4M.
### Supported Tasks and Leaderboards
**Tasks:** Headline Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '14',
'input': "अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।अरियाना ग्रांडे नई दिल्लीः अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।वहीं इस वीडियो पर कमेंट्स की बाढ़ आ गई है।गाने में मीन गर्ल्स, ब्रिंग इट ऑन, लीगली ब्लॉंड और 13 गोइंग 30 के कुछ फेमस सीन्स को दिखाया गया है।गाने में क्रिस जैनर का कैमियो भी है।बता दें अभी कुछ महीने पहले ही अरियाना के एक्स ब्वॉयफ्रेंड मैक मिलर का 26 साल की उम्र में निधन हो गया था।इस खबर को सुनकर अरियाना टूट सी गई थीं।उन्होंने सोशल मीडिया पर पोस्ट कर कई बार अपनी भावनाएं व्यक्त की।अरियाना ग्रांडे और रैपर मैक मिलर ने करीब 2 साल तक एक दूसरे को डेट किया।मैक के निधन की वजह ड्रग्स की ओवरडोज बताई गई।दोनों की मुलाकात साल 2012 में हुई थी।दोनों ने एक कंसर्ट में साथ कई गानों पर परफॉर्म भी किया था।जिसके बाद दोनों एक दूसरे को डेट करने लगे लेकिन नशे की लत के कारण अरियाना ने उनसे ब्रेकअप कर लिया।पर देश-विदेश की ताजा और स्पेशल स्टोरी पढ़ते हुए अपने आप को रखिए अप-टू-डेट।के लिए क्लिक करें सिनेमा सेक्शन",
'target': 'अरियाना ग्रांडे का नया गाना रिलीज, सोशल मीडिया पर वायरल',
'url': 'https://www.indiatv.in/entertainment/hollywood-ariana-grande-shatters-24-hour-views-record-612835'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: News article as input.
- `target (strings)`: Output as headline of the news article.
- `url (string)`: Source web link of the news article.
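Given the field list above, a record can be sanity-checked with a few assertions (the helper below is ours, not part of the dataset):

```python
def check_record(example):
    """Basic shape checks for an IndicHeadlineGeneration record."""
    for key in ("id", "input", "target", "url"):
        assert key in example, f"missing field: {key}"
    # The target headline should be shorter than the input article.
    assert len(example["target"]) < len(example["input"])
    assert example["url"].startswith("http")
```

Running `check_record` on the `hi` example above passes: its headline is a fraction of the article length and the `url` field is a full web link.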
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 29,631 | 14,592 | 14,808 |
Bengali | bn | 113,424 | 14,739 | 14,568 |
Gujarati | gu | 199,972 | 31,270 | 31,215 |
Hindi | hi | 208,221 | 44,738 | 44,514 |
Kannada | kn | 132,380 | 19,416 | 3,261 |
Malayalam | ml | 10,358 | 5,388 | 5,220 |
Marathi | mr | 114,042 | 14,253 | 14,340 |
Oriya | or | 58,225 | 7,484 | 7,137 |
Punjabi | pa | 48,441 | 6,108 | 6,086 |
Tamil | ta | 60,650 | 7,616 | 7,688 |
Telugu | te | 21,352 | 2,690 | 2,675 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
For Hindi, web sources like [Dainik Bhaskar](https://www.bhaskar.com), [Naidunia](https://www.naidunia.com/), [NDTV](https://ndtv.in/), [Business Standard](https://hindi.business-standard.com/) and [IndiaTV](https://www.indiatv.in/) were used. For other languages, the modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) dataset was used.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
    url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) |
ai4bharat | null | @inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
} | This is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total
size of the dataset is 431K. | false | 16 | false | ai4bharat/IndicSentenceSummarization | 2022-10-13T06:08:31.000Z | null | false | 53cfce5e0ca8da828ee1b6223dcf3ea986582812 | [] | [
"arxiv:2203.05437",
"annotations_creators:no-annotation",
"language_creators:found",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"multilingua... | https://huggingface.co/datasets/ai4bharat/IndicSentenceSummarization/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicSentenceSummarization
size_categories:
- 5K<n<112K
source_datasets:
- original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-sentence-summarization
---
# Dataset Card for "IndicSentenceSummarization"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicSentenceSummarization is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 431K.
### Supported Tasks and Leaderboards
**Tasks:** Sentence Summarization
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '5',
'input': 'जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया।',
'target': 'जम्मू-कश्मीर : सुरक्षाबलों के साथ मुठभेड़ में 2 आतंकवादी ढेर',
'url': 'https://www.indiatv.in/india/national-jammu-kashmir-two-millitant-killed-in-encounter-with-security-forces-574529'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: Input sentence.
- `target (strings)`: Output summary.
- `url (string)`: Source web link of the sentence.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 10,812 | 5,232 | 5,452 |
Bengali | bn | 17,035 | 2,355 | 2,384 |
Gujarati | gu | 54,788 | 8,720 | 8,460 |
Hindi | hi | 78,876 | 16,935 | 16,835 |
Kannada | kn | 61,220 | 9,024 | 1,485 |
Malayalam | ml | 2,855 | 1,520 | 1,580 |
Marathi | mr | 27,066 | 3,249 | 3,309 |
Oriya | or | 12,065 | 1,539 | 1,440 |
Punjabi | pa | 31,630 | 4,004 | 3,967 |
Tamil | ta | 23,098 | 2,874 | 2,948 |
Telugu | te | 7,119 | 878 | 862 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
It is a modified subset of [IndicHeadlineGeneration](https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration) dataset.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
    url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) |
ai4bharat | null | @inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
} | This is the WikiBio dataset released as part of IndicNLG Suite. Each
example has four fields: id, infobox, serialized infobox and summary. We create this dataset in nine
languages including as, bn, hi, kn, ml, or, pa, ta, te. The total
size of the dataset is 57,426. | false | 2 | false | ai4bharat/IndicWikiBio | 2022-10-13T06:08:34.000Z | null | false | 9b177ff8d3eeaf8d07d2918546e9b79ee655e29b | [] | [
"arxiv:2203.05437",
"annotations_creators:no-annotation",
"language_creators:found",
"language:as",
"language:bn",
"language:hi",
"language:kn",
"language:ml",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"multilinguality:multilingual",
"size_catego... | https://huggingface.co/datasets/ai4bharat/IndicWikiBio/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- hi
- kn
- ml
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicWikiBio
size_categories:
- 1960<n<11,502
source_datasets:
- none. Originally generated from www.wikimedia.org.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-wikibio
---
# Dataset Card for "IndicWikiBio"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicWikiBio is the WikiBio dataset released as part of IndicNLG Suite. Each
example has four fields: id, infobox, serialized infobox and summary. We create this dataset in nine
languages: as, bn, hi, kn, ml, or, pa, ta, te. The total
size of the dataset is 57,426 examples.
### Supported Tasks and Leaderboards
**Tasks:** WikiBio
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 26,
"infobox": "name_1:सी॰\tname_2:एल॰\tname_3:रुआला\toffice_1:सांसद\toffice_2:-\toffice_3:मिजोरम\toffice_4:लोक\toffice_5:सभा\toffice_6:निर्वाचन\toffice_7:क्षेत्र\toffice_8:।\toffice_9:मिजोरम\tterm_1:2014\tterm_2:से\tterm_3:2019\tnationality_1:भारतीय",
"serialized_infobox": "<TAG> name </TAG> सी॰ एल॰ रुआला <TAG> office </TAG> सांसद - मिजोरम लोक सभा निर्वाचन क्षेत्र । मिजोरम <TAG> term </TAG> 2014 से 2019 <TAG> nationality </TAG> भारतीय",
"summary": "सी॰ एल॰ रुआला भारत की सोलहवीं लोक सभा के सांसद हैं।"
}
```
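The `serialized_infobox` field can be recovered from the raw `infobox` field by grouping cells that share a field name (the `_1`, `_2`, ... suffixes indicate token positions) and wrapping each field name in `<TAG>` markers. A minimal sketch of this transformation, not the official preprocessing script:

```python
def serialize_infobox(infobox: str) -> str:
    """Turn tab-separated "field_N:value" cells into the <TAG> field </TAG> form."""
    parts, last_field = [], None
    for cell in infobox.split("\t"):
        key, _, value = cell.partition(":")
        field = key.rsplit("_", 1)[0]  # drop the _1, _2, ... position suffix
        if field != last_field:
            parts.append(f"<TAG> {field} </TAG>")
            last_field = field
        parts.append(value)
    return " ".join(parts)

print(serialize_infobox("name_1:A\tname_2:B\tterm_1:2014\tterm_2:X"))
# prints: <TAG> name </TAG> A B <TAG> term </TAG> 2014 X
```

Applied to the `hi` example above, this reproduces its `serialized_infobox` string.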
### Data Fields
- `id (string)`: Unique identifier.
- `infobox (string)`: Raw Infobox.
- `serialized_infobox (string)`: Serialized Infobox as input.
- `summary (string)`: Summary of Infobox/First line of Wikipedia page.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Test | Val |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 1,300 | 391 | 381 |
Bengali | bn | 4,615 | 1,521 | 1,567 |
Hindi | hi | 5,684 | 1,919 | 1,853 |
Kannada | kn | 1,188 | 389 | 383 |
Malayalam | ml | 5,620 | 1,835 | 1,896 |
Oriya | or | 1,687 | 558 | 515 |
Punjabi | pa | 3,796 | 1,227 | 1,331 |
Tamil | ta | 8,169 | 2,701 | 2,632 |
Telugu | te | 2,594 | 854 | 820 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
None
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
|
ai4bharat | null | @inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
} | This is the Question Generation dataset released as part of IndicNLG Suite. Each
example has five fields: id, squad_id, answer, context and question. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. This is translated data; the examples are parallel, i.e. the same content in different languages.
The number of examples in each language is 98,027. | false | 13 | false | ai4bharat/IndicQuestionGeneration | 2022-10-13T06:08:25.000Z | null | false | 3c9cfa7c513097aa3e475ad34d8578c52b48514f | [] | [
"arxiv:2203.05437",
"annotations_creators:no-annotation",
"language_creators:found",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"multilingua... | https://huggingface.co/datasets/ai4bharat/IndicQuestionGeneration/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicQuestionGeneration
size_categories:
- 98K<n<98K
source_datasets:
- we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-question-generation
---
# Dataset Card for "IndicQuestionGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicQuestionGeneration is the question generation dataset released as part of IndicNLG Suite. Each
example has five fields: id, squad_id, answer, context and question. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. This is translated data; the examples are parallel, i.e. the same content in different languages.
The number of examples in each language is 98,027.
### Supported Tasks and Leaderboards
**Tasks:** Question Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 8,
"squad_id": "56be8e613aeaaa14008c90d3",
"answer": "अमेरिकी फुटबॉल सम्मेलन",
"context": "अमेरिकी फुटबॉल सम्मेलन (एएफसी) के चैंपियन डेनवर ब्रोंकोस ने नेशनल फुटबॉल कांफ्रेंस (एनएफसी) की चैंपियन कैरोलिना पैंथर्स को 24-10 से हराकर अपना तीसरा सुपर बाउल खिताब जीता।",
"question": "एएफसी का मतलब क्या है?"
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `squad_id (string)`: Unique identifier in the SQuAD dataset.
- `answer (string)`: Answer, one of the two inputs.
- `context (string)`: Context, the other input.
- `question (string)`: Question, the output.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 69,979 | 17,495 | 10,553 |
Bengali | bn | 69,979 | 17,495 | 10,553 |
Gujarati | gu | 69,979 | 17,495 | 10,553 |
Hindi | hi | 69,979 | 17,495 | 10,553 |
Kannada | kn | 69,979 | 17,495 | 10,553 |
Malayalam | ml | 69,979 | 17,495 | 10,553 |
Marathi | mr | 69,979 | 17,495 | 10,553 |
Oriya | or | 69,979 | 17,495 | 10,553 |
Punjabi | pa | 69,979 | 17,495 | 10,553 |
Tamil | ta | 69,979 | 17,495 | 10,553 |
Telugu | te | 69,979 | 17,495 | 10,553 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
[SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) |
aasd291809733 | null | null | null | false | 2 | false | aasd291809733/myself | 2022-03-10T13:46:37.000Z | null | false | bb3d15353a87a2b256ffb6abc5fa0436b4333b30 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/aasd291809733/myself/resolve/main/README.md | ---
license: apache-2.0
---
|
Mulin | null | null | This dataset contains holiday information for Singapore from 2017 to 2022. | false | 1 | false | Mulin/sg-holiday | 2022-03-14T10:44:11.000Z | null | false | 9e3533eec643aebede8aaa7ea781c9b58f721dd8 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Mulin/sg-holiday/resolve/main/README.md | ---
license: mit
---
Singapore's holiday data from 2017 to 2022. |
Biomedical-TeMU | null | null | null | false | 3 | false | Biomedical-TeMU/ProfNER_corpus_classification | 2022-03-10T21:24:30.000Z | null | false | f5ee87052fbba38c7e0a49a4dad24724ed97302f | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Biomedical-TeMU/ProfNER_corpus_classification/resolve/main/README.md | ---
license: cc-by-4.0
---
|
Biomedical-TeMU | null | null | null | false | 3 | false | Biomedical-TeMU/ProfNER_corpus_NER | 2022-03-10T21:50:30.000Z | null | false | de9bf1404880f4b7225e1cc0e9268192e57fefca | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Biomedical-TeMU/ProfNER_corpus_NER/resolve/main/README.md | ---
license: cc-by-4.0
---
## Description
**Gold standard annotations for profession detection in Spanish COVID-19 tweets**
The entire corpus contains 10,000 annotated tweets. It has been split into training, validation, and test sets (60-20-20). The current version contains the training and development sets of the shared task with Gold Standard annotations. In addition, the unannotated test and background sets will be released.
For the Named Entity Recognition (profession detection) subtask, annotations are distributed in 2 formats: Brat standoff and TSV. See the Brat webpage for more information about the Brat standoff format (https://brat.nlplab.org/standoff.html).
The TSV format follows the format employed in SMM4H 2019 Task 2:
tweet_id | begin | end | type | extraction
In addition, we provide a tokenized version of the dataset. It follows the BIO format (similar to CoNLL). The files were generated with the brat_to_conll.py script (included), which employs the es_core_news_sm-2.3.1 spaCy model for tokenization.
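As a rough illustration of how the span annotations (begin/end character offsets) map onto BIO tags, the sketch below projects spans onto whitespace tokens. It is a toy substitute for the included brat_to_conll.py script (which uses spaCy tokenization); the Spanish text, offsets, and label below are invented for illustration:

```python
def spans_to_bio(text, spans):
    """Tag whitespace tokens in BIO format; spans is a list of (begin, end, label) character offsets."""
    tags, pos = [], 0
    for token in text.split():
        start = text.index(token, pos)  # character offset of this token
        pos = start + len(token)
        tag = "O"
        for b, e, label in spans:
            if start == b:
                tag = f"B-{label}"      # token begins an annotated span
            elif b < start < e:
                tag = f"I-{label}"      # token continues an annotated span
        tags.append((token, tag))
    return tags

print(spans_to_bio("trabajo como médico general", [(13, 27, "PROFESION")]))
# prints: [('trabajo', 'O'), ('como', 'O'), ('médico', 'B-PROFESION'), ('general', 'I-PROFESION')]
```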
## Files of Named Entity Recognition subtask.
Content:
- One TSV file per corpus split (train and valid).
- brat: folder with annotations in Brat format. One sub-directory per corpus split (train and valid)
- BIO: folder with corpus in BIO tagging. One file per corpus split (train and valid)
- train-valid-txt-files: folder with training and validation text files. One text file per tweet. One sub-directory per corpus split (train and valid)
- train-valid-txt-files-english: folder with training and validation text files Machine Translated to English.
- test-background-txt-files: folder with the test and background text files. You must make your predictions for these files and upload them to CodaLab. |
McGill-NLP | null | FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users. It has two parts: the first part contains a conventional RQA dataset, whilst this repo contains the second part, which contains feedback(ratings and natural language explanations) for QA pairs. | false | 2 | false | McGill-NLP/feedbackQA | 2022-07-01T15:40:36.000Z | null | false | 413f7e57035e5610593b51c74a9a21364cc29498 | [] | [
"arxiv:2204.03025",
"license:apache-2.0"
] | https://huggingface.co/datasets/McGill-NLP/feedbackQA/resolve/main/README.md | ---
license: apache-2.0
---
# Dataset Card for FeedbackQA
[📄 Read](https://arxiv.org/abs/2204.03025)<br>
[💾 Code](https://github.com/McGill-NLP/feedbackqa)<br>
[🔗 Webpage](https://mcgill-nlp.github.io/feedbackqa/)<br>
[💻 Demo](http://206.12.100.48:8080/)<br>
[🤗 Huggingface Dataset](https://huggingface.co/datasets/McGill-NLP/feedbackQA)<br>
[💬 Discussions](https://github.com/McGill-NLP/feedbackqa/discussions)
## Dataset Description
- **Homepage: https://mcgill-nlp.github.io/feedbackqa-data/**
- **Repository: https://github.com/McGill-NLP/feedbackqa-data/**
- **Paper:**
- **Leaderboard:**
- **Tasks: Question Answering**
### Dataset Summary
FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users.
It has two parts: the first part contains a conventional RQA dataset,
whilst this repo contains the second part, which contains feedback(ratings and natural language explanations) for QA pairs.
### Languages
English
## Dataset Creation
For each question-answer pair, we collected multiple pieces of feedback, each of which consists of a rating, selected
from excellent, good, could be improved, and bad, and a natural language explanation
elaborating on the strengths and/or weaknesses of the answer.
#### Initial Data Collection and Normalization
We scraped Covid-19-related content from official websites.
### Annotations
#### Who are the annotators?
Crowd-workers
### Licensing Information
Apache 2.0
### Contributions
[McGill-NLP](https://github.com/McGill-NLP)
| |
Biomedical-TeMU | null | null | null | false | 3 | false | Biomedical-TeMU/SPACCC_Sentence-Splitter | 2022-03-11T02:09:00.000Z | null | false | 393badffe34773d1536cfedfdc2abe14317d38e7 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Biomedical-TeMU/SPACCC_Sentence-Splitter/resolve/main/README.md | ---
license: cc-by-4.0
---
# The Sentence Splitter (SS) for Clinical Cases Written in Spanish
## Introduction
This repository contains the sentence splitting model trained using the SPACCC_SPLIT corpus (https://github.com/PlanTL-SANIDAD/SPACCC_SPLIT). The model was trained on 90% of the corpus (900 clinical cases) and tested against the remaining 10% (100 clinical cases). This model is a great resource to split sentences in biomedical documents, especially clinical cases written in Spanish. It obtains an F-measure of 98.75%.
This model was created using the Apache OpenNLP machine learning toolkit (https://opennlp.apache.org/), with the release number 1.8.4, released in December 2017.
This repository contains the model, training set, testing set, Gold Standard, executable file, and the source code.
## Prerequisites
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: https://www.java.com/en/download
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: https://opennlp.apache.org/download.html
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
## Directory structure
<pre>
exec/
An executable file that can be used to apply the sentence splitter to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The sentence splitting model, "es-sentence-splitter-model-spaccc.bin", a binary file.
src/
The source code to create the model (CreateModelSS.java) and evaluate it (EvaluateModelSS.java).
The directory includes an example about how to use the model inside your code (SentenceSplitter.java).
File "abbreviations.dat" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatented.
train_set_docs/
The clinical cases used to build the model. For each record the sentences are already splitted.
</pre>
## Usage
The executable file *SentenceSplitter.jar* is the program you need to split the sentences of a document. For this program, two arguments are needed: (1) the text file whose sentences should be split, and (2) the model file (*es-sentence-splitter-model-spaccc.bin*). The program will display all split sentences in the terminal, with one sentence per line.
From the `exec` folder, type the following command in your terminal:
<pre>
$ java -jar SentenceSplitter.jar INPUT_FILE MODEL_FILE
</pre>
## Examples
Assuming you have the executable file, the input file and the model file in the same directory:
<pre>
$ java -jar SentenceSplitter.jar file_with_sentences_not_splitted.txt es-sentence-splitter-model-spaccc.bin
</pre>
## Model creation
To create this sentence splitting model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
- Number of iterations: 4000.
- Cutoff parameter: 3.
- Trainer type parameter: *EventTrainer.EVENT_VALUE*.
- Algorithm: Maximum Entropy (*ModelType.MAXENT.name()*).
Meanwhile, we used the following parameters for the sentence split builder (class *SentenceDetectorFactory* in OpenNLP) to get the best performance:
- Subclass name: null value.
- Language code: *es* (for Spanish).
- Use token end: true.
- Abbreviation dictionary: file "abbreviations.dat" (included in the `src/` directory).
- End of file characters: ".", "?" and "!".
## Model evaluation
After tuning the model with different values for each parameter, the values listed above gave the best performance.
| | Value |
| ----------------------------------------: | :------ |
| Number of sentences in the gold standard | 1445 |
| Number of sentences generated | 1447 |
| Number of sentences correctly split | 1428 |
| Number of sentences wrongly split | 12 |
| Number of sentences missed | 5 |
| **Precision** | **98.69%** |
| **Recall** | **98.82%** |
| **F-Measure** | **98.75%**|
Table 1: Evaluation statistics for the sentence splitting model.
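The precision and recall in Table 1 follow the standard definitions (correct/generated and correct/gold, respectively) and can be verified directly from the counts; the F-measure recomputes to ≈98.76%, matching the reported 98.75% up to rounding. A quick check:

```python
# Counts taken from Table 1 (sentence splitting model)
gold = 1445       # sentences in the gold standard
generated = 1447  # sentences generated by the model
correct = 1428    # sentences correctly split

precision = correct / generated
recall = correct / gold
f_measure = 2 * precision * recall / (precision + recall)

print(f"P={precision:.2%} R={recall:.2%} F={f_measure:.2%}")
# prints: P=98.69% R=98.82% F=98.76%
```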
## Contact
Ander Intxaurrondo (ander.intxaurrondo@bsc.es)
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
|
Biomedical-TeMU | null | null | null | false | 2 | false | Biomedical-TeMU/SPACCC_Tokenizer | 2022-03-11T02:18:16.000Z | null | false | b80bc1594c34c07cee7888a0c741ae41ac06b274 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Biomedical-TeMU/SPACCC_Tokenizer/resolve/main/README.md | ---
license: cc-by-4.0
---
# The Tokenizer for Clinical Cases Written in Spanish
## Introduction
This repository contains the tokenization model trained using the SPACCC_TOKEN corpus (https://github.com/PlanTL-SANIDAD/SPACCC_TOKEN). The model was trained on 90% of the corpus (900 clinical cases) and tested against the remaining 10% (100 clinical cases). This model is a great resource to tokenize biomedical documents, especially clinical cases written in Spanish.
This model was created using the Apache OpenNLP machine learning toolkit (https://opennlp.apache.org/), with the release number 1.8.4, released in December 2017.
This repository contains the training set, testing set, and Gold Standard.
## Prerequisites
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: https://www.java.com/en/download
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: https://opennlp.apache.org/download.html
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
## Directory structure
<pre>
exec/
An executable file that can be used to apply the tokenization to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The tokenization model, "es-tokenization-model-spaccc.bin", a binary file.
src/
The source code to create the model (CreateModelTok.java) and evaluate it (EvaluateModelTok.java).
The directory includes an example about how to use the model inside your code (Tokenization.java).
File "abbreviations.dat" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatenated.
train_set_docs/
The clinical cases used to build the model. For each record the sentences are already split.
</pre>
## Usage
The executable file *Tokenizer.jar* is the program you need to tokenize the text in your document. For this program, two arguments are needed: (1) the text file to tokenize, and (2) the model file (*es-tokenization-model-spaccc.bin*). The program will display all tokens in the terminal, with one token per line.
From the `exec` folder, type the following command in your terminal:
<pre>
$ java -jar Tokenizer.jar INPUT_FILE MODEL_FILE
</pre>
## Examples
Assuming you have the executable file, the input file and the model file in the same directory:
<pre>
$ java -jar Tokenizer.jar file.txt es-tokenization-model-spaccc.bin
</pre>
## Model creation
To create this tokenization model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
- Number of iterations: 1500.
- Cutoff parameter: 4.
- Trainer type parameter: *EventTrainer.EVENT_VALUE*.
- Algorithm: Maximum Entropy (*ModelType.MAXENT.name()*).
Meanwhile, we used the following parameters for the tokenizer builder (class *TokenizerFactory* in OpenNLP) to get the best performance:
- Language code: *es* (for Spanish).
- Abbreviation dictionary: file "abbreviations.dat" (included in the `src/` directory).
- Use alphanumeric optimization: false
- Alphanumeric pattern: null
## Model evaluation
After tuning the model with different values for each parameter, the values listed above gave the best performance.
| | Value |
| ----------------------------------------: | :------ |
| Number of tokens in the gold standard | 38247 |
| Number of tokens generated | 38227 |
| Number of words correctly tokenized | 38182 |
| Number of words wrongly tokenized | 35 |
| Number of tokens missed | 30 |
| **Precision** | **99.88%** |
| **Recall** | **99.83%** |
| **F-Measure** | **99.85%**|
Table 1: Evaluation statistics for the tokenization model.
## Contact
Ander Intxaurrondo (ander.intxaurrondo@bsc.es)
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
|
Biomedical-TeMU | null | null | null | false | 3 | false | Biomedical-TeMU/CodiEsp_corpus | 2022-03-11T02:24:53.000Z | null | false | 5ff2b006ea74699eccd393a5a0f3b99396d01e0c | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Biomedical-TeMU/CodiEsp_corpus/resolve/main/README.md | ---
license: cc-by-4.0
---
## Introduction
These are the train, development, test and background sets of the CodiEsp corpus. Train and development have gold standard annotations. The unannotated background and test sets are distributed together. All documents are released in the context of the CodiEsp track for CLEF eHealth 2020 (http://temu.bsc.es/codiesp/).
The CodiEsp corpus contains manually coded clinical cases. All documents are in Spanish, and CIE10 is the coding terminology (the Spanish version of ICD10-CM and ICD10-PCS). The CodiEsp corpus has been randomly sampled into three subsets: the train, the development, and the test set. The train set contains 500 clinical cases, and the development and test sets 250 clinical cases each. The test set is released together with the background set (2,751 clinical cases). CodiEsp participants must submit predictions for the test and background sets, but they will only be evaluated on the test set.
## Structure
Three folders: train, dev and test. Each one of them contains the files for the train, development and test corpora, respectively.
+ train and dev folders have:
+ 3 tab-separated files with the annotation information relevant for each of the 3 sub-tracks of CodiEsp.
+ A subfolder named text_files with the plain text files of the clinical cases.
+ A subfolder named text_files_en with the plain text files machine-translated to English. Due to the translation process, the text files are sentence-split.
+ The test folder has only text_files and text_files_en subfolders with the plain text files.
## Corpus format description
The CodiEsp corpus is distributed in plain text in UTF8 encoding, where each clinical case is stored as a single file whose name is the clinical case identifier. Annotations are released in a tab-separated file. Since the CodiEsp track has 3 sub-tracks, every set of documents (train and test) has 3 tab-separated files associated with it.
For the sub-tracks CodiEsp-D and CodiEsp-P, the file has the following fields:
articleID ICD10-code
Tab-separated files for the sub-track CodiEsp-X contain extra fields that provide the text-reference and its position:
articleID label ICD10-code text-reference reference-position
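Given the field order above, a CodiEsp-X annotation line can be read into named fields with a small helper; the sample line below is invented for illustration:

```python
CODIESP_X_FIELDS = ["articleID", "label", "ICD10-code", "text-reference", "reference-position"]

def parse_codiesp_x(line: str) -> dict:
    """Map one tab-separated CodiEsp-X line onto its five named fields."""
    return dict(zip(CODIESP_X_FIELDS, line.rstrip("\n").split("\t")))

row = parse_codiesp_x("S0004-06142005000500011-1\tDIAGNOSTICO\tr52\tdolor\t120 125")
print(row["ICD10-code"])  # prints: r52
```

The sub-track files for CodiEsp-D and CodiEsp-P use only the first two fields (articleID and ICD10-code) and can be parsed the same way.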
## Corpus summary statistics
The final collection of 1000 clinical cases that make up the corpus had a total of 16,504 sentences, with an average of 16.5 sentences per clinical case. It contains a total of 396,988 words, with an average of 396.2 words per clinical case.
For more information, visit the track webpage: http://temu.bsc.es/codiesp/ |
Mulin | null | null | null | false | 3 | false | Mulin/weather-data | 2022-03-11T06:41:03.000Z | null | false | b80b8e1442d843ab1f02050ef297b13be4fb4a72 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Mulin/weather-data/resolve/main/README.md | ---
license: mit
---
|
lstynerl | null | null | null | false | 2 | false | lstynerl/M1a1d | 2022-03-11T03:32:56.000Z | null | false | 7e37d9d97bbdc47fbd710913a75c355e878b343e | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/lstynerl/M1a1d/resolve/main/README.md | ---
license: apache-2.0
---
|
Khedesh | null | null | null | false | 3 | false | Khedesh/ArmanNER | 2022-03-11T10:42:30.000Z | null | false | 38ccb945600346d52580891d6d77f5c2bfaae069 | [] | [] | https://huggingface.co/datasets/Khedesh/ArmanNER/resolve/main/README.md | # PersianNER
Named-Entity Recognition in Persian Language
## ArmanPersoNERCorpus
This is the first manually-annotated Persian named-entity (NE) dataset (ISLRN 399-379-640-828-6). We are releasing it only for academic research use.
The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. Each file contains one token, along with its manually annotated named-entity tag, per line. Each sentence is separated with a newline. The NER tags are in IOB format.
According to the instructions provided to the annotators, NEs are categorized into six classes: person, organization (such as banks, ministries, embassies, teams, nationalities, networks and publishers), location (such as cities, villages, rivers, seas, gulfs, deserts and mountains), facility (such as schools, universities, research centers, airports, railways, bridges, roads, harbors, stations, hospitals, parks, zoos and cinemas), product (such as books, newspapers, TV shows, movies, airplanes, ships, cars, theories, laws, agreements and religions), and event (such as wars, earthquakes, national holidays, festivals and conferences); the remaining tokens are tagged as other.
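Given the layout described above (one whitespace-separated token and tag per line, sentences separated by a blank line), a fold can be read with a short helper; a sketch, with invented sample tokens and column separation assumed to be whitespace:

```python
def read_iob(lines):
    """Group (token, tag) pairs into sentences; a blank line ends a sentence."""
    sentences, current = [], []
    for line in lines:
        line = line.strip()
        if not line:                 # blank line: sentence boundary
            if current:
                sentences.append(current)
                current = []
        else:
            token, tag = line.split()
            current.append((token, tag))
    if current:                      # flush a trailing sentence without a final blank line
        sentences.append(current)
    return sentences

sample = ["Ali B-pers", "went O", "", "Tehran B-loc"]
print(read_iob(sample))
# prints: [[('Ali', 'B-pers'), ('went', 'O')], [('Tehran', 'B-loc')]]
```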
|
gigant | null | @inproceedings{CycleGAN2017,
title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
year={2017}
} | Two unpaired sets of photos of respectively horses and zebras, designed for unpaired image-to-image translation, as seen in the paper introducing CycleGAN | false | 3 | false | gigant/horse2zebra | 2022-10-24T17:37:53.000Z | null | false | 04bb1414d14d63bffc026c6f12d047b7a3232930 | [] | [
"arxiv:1703.10593",
"license:cc",
"task_categories:image-to-image",
"tags:GAN",
"tags:unpaired-image-to-image-translation"
] | https://huggingface.co/datasets/gigant/horse2zebra/resolve/main/README.md | ---
license: cc
task_categories:
- image-to-image
task_ids: []
pretty_name: Horse2Zebra
tags:
- GAN
- unpaired-image-to-image-translation
---
## Dataset Description
- **Homepage:** https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
- **Paper:** https://arxiv.org/abs/1703.10593
### Dataset Summary
This dataset was obtained from the original CycleGAN Datasets directory available on [Berkeley's website](https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/).
For more details about the dataset you can refer to the [original CycleGAN publication](https://arxiv.org/abs/1703.10593).
### How to use
You can easily load the dataset with the following lines :
```python
from datasets import load_dataset
data_horses = load_dataset("gigant/horse2zebra", name="horse", split="train")
data_zebras = load_dataset("gigant/horse2zebra", name="zebra", split="train")
```
Two splits are available: `"train"` and `"test"`.
### Citation Information
```
@inproceedings{CycleGAN2017,
title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
year={2017}
}
``` |
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/ratishsp__macro__1646998904 | 2022-03-11T11:41:47.000Z | null | false | f90b0fced2b6b7d1fb3fcdb04cb5b754eafab378 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:Macro",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/ratishsp__macro__1646998904/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: Macro
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: Macro
|
Zeel | null | null | null | false | 1 | false | Zeel/common | 2022-10-25T10:22:40.000Z | null | false | b8e66595f3f7e20f5c2a6f69be3504d2e97d790b | [] | [
"language:en"
] | https://huggingface.co/datasets/Zeel/common/resolve/main/README.md | ---
language:
- en
pretty_name: common
---
# Dataset Card for Zeel/common
|
microsoft | null | null | null | false | 2 | false | microsoft/CLUES | 2022-03-25T22:05:58.000Z | null | false | ce7b8f1a30bfae5184e554a5bf44b76b9e8fc011 | [] | [
"license:mit"
] | https://huggingface.co/datasets/microsoft/CLUES/resolve/main/README.md | ---
license: mit
---
# CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
This repo contains the data for the NeurIPS 2021 benchmark [Constrained Language Understanding Evaluation Standard (CLUES)](https://openreview.net/pdf?id=VhIIQBm00VI).
## Leaderboard
We maintain a [Leaderboard](https://github.com/microsoft/CLUES) allowing researchers to submit their results as entries.
### Submission Instructions
- Each submission must be submitted as a pull request modifying the markdown file underlying the leaderboard.
- The submission must attach an accompanying public paper and public source code for reproducing their results on our dataset.
- A submission can be toward any subset of tasks in our benchmark, or toward the aggregate leaderboard.
- For any task targeted by the submission, we require evaluation on (1) 10, 20, *and* 30 shots, and (2) all 5 splits of the corresponding dataset and a report of their mean and standard deviation.
- Each leaderboard will be sorted by the 30-shot mean S1 score (where S1 score is a variant of F1 score defined in our paper).
- The submission should not use data from the 4 other splits during few-shot finetuning of any 1 split, either as extra training set or as validation set for hyperparameter tuning.
- However, we allow external data, labeled or unlabeled, to be used for such purposes.
Each submission using external data must mark the corresponding columns "external labeled" and/or "external unlabeled".
Note, in this context, "external data" refers to data used *after pretraining* (e.g., for task-specific tuning); in particular, methods using existing pretrained models only, without extra data, should not mark either column. For obvious reasons, models cannot be trained on the original labeled datasets from which we sampled the few-shot CLUES data.
- In the table entry, the submission should include a method name and a citation, hyperlinking to their publicly released source code reproducing the results. See the last entry of the table below for an example.
### Abbreviations
- FT = (classic) finetuning
- PT = prompt based tuning
- ICL = in-context learning, in the style of GPT-3
- μ±σ = mean μ and standard deviation σ across our 5 splits. Aggregate standard deviation is calculated using the sum-of-variance formula from individual tasks' standard deviations.
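Concretely, the sum-of-variance rule appears to combine the per-task standard deviations as the square root of the sum of their squares. A minimal sketch (the example values are the per-task 30-shot stds of the T5-Large-770M-FT row in the table below, which reproduce that row's aggregate std of 6.7):

```python
import math

def aggregate_std(task_stds):
    # Sum-of-variance rule: var_agg = sum(var_i), so std_agg = sqrt(sum(std_i ** 2))
    return math.sqrt(sum(s * s for s in task_stds))

# Per-task 30-shot stds for T5-Large-770M-FT: SST-2, MNLI, CoNLL03, WikiANN, SQuAD-v2, ReCoRD
print(round(aggregate_std([2.9, 3.8, 0.1, 0.6, 2.7, 3.8]), 1))  # → 6.7
```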
### Benchmarking CLUES for Aggregate 30-shot Evaluation
| Shots (K=30) | external labeled | external unlabeled | Average ▼ | SST-2 | MNLI | CoNLL03 | WikiANN | SQuAD-v2 | ReCoRD |
|-----------------------------------------------------------|-------------|---------------|-----------|-----------|----------|----------|----------|----------|----------|
| **Human** | N | N | 81.4 | 83.7 | 69.4 | 87.4 | 82.6 | 73.5 | 91.9 |
| T5-Large-770M-FT | N | N | 43.1±6.7 | 52.3±2.9 | 36.8±3.8 | 51.2±0.1 | 62.4±0.6 | 43.7±2.7 | 12±3.8 |
| BERT-Large-336M-FT | N | N | 42.1±7.8 | 55.4±2.5 | 33.3±1.4 | 51.3±0 | 62.5±0.6 | 35.3±6.4 | 14.9±3.4 |
| BERT-Base-110M-FT | N | N | 41.5±9.2 | 53.6±5.5 | 35.4±3.2 | 51.3±0 | 62.8±0 | 32.6±5.8 | 13.1±3.3 |
| DeBERTa-Large-400M-FT | N | N | 40.1±17.8 | 47.7±9.0 | 26.7±11 | 48.2±2.9 | 58.3±6.2 | 38.7±7.4 | 21.1±3.6 |
| RoBERTa-Large-355M-FT | N | N | 40.0±10.6 | 53.2±5.6 | 34.0±1.1 | 44.7±2.6 | 48.4±6.7 | 43.5±4.4 | 16±2.8 |
| RoBERTa-Large-355M-PT | N | N | | 90.2±1.8 | 61.6±3.5 | | | | |
| DeBERTa-Large-400M-PT | N | N | | 88.4±3.3 | 62.9±3.1 | | | | |
| BERT-Large-336M-PT | N | N | | 82.7±4.1 | 45.3±2.0 | | | | |
| GPT3-175B-ICL | N | N | | 91.0±1.6 | 33.2±0.2 | | | | |
| BERT-Base-110M-PT | N | N | | 79.4±5.6 | 42.5±3.2 | | | | |
| [LiST (Wang et al.)](https://github.com/microsoft/LiST) | N | Y | | 91.3 ±0.7 | 67.9±3.0 | | | | |
| [Example (lastname et al.)](link2code) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 |
### Individual Task Performance over Multiple Shots
#### SST-2
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|----------------------------------------|------------------|--------------------|-----------|-----------|----------|------|
| GPT-3 (175B) ICL | N | N | 85.9±3.7 | 92.0±0.7 | 91.0±1.6 | - |
| RoBERTa-Large PT | N | N | 88.8±3.9 | 89.0±1.1 | 90.2±1.8 | 93.8 |
| DeBERTa-Large PT | N | N | 83.4±5.3 | 87.8±3.5 | 88.4±3.3 | 91.9 |
| **Human** | N | N | 79.8 | 83 | 83.7 | - |
| BERT-Large PT | N | N | 63.2±11.3 | 78.2±9.9 | 82.7±4.1 | 91 |
| BERT-Base PT | N | N | 63.9±10.0 | 76.7±6.6 | 79.4±5.6 | 91.9 |
| BERT-Large FT | N | N | 46.3±5.5 | 55.5±3.4 | 55.4±2.5 | 99.1 |
| BERT-Base FT | N | N | 46.2±5.6 | 54.0±2.8 | 53.6±5.5 | 98.1 |
| RoBERTa-Large FT | N | N | 38.4±21.7 | 52.3±5.6 | 53.2±5.6 | 98.6 |
| T5-Large FT | N | N | 51.2±1.8 | 53.4±3.2 | 52.3±2.9 | 97.6 |
| DeBERTa-Large FT | N | N | 43.0±11.9 | 40.8±22.6 | 47.7±9.0 | 100 |
| [Example (lastname et al.)](link2code) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | - |
#### MNLI
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---------------------------------------------------------|------------------|--------------------|-----------|-----------|-----------|------|
| **Human** | N | Y | 78.1 | 78.6 | 69.4 | - |
| [LiST (wang et al.)](https://github.com/microsoft/LiST) | N | N | 60.5±8.3 | 67.2±4.5 | 67.9±3.0 | - |
| DeBERTa-Large PT | N | N | 44.5±8.2 | 60.7±5.3 | 62.9±3.1 | 88.1 |
| RoBERTa-Large PT | N | N | 57.7±3.6 | 58.6±2.9 | 61.6±3.5 | 87.1 |
| BERT-Large PT | N | N | 41.7±1.0 | 43.7±2.1 | 45.3±2.0 | 81.9 |
| BERT-Base PT | N | N | 40.4±1.8 | 42.1±4.4 | 42.5±3.2 | 81 |
| T5-Large FT | N | N | 39.8±3.3 | 37.9±4.3 | 36.8±3.8 | 85.9 |
| BERT-Base FT | N | N | 37.0±5.2 | 35.2±2.7 | 35.4±3.2 | 81.6 |
| RoBERTa-Large FT | N | N | 34.3±2.8 | 33.4±0.9 | 34.0±1.1 | 85.5 |
| BERT-Large FT | N | N | 33.7±0.4 | 28.2±14.8 | 33.3±1.4 | 80.9 |
| GPT-3 (175B) ICL | N | N | 33.5±0.7 | 33.1±0.3 | 33.2±0.2 | - |
| DeBERTa-Large FT | N | N | 27.4±14.1 | 33.6±2.5 | 26.7±11.0 | 87.6 |
#### CoNLL03
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 87.7 | 89.7 | 87.4 | - |
| BERT-Base FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | - |
| BERT-Large FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | 89.3 |
| T5-Large FT | N | N | 46.3±6.9 | 50.0±0.7 | 51.2±0.1 | 92.2 |
| DeBERTa-Large FT | N | N | 50.1±1.2 | 47.8±2.5 | 48.2±2.9 | 93.6 |
| RoBERTa-Large FT | N | N | 50.8±0.5 | 44.6±5.1 | 44.7±2.6 | 93.2 |
#### WikiANN
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 81.4 | 83.5 | 82.6 | - |
| BERT-Base FT | N | N | 62.8±0 | 62.8±0 | 62.8±0 | 88.8 |
| BERT-Large FT | N | N | 62.8±0 | 62.6±0.4 | 62.5±0.6 | 91 |
| T5-Large FT | N | N | 61.7±0.7 | 62.1±0.2 | 62.4±0.6 | 87.4 |
| DeBERTa-Large FT | N | N | 58.5±3.3 | 57.9±5.8 | 58.3±6.2 | 91.1 |
| RoBERTa-Large FT | N | N | 58.5±8.8 | 56.9±3.4 | 48.4±6.7 | 91.2 |
#### SQuAD v2
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|-----------|----------|------|
| **Human** | N | N | 71.9 | 76.4 | 73.5 | - |
| T5-Large FT | N | N | 43.6±3.5 | 28.7±13.0 | 43.7±2.7 | 87.2 |
| RoBERTa-Large FT | N | N | 38.1±7.2 | 40.1±6.4 | 43.5±4.4 | 89.4 |
| DeBERTa-Large FT | N | N | 41.4±7.3 | 44.4±4.5 | 38.7±7.4 | 90 |
| BERT-Large FT | N | N | 42.3±5.6 | 35.8±9.7 | 35.3±6.4 | 81.8 |
| BERT-Base FT | N | N | 46.0±2.4 | 34.9±9.0 | 32.6±5.8 | 76.3 |
#### ReCoRD
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 94.1 | 94.2 | 91.9 | - |
| DeBERTa-Large FT | N | N | 15.7±5.0 | 16.8±5.7 | 21.1±3.6 | 80.7 |
| RoBERTa-Large FT | N | N | 12.0±1.9 | 9.9±6.2 | 16.0±2.8 | 80.3 |
| BERT-Large FT | N | N | 9.9±5.2 | 11.8±4.9 | 14.9±3.4 | 66 |
| BERT-Base FT | N | N | 10.3±1.8 | 11.7±2.4 | 13.1±3.3 | 54.4 |
| T5-Large FT | N | N | 11.9±2.7 | 11.7±1.5 | 12.0±3.8 | 77.3 |
## How do I cite CLUES?
```
@article{cluesteam2021,
title={Few-Shot Learning Evaluation in Natural Language Understanding},
author={Mukherjee, Subhabrata and Liu, Xiaodong and Zheng, Guoqing and Hosseini, Saghar and Cheng, Hao and Yang, Greg and Meek, Christopher and Awadallah, Ahmed Hassan and Gao, Jianfeng},
booktitle = {NeurIPS 2021},
year = {2021},
month = {December},
url = {https://www.microsoft.com/en-us/research/publication/clues-few-shot-learning-evaluation-in-natural-language-understanding/},
}
```
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.
|
rakkaalhazimi | null | null | null | false | 2 | false | rakkaalhazimi/hotel-review | 2022-03-12T07:23:47.000Z | null | false | 3c70f2fe25f7c73d2460f77a4c3f8b1aa8a6e819 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/rakkaalhazimi/hotel-review/resolve/main/README.md | ---
license: gpl-3.0
---
# Review Hotel in Indonesia
### Dataset Summary
Reviews of hotels in Indonesia, labeled with positive or negative sentiment.
### Languages
Indonesian
## Dataset Structure
### Data Fields
- review_id: unique identifier of each review
- review_text: the main text of the review
- category: label for each review, positive (1) or negative (0)
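As an illustration of the schema above, a record can be represented as a dict with these three fields (the IDs and review texts here are hypothetical, not drawn from the dataset):

```python
# Hypothetical records following the schema above:
# review_id, review_text, and a binary category label (1 = positive, 0 = negative)
reviews = [
    {"review_id": "r001", "review_text": "Kamar bersih dan pelayanan ramah", "category": 1},
    {"review_id": "r002", "review_text": "AC rusak dan kamar mandi kotor", "category": 0},
]

# Collect the texts of all positive reviews
positive_texts = [r["review_text"] for r in reviews if r["category"] == 1]
print(positive_texts)  # → ['Kamar bersih dan pelayanan ramah']
```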
|