author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sumedh | null | null | null | false | 3 | false | sumedh/MeQSum | 2022-03-24T20:20:43.000Z | null | false | abdbddf991d8dbc29adc013f26970ba3232fd712 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/sumedh/MeQSum/resolve/main/README.md | ---
license: apache-2.0
---
- Problem type: Summarization
languages:
- en
multilinguality:
- monolingual
task_ids:
- summarization
# MeQSum
Dataset for medical question summarization introduced in the ACL 2019 paper "On the Summarization of Consumer Health Questions": https://www.aclweb.org/anthology/P19-1215
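The paper evaluates abstractive models with ROUGE-1 (its best pointer-generator model reaches 44.16%). As a rough illustration of the metric only, here is a minimal, unofficial sketch of unigram-overlap ROUGE-1 F1 — real evaluations should use an established ROUGE implementation, which may also apply stemming and other preprocessing:

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```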
### Citation Information
```bibtex
@Inproceedings{MeQSum,
author = {Asma {Ben Abacha} and Dina Demner-Fushman},
title = {On the Summarization of Consumer Health Questions},
booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28th - August 2},
year = {2019},
abstract = {Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16%. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization. }}
``` |
nthngdy | null | @inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
} | The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\ | false | 12 | false | nthngdy/oscar-small | 2022-10-25T08:56:54.000Z | oscar | false | 1a260fa35059bea769b1fb08eba4331621bc3c44 | [] | [
"arxiv:2010.14571",
"annotations_creators:no-annotation",
"language_creators:found",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"l... | https://huggingface.co/datasets/nthngdy/oscar-small/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- arz
- as
- az
- azb
- ba
- be
- bg
- bn
- bo
- br
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mhr
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nds
- ne
- nl
- nn
- 'no'
- or
- os
- pa
- pl
- pnb
- ps
- pt
- ro
- ru
- sa
- sah
- sd
- sh
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- yi
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- oscar
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
pretty_name: OSCAR
---
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
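The per-subcorpus figures mentioned above count words as space-separated tokens, along with lines and sizes. A minimal sketch of that counting (my own illustration of the stated convention, not the official tooling) could look like:

```python
def subcorpus_stats(text: str) -> dict:
    """Statistics in the style of the OSCAR table: words are
    space-separated tokens, size is the UTF-8 byte count."""
    lines = text.splitlines()
    return {
        "words": sum(len(line.split()) for line in lines),
        "lines": len(lines),
        "bytes": len(text.encode("utf-8")),
    }
```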
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, while bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, in goclassy's pipeline, one does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting on the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
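The line-level filter described above can be sketched as follows (a minimal illustration of the stated rule, not the actual Go implementation):

```python
MIN_LEN = 100  # minimum line length in UTF-8 characters, per the rule above

def keep_line(raw: bytes) -> bool:
    """Return True if a raw line passes the pre-classification filter:
    it must decode as valid UTF-8 and be at least MIN_LEN characters long."""
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return False  # lines containing invalid UTF-8 are discarded
    return len(text) >= MIN_LEN
```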
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR.
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
10zinten | null | null | null | false | 1 | false | 10zinten/op_classical_corpus_bo | 2022-10-23T05:21:37.000Z | null | false | f2d4e2258fe51ace7062bbeb2a55ad1e890d1c72 | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:bo",
"license:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/10zinten/op_classical_corpus_bo/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- bo
license:
- other
multilinguality:
- monolingual
pretty_name: Tibetan Classical Buddhist Text Corpus
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# Licensing information
Apple MIT License (AML). |
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test-name__1648048960 | 2022-03-23T15:22:42.000Z | null | false | 50da653240bbb86afedf9d408eae3c2f80aa646f | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1648048960/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
huggan | null | null | null | false | 1 | false | huggan/edges2shoes | 2022-04-12T14:18:05.000Z | null | false | 38dfa1ca1c9df28c02152e1ef34a5866014f7853 | [] | [] | https://huggingface.co/datasets/huggan/edges2shoes/resolve/main/README.md | # Citation
```
@article{pix2pix2017,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
journal={CVPR},
year={2017}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/facades | 2022-04-12T13:57:03.000Z | null | false | f5cb9a55f7c4c9e07fa812b2dc21846fc3ffeb78 | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/facades/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 7 | false | huggan/night2day | 2022-04-12T14:18:51.000Z | null | false | 9f145cf7a60b15416a426add7fc62fbed8f94326 | [] | [] | https://huggingface.co/datasets/huggan/night2day/resolve/main/README.md | # Citation
```
@article{pix2pix2017,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
journal={CVPR},
year={2017}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/maps | 2022-04-12T13:54:14.000Z | null | false | 0d523b5d1a1ab77e4a3a4d86b9cbb3b432dc804d | [] | [] | https://huggingface.co/datasets/huggan/maps/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/. |
huggan | null | null | null | false | 3 | false | huggan/cityscapes | 2022-04-12T13:56:44.000Z | null | false | 0eae7d40a244b73068f68bb8f0fd7e456fceb66b | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/cityscapes/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/ae_photos | 2022-04-12T13:56:12.000Z | null | false | 199e90c44cdbc4f8323367513796e07f75df272f | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/ae_photos/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
RUC-DataLab | null | null | null | false | 1 | false | RUC-DataLab/ER-dataset | 2022-07-05T07:58:55.000Z | null | false | 33b406a36ec2927e14c7afd65c598cc57ba77701 | [] | [] | https://huggingface.co/datasets/RUC-DataLab/ER-dataset/resolve/main/README.md | ### dataset-list
The datasets in this repository come from the public datasets DeepMatcher, Magellan and WDC, which cover a variety of domains, such as product, citation and restaurant. Each dataset contains entities from two relational tables with multiple attributes, and a set of labeled matching/non-matching entity pairs.
| dataset_name | domain |
| -------------- | ----------- |
| abt_buy | Product |
| amazon_google | Product |
| anime | Anime |
| beer | Product |
| books2 | Book |
| books4 | Book |
| cameras | WDC-Product |
| computers | WDC-Product |
| cosmetics | Cosmetics |
| dblp_acm | Citation |
| dblp_scholar | Citation |
| ebooks1 | eBook |
| fodors_zagat | Restaurant |
| itunes_amazon | Music |
| movies1 | Movie |
| restaurants1 | Restaurant |
| restaurants3 | Restaurant |
| restaurants4 | Restaurant |
| shoes | WDC-Product |
| walmart_amazon | Product |
| watches | WDC-Product |
|
tartuNLP | null | @inproceedings{rikters-etal-2022,
title = "Machine Translation for Livonian: Catering for 20 Speakers",
author = "Rikters, Matīss and
Tomingas, Marili and
Tuisk, Tuuli and
Valts, Ernštreits and
Fishel, Mark",
booktitle = "Proceedings of ACL 2022",
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics"
} | Livonian is one of the most endangered languages in Europe with just a tiny handful of speakers and virtually no publicly available corpora.
In this paper we tackle the task of developing neural machine translation (NMT) between Livonian and English, with a two-fold aim: on one hand,
preserving the language and on the other – enabling access to Livonian folklore, lifestories and other textual intangible heritage as well as
making it easier to create further parallel corpora. We rely on Livonian's linguistic similarity to Estonian and Latvian and collect parallel
and monolingual data for the four languages for translation experiments. We combine different low-resource NMT techniques like zero-shot translation,
cross-lingual transfer and synthetic data creation to reach the highest possible translation quality as well as to find which base languages are
empirically more helpful for transfer to Livonian. The resulting NMT systems and the collected monolingual and parallel data, including a manually
translated and verified translation benchmark, are publicly released.
Fields:
- source: source of the data
- en: sentence in English
- liv: sentence in Livonian | false | 1 | false | tartuNLP/liv4ever | 2022-10-25T12:30:49.000Z | null | false | 11391706f44d60008a984b20fbc2a16ce392fa87 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"language:liv",
"license:cc-by-nc-sa-4.0",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"task_categories:text2text-generation",
"task_categories:translation",
"l... | https://huggingface.co/datasets/tartuNLP/liv4ever/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
- liv
license:
- cc-by-nc-sa-4.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
- translation
task_ids: []
pretty_name: Liv4ever
language_bcp47:
- en-US
- liv
tags:
- conditional-text-generation
---
# liv4ever v1
This is the Livonian 4-lingual parallel corpus. Livonian is a Uralic / Finnic language with just about 20 fluent speakers and no native speakers (as of 2021). The texts and translations in this corpus were collected from all the digital text resources that could be found by the authors; scanned and printed materials are left for future work.
The corpus includes parallel data for Livonian-Latvian, Livonian-Estonian and Livonian-English; the data has been collected in 2021. After retrieval it was normalized in terms of different orthographies of Livonian and manually sentence-aligned where needed. It was collected from the following sources, with sentence counts per language pair:
* Dictionary - example sentences from the Livonian-Latvian-Estonian dictionary;
* liv-lv: 10'388,
* liv-et: 10'378
* Stalte - the alphabet book by Kōrli Stalte, translated into Estonian and Latvian;
* liv-lv: 842,
* liv-et: 685
* Poetry - the poetry collection book "Ma võtan su õnge, tursk / Ma akūb sīnda vizzõ, tūrska", with Estonian translations;
* liv-et: 770
* Vääri - the book by Eduard Vääri about Livonian language and culture;
* liv-et: 592
* Satversme - translations of the Latvian Constitution into Livonian, Estonian and English;
* liv-en: 380,
* liv-lv: 414,
* liv-et: 413
* Facebook - social media posts by the Livonian Institute and Livonian Days with original translations;
* liv-en: 123,
* liv-lv: 124,
* liv-et: 7
* JEFUL - article abstracts from the Journal of Estonian and Finno-Ugric Linguistics, special issues dedicated to Livonian studies, translated into Estonian and English;
* liv-en: 36,
* liv-et: 49
* Trilium - the book with a collection of Livonian poetry, foreword and afterword translated into Estonian and Latvian;
* liv-lv: 51,
* liv-et: 53
* Songs - material crawled off lyricstranslate.com;
* liv-en: 54,
* liv-lv: 54,
* liv-fr: 31 |
sentence-transformers | null | null | null | false | 6 | false | sentence-transformers/NQ-retrieval | 2022-03-24T08:18:36.000Z | null | false | 639b92ed9b6f2c613185744d5e0d145e24b070b4 | [] | [] | https://huggingface.co/datasets/sentence-transformers/NQ-retrieval/resolve/main/README.md | #NQ-retrieval
This is a nicely formatted version of the [Natural Questions](https://ai.google.com/research/NaturalQuestions/) dataset, formatted to train and evaluate retrieval systems.
Each row contains the following entries:
- **question**: Original question send for Google Search Engine
- **title**: Title of Wikipedia article
- **candidates**: A list with the passages from the original Wikipedia HTML document
- **passage_types**: Types (text, table, list) of the candidate passages
- **long_answers**: IDs of the candidate passages that were selected as relevant by annotators. Might be empty if no relevant passage has been identified
- **document_url** |
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test-name__1648111972 | 2022-03-24T08:52:55.000Z | null | false | 1c3412bf9133897681719c058d1dbcc9221e89f1 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1648111972/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
huggan | null | null | null | false | 306 | false | huggan/CelebA-HQ | 2022-04-12T14:10:49.000Z | null | false | 8ce4cef364c585c2d63a6b0ae7fc178995c9a34a | [] | [
"arxiv:1710.10196"
] | https://huggingface.co/datasets/huggan/CelebA-HQ/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-1710-10196,
author = {Tero Karras and
Timo Aila and
Samuli Laine and
Jaakko Lehtinen},
title = {Progressive Growing of GANs for Improved Quality, Stability, and Variation},
journal = {CoRR},
volume = {abs/1710.10196},
year = {2017},
url = {http://arxiv.org/abs/1710.10196},
eprinttype = {arXiv},
eprint = {1710.10196},
timestamp = {Mon, 13 Aug 2018 16:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1710-10196.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Jira | null | null | null | false | 1 | false | Jira/mao | 2022-03-24T10:10:27.000Z | null | false | 44f81ea0f9562e2b49e02af7a98c77bd977341ad | [] | [
"license:gpl"
] | https://huggingface.co/datasets/Jira/mao/resolve/main/README.md | ---
license: gpl
---
|
Gare | null | null | null | false | 1 | false | Gare/Classical_Chinese_to_Modern_Chinese | 2022-03-26T07:47:40.000Z | null | false | b76e14dda40adde0cc5a58831ece800723c1a29a | [] | [
"license:mit"
] | https://huggingface.co/datasets/Gare/Classical_Chinese_to_Modern_Chinese/resolve/main/README.md | ---
license: mit
---
|
Vipitis | null | null | null | false | 1 | false | Vipitis/Shadertoys-bimodal | 2022-04-13T20:58:38.000Z | null | false | b2a66f7f4949d6c43576b43a29ae452b29de11a6 | [] | [] | https://huggingface.co/datasets/Vipitis/Shadertoys-bimodal/resolve/main/README.md | all public data from https://www.shadertoy.com/ (some license conflicts may occur)
22250 data points with title, description and code.
Code is concatenated from all buffers, with comments and docstrings stripped
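Since the card notes that comments were stripped from the concatenated shader code, here is a rough sketch of such a preprocessing step (my own regex-based approximation, not the pipeline actually used) for GLSL-style `//` and `/* */` comments:

```python
import re

def strip_glsl_comments(code: str) -> str:
    # Remove /* ... */ block comments (non-greedy, across lines),
    # then // line comments up to the end of each line.
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.DOTALL)
    code = re.sub(r"//[^\n]*", "", code)
    return code

shader = "/* header */\nvec3 c = vec3(1.0); // color\n"
print(strip_glsl_comments(shader))
```

Note this naive regex would also strip comment-like text inside string literals, which is rare but possible in shader code.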
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test-name__1648137608 | 2022-03-24T16:00:11.000Z | null | false | c5589b86803b9ab1a3747272f6a729f5c22b1e1e | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1648137608/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
wesamhaddad14 | null | null | null | false | 1 | false | wesamhaddad14/spanishNLP | 2022-03-24T16:46:39.000Z | null | false | 5a3e6f4f6abdc2088363b7d4ececec81b3c8a053 | [] | [] | https://huggingface.co/datasets/wesamhaddad14/spanishNLP/resolve/main/README.md | # Dataset Card for SpanishNLP
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Spanish Poems and their Authors and titles
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
Openmindedness | null | null | null | false | 1 | false | Openmindedness/mc_chat_scraped_from_toxigon_anarchy | 2022-03-24T17:13:13.000Z | null | false | 9242e8cb6ce1f497794c1728838700bb182cc435 | [] | [
"license:cc"
] | https://huggingface.co/datasets/Openmindedness/mc_chat_scraped_from_toxigon_anarchy/resolve/main/README.md | ---
license: cc
---
|
DFKI-SLT | null | @article{yang2018scidtb,
title={Scidtb: Discourse dependency treebank for scientific abstracts},
author={Yang, An and Li, Sujian},
journal={arXiv preprint arXiv:1806.03653},
year={2018}
} | Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question
answering. SciDTB is a domain-specific discourse treebank annotated on scientific articles.
Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is
flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework,
annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating
discourse dependency parsers, on which we provide several baselines as fundamental work. | false | 2 | false | DFKI-SLT/scidtb | 2022-10-25T06:38:25.000Z | null | false | 0e7601c463d3048563ea8017d7162279f56333b1 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:parsing",
"language_bcp47:en-US"
] | https://huggingface.co/datasets/DFKI-SLT/scidtb/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: []
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- parsing
pretty_name: Scientific Dependency Tree Bank
language_bcp47:
- en-US
---
# Dataset Card for SciDTB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/PKU-TANGENT/SciDTB
- **Repository:** https://github.com/PKU-TANGENT/SciDTB
- **Paper:** https://aclanthology.org/P18-2071/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
SciDTB is a domain-specific discourse treebank annotated on scientific articles written in English. Different from the widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but does not sacrifice structural integrity. Furthermore, this treebank is made as a benchmark for evaluating discourse dependency parsers. This dataset can benefit many downstream NLP tasks such as machine translation and automatic summarization.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English.
## Dataset Structure
### Data Instances
A typical data point consists of `root`, a list of the nodes in the dependency tree. Each node in the list has four fields: `id`, an identifier for the node; `parent`, the id of the parent node; `text`, the span covered by the node; and `relation`, the discourse relation between the node and its parent.
An example from SciDTB train set is given below:
```
{
  "root": [
    {
      "id": 0,
      "parent": -1,
      "text": "ROOT",
      "relation": "null"
    },
    {
      "id": 1,
      "parent": 0,
      "text": "We propose a neural network approach ",
      "relation": "ROOT"
    },
    {
      "id": 2,
      "parent": 1,
      "text": "to benefit from the non-linearity of corpus-wide statistics for part-of-speech ( POS ) tagging . <S>",
      "relation": "enablement"
    },
    {
      "id": 3,
      "parent": 1,
      "text": "We investigated several types of corpus-wide information for the words , such as word embeddings and POS tag distributions . <S>",
      "relation": "elab-aspect"
    },
    {
      "id": 4,
      "parent": 5,
      "text": "Since these statistics are encoded as dense continuous features , ",
      "relation": "cause"
    },
    {
      "id": 5,
      "parent": 3,
      "text": "it is not trivial to combine these features ",
      "relation": "elab-addition"
    },
    {
      "id": 6,
      "parent": 5,
      "text": "comparing with sparse discrete features . <S>",
      "relation": "comparison"
    },
    {
      "id": 7,
      "parent": 1,
      "text": "Our tagger is designed as a combination of a linear model for discrete features and a feed-forward neural network ",
      "relation": "elab-aspect"
    },
    {
      "id": 8,
      "parent": 7,
      "text": "that captures the non-linear interactions among the continuous features . <S>",
      "relation": "elab-addition"
    },
    {
      "id": 9,
      "parent": 10,
      "text": "By using several recent advances in the activation functions for neural networks , ",
      "relation": "manner-means"
    },
    {
      "id": 10,
      "parent": 1,
      "text": "the proposed method marks new state-of-the-art accuracies for English POS tagging tasks . <S>",
      "relation": "evaluation"
    }
  ]
}
```
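Because each node stores only its parent id, a small sketch (using a trimmed-down version of the example instance above) shows how the flat node list can be grouped into an adjacency map for top-down traversal of the dependency tree:

```python
def build_children(nodes):
    """Map each node id to the ids of its dependents."""
    children = {node["id"]: [] for node in nodes}
    for node in nodes:
        if node["parent"] != -1:  # skip the artificial ROOT's parent
            children[node["parent"]].append(node["id"])
    return children

nodes = [
    {"id": 0, "parent": -1, "relation": "null"},
    {"id": 1, "parent": 0, "relation": "ROOT"},
    {"id": 2, "parent": 1, "relation": "enablement"},
    {"id": 3, "parent": 1, "relation": "elab-aspect"},
]
print(build_children(nodes))  # {0: [1], 1: [2, 3], 2: [], 3: []}
```

This helper is illustrative only and not part of the released dataset code.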
More such raw data instances can be found [here](https://github.com/PKU-TANGENT/SciDTB/tree/master/dataset).
### Data Fields
- id: an integer identifier for the node
- parent: an integer identifier for the parent node
- text: a string containing text for the current node
- relation: a string representing discourse relation between current node and parent node
### Data Splits
Dataset consists of three splits: `train`, `dev` and `test`.
| Train | Valid | Test |
| ------ | ----- | ---- |
| 743 | 154 | 152|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
More information can be found [here](https://aclanthology.org/P18-2071/)
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{yang-li-2018-scidtb,
title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts",
author = "Yang, An and
Li, Sujian",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-2071",
doi = "10.18653/v1/P18-2071",
pages = "444--449",
abstract = "Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work.",
}
``` |
pietrolesci | null | null | null | false | 7 | false | pietrolesci/nli_fever | 2022-04-25T09:03:28.000Z | null | false | 1eddac63112eee1fdf1966e0bca27a5ff248c772 | [] | [] | https://huggingface.co/datasets/pietrolesci/nli_fever/resolve/main/README.md | ## Overview
The original dataset can be found [here](https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0)
while the Github repo is [here](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md).
This dataset was proposed in [Combining fact extraction and verification with neural semantic matching networks](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016859) and was created as a modification of FEVER.
In the original FEVER setting, the input is a claim from Wikipedia and the expected output is a label.
However, this differs from the standard NLI formalization, which is essentially a *pair-of-sequence to label* problem.
To let NLI-related research take advantage of the FEVER dataset, the authors paired the claims in the FEVER dataset
with the textual evidence, making it a *pair-of-sequence to label* formatted dataset.
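The reformulation can be sketched as follows (the records and field names here are hypothetical illustrations of the idea, not the authors' actual conversion script, which appears further below):

```python
def to_nli_pairs(fever_records, label_map):
    """Turn FEVER-style (claim, evidence, verdict) records into
    pair-of-sequence-to-label instances."""
    pairs = []
    for rec in fever_records:
        pairs.append({
            "premise": rec["claim"],
            "hypothesis": rec["evidence"],
            "label": label_map[rec["verdict"]],
        })
    return pairs

label_map = {"SUPPORTS": 0, "NOT ENOUGH INFO": 1, "REFUTES": 2}
records = [{"claim": "X was born in 1980.",
            "evidence": "X (born 1980) is a writer.",
            "verdict": "SUPPORTS"}]
print(to_nli_pairs(records, label_map)[0]["label"])  # 0
```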
## Dataset curation
The label mapping follows the paper and is the following
```python
mapping = {
"SUPPORTS": 0, # entailment
"NOT ENOUGH INFO": 1, # neutral
"REFUTES": 2, # contradiction
}
```
Also, the "verifiable" column has been encoded as follows
```python
mapping = {"NOT VERIFIABLE": 0, "VERIFIABLE": 1}
```
Finally, a consistency check with the labels reported in the original FEVER dataset is performed.
NOTE: no label is available for the "test" split.
NOTE: there are 3 instances in common between `dev` and `train` splits.
## Code to generate the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, load_dataset, Value, Features, DatasetDict
import json

# download data from https://www.dropbox.com/s/hylbuaovqwo2zav/nli_fever.zip?dl=0
paths = {
    "train": "<some_path>/nli_fever/train_fitems.jsonl",
    "validation": "<some_path>/nli_fever/dev_fitems.jsonl",
    "test": "<some_path>/nli_fever/test_fitems.jsonl",
}

# parsing code from https://github.com/facebookresearch/anli/blob/main/src/utils/common.py
registered_jsonabl_classes = {}


def register_class(cls):
    global registered_jsonabl_classes
    if cls not in registered_jsonabl_classes:
        registered_jsonabl_classes.update({cls.__name__: cls})


def unserialize_JsonableObject(d):
    global registered_jsonabl_classes
    classname = d.pop("_jcls_", None)
    if classname:
        cls = registered_jsonabl_classes[classname]
        obj = cls.__new__(cls)  # Make instance without calling __init__
        for key, value in d.items():
            setattr(obj, key, value)
        return obj
    else:
        return d


def load_jsonl(filename, debug_num=None):
    d_list = []
    with open(filename, encoding="utf-8", mode="r") as in_f:
        print("Load Jsonl:", filename)
        for line in in_f:
            item = json.loads(line.strip(), object_hook=unserialize_JsonableObject)
            d_list.append(item)
            if debug_num is not None and 0 < debug_num == len(d_list):
                break
    return d_list


def get_original_fever() -> pd.DataFrame:
    """Get original fever datasets."""
    fever_v1 = load_dataset("fever", "v1.0")
    fever_v2 = load_dataset("fever", "v2.0")
    columns = ["id", "label"]
    splits = ["paper_test", "paper_dev", "labelled_dev", "train"]
    list_dfs = [fever_v1[split].to_pandas()[columns] for split in splits]
    list_dfs.append(fever_v2["validation"].to_pandas()[columns])
    dfs = pd.concat(list_dfs, ignore_index=False)
    dfs = dfs.drop_duplicates()
    dfs = dfs.rename(columns={"label": "fever_gold_label"})
    return dfs


def load_and_process(path: str, fever_df: pd.DataFrame) -> pd.DataFrame:
    """Load data split and merge with fever."""
    df = pd.DataFrame(load_jsonl(path))
    df = df.rename(columns={"query": "premise", "context": "hypothesis"})
    # adjust dtype
    df["cid"] = df["cid"].astype(int)
    # merge with original fever to get labels
    df = pd.merge(df, fever_df, left_on="cid", right_on="id", how="inner").drop_duplicates()
    return df


def encode_labels(df: pd.DataFrame) -> pd.DataFrame:
    """Encode labels using the mapping used in SNLI and MultiNLI"""
    mapping = {
        "SUPPORTS": 0,  # entailment
        "NOT ENOUGH INFO": 1,  # neutral
        "REFUTES": 2,  # contradiction
    }
    df["label"] = df["fever_gold_label"].map(mapping)
    # verifiable
    df["verifiable"] = df["verifiable"].map({"NOT VERIFIABLE": 0, "VERIFIABLE": 1})
    return df


if __name__ == "__main__":
    fever_df = get_original_fever()

    dataset_splits = {}
    for split, path in paths.items():
        # from json to dataframe and merge with fever
        df = load_and_process(path, fever_df)
        if not len(df) > 0:
            print(f"Split `{split}` has no matches")
            continue

        if split == "train":
            # train must have same labels
            assert sum(df["fever_gold_label"] != df["label"]) == 0
            df = df.drop(columns=["label"])

        # encode labels using the default mapping used by other nli datasets
        # i.e, entailment: 0, neutral: 1, contradiction: 2
        df = encode_labels(df)

        # cast to dataset
        features = Features(
            {
                "cid": Value(dtype="int64", id=None),
                "fid": Value(dtype="string", id=None),
                "id": Value(dtype="int32", id=None),
                "premise": Value(dtype="string", id=None),
                "hypothesis": Value(dtype="string", id=None),
                "verifiable": Value(dtype="int64", id=None),
                "fever_gold_label": Value(dtype="string", id=None),
                "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
            }
        )

        if "test" in path:
            # no features for test set
            df["label"] = -1
            df["verifiable"] = -1
            df["fever_gold_label"] = "not available"

        dataset = Dataset.from_pandas(df, features=features)
        dataset_splits[split] = dataset

    nli_fever = DatasetDict(dataset_splits)
    nli_fever.push_to_hub("pietrolesci/nli_fever", token="<your token>")

    # check overlap between splits
    from itertools import combinations

    for i, j in combinations(dataset_splits.keys(), 2):
        print(
            f"{i} - {j}: ",
            pd.merge(
                dataset_splits[i].to_pandas(),
                dataset_splits[j].to_pandas(),
                on=["premise", "hypothesis", "label"],
                how="inner",
            ).shape[0],
        )
    #> train - dev: 3
    #> train - test: 0
    #> dev - test: 0
``` |
pietrolesci | null | null | null | false | 1 | false | pietrolesci/conj_nli | 2022-04-25T13:27:25.000Z | null | false | a3923be8c49a8c8b5025737e64919faecc7576a7 | [] | [] | https://huggingface.co/datasets/pietrolesci/conj_nli/resolve/main/README.md | ## Overview
The original dataset can be found [here](https://github.com/swarnaHub/ConjNLI). It has been
proposed in [ConjNLI: Natural Language Inference Over Conjunctive Sentences](https://aclanthology.org/2020.emnlp-main.661/).
This dataset is a stress test for natural language inference over conjunctive sentences,
where the premise differs from the hypothesis by conjuncts removed, added, or replaced.
## Dataset curation
The label mapping is the usual `{"entailment": 0, "neutral": 1, "contradiction": 2}`
used in NLI datasets. Note that labels for the `test` split are not available.
Also, the `train` split is originally named `adversarial_train_15k`.
There are 2 instances (joining on "premise", "hypothesis", "label") present in both `train` and `dev`.
Finally, a few instances in the `train` set have no label; they are removed.
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict

# download data from repo https://github.com/swarnaHub/ConjNLI
paths = {
    "train": "<path_to_folder>/ConjNLI-master/data/NLI/adversarial_train_15k.tsv",
    "dev": "<path_to_folder>/ConjNLI-master/data/NLI/conj_dev.tsv",
    "test": "<path_to_folder>/ConjNLI-master/data/NLI/conj_test.tsv",
}

dataset_splits = {}
for split, path in paths.items():
    # load data
    df = pd.read_csv(paths[split], sep="\t")

    # encode labels using the default mapping used by other nli datasets
    # i.e, entailment: 0, neutral: 1, contradiction: 2
    df.columns = df.columns.str.lower()
    if "test" in path:
        df["label"] = -1
    else:
        # remove empty labels
        df = df.loc[~df["label"].isna()]
        # encode labels
        df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})

    # cast to dataset
    features = Features({
        "premise": Value(dtype="string", id=None),
        "hypothesis": Value(dtype="string", id=None),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    })
    dataset = Dataset.from_pandas(df, features=features)
    dataset_splits[split] = dataset

conj_nli = DatasetDict(dataset_splits)
conj_nli.push_to_hub("pietrolesci/conj_nli", token="<token>")

# check overlap between splits
from itertools import combinations

for i, j in combinations(conj_nli.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            conj_nli[i].to_pandas(),
            conj_nli[j].to_pandas(),
            on=["premise", "hypothesis", "label"], how="inner"
        ).shape[0],
    )
#> train - dev: 2
#> train - test: 0
#> dev - test: 0
``` |
Fatima-Gh | null | null | null | false | 1 | false | Fatima-Gh/GLARE | 2022-06-09T14:00:29.000Z | null | false | 9cbf1e972349722deae84615618ee6ad4d41e36e | [] | [] | https://huggingface.co/datasets/Fatima-Gh/GLARE/resolve/main/README.md | [![CC BY 4.0][cc-by-shield]][cc-by]
[](https://doi.org/10.5281/zenodo.6457824)
# GLARE: Google Apps Arabic Reviews
Dataset and Code of "GLARE: Google Apps Arabic Reviews" paper.
You can download the paper via: [[Github]](GLARE.pdf)
## Paper Summary
We introduce the GLARE: Google Apps Arabic Reviews dataset, a collection of 76M reviews from 9,980 Android apps, collected from the Saudi store of Google Play.
## Preparation
#### Below are details about each file; please ensure that you have enough storage before downloading the data.
| Data Type | File Name | File Size | File Type |
| ------------------ |---------------- | -------------- |-------------- |
| raw | apps | 4.1 MB | CSV |
| raw | reviews | 17 GB | CSV |
| raw | categories/ | 4.3 MB | CSV
| engineered | apps | 3.8 MB | CSV
| engineered | reviews | 21.9 GB | CSV
| engineered | vocabulary | 530.5 MB | CSV
## File Specifications
- **apps.csv**: File that contains apps metadata.
- **reviews.csv**: File that contains reviews and reviews metadata.
- **categories/**: Folder that contains 59 CSV files; each file corresponds to one category, with apps and app metadata scraped from the top 200 free apps for that category.
- **vocabulary.csv**: File that contains vocabulary set generated from reviews with additional engineered features (word length, word frequency, has noise or digits, ..etc.)
### Raw Data
#### Apps Metadata
```
{
  "title":"application name/title",
  "app_id":"application unique identifier",
  "url":"application url at Google PlayStore",
  "icon":"url for image object",
  "developer":"developer name",
  "developer_id":"developer unique identifier",
  "summary":"short description of the application",
  "rating":"application accumulated rating"
}
```
#### Reviews Metadata
```
{
  "at":"review datetime",
  "content":"review text",
  "replied_at":"developer reply datetime",
  "reply_content":"developer reply content",
  "review_created_version":"user application version during the time of review",
  "review_id":"review unique identifier",
  "rating":"user rating",
  "thumbs_up_count":"number of users that agree with the reviewer",
  "user_name":"user display name",
  "app_id":"application unique identifier"
}
```
### Engineered Data
#### Apps Metadata
Same as apps.csv in raw data with the following additions:
```
{
  "reviews_count":"number of reviews for the application",
  "categories":"list of application categories",
  "categories_count":"number of application categories"
}
```
#### Reviews Metadata
Same as reviews.csv in raw data with the following additions:
```
{
  "tokenized_review":"list of review words tokenized on white-space",
  "words_count":"number of words in review"
}
```
#### Vocabulary
```
{
  "word":"term text",
  "length":"word characters count",
  "frequency":"word occurrences in the reviews dataset",
  "has_noise":"true or false if word contains anything non-arabic alphanumeric",
  "noise":"list of noise (anything non-arabic alphanumeric) in the word",
  "has_digits":"true or false if word contains arabic or hindi digits",
  "digits":"list of digits in the word"
}
```
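A rough sketch of how such vocabulary features could be derived from the reviews (my own approximation of the released preprocessing, treating anything outside the Arabic letter block and the Western/Eastern Arabic digit sets as noise):

```python
from collections import Counter

ARABIC_DIGITS = set("0123456789٠١٢٣٤٥٦٧٨٩")

def is_arabic_letter(ch: str) -> bool:
    # Basic Arabic Unicode block; an approximation of "arabic alphanumeric".
    return "\u0600" <= ch <= "\u06FF"

def vocabulary_features(reviews):
    """Build one feature row per unique whitespace-tokenized word."""
    freq = Counter(word for review in reviews for word in review.split())
    rows = []
    for word, count in freq.items():
        digits = [ch for ch in word if ch in ARABIC_DIGITS]
        noise = [ch for ch in word
                 if not is_arabic_letter(ch) and ch not in ARABIC_DIGITS]
        rows.append({
            "word": word, "length": len(word), "frequency": count,
            "has_digits": bool(digits), "digits": digits,
            "has_noise": bool(noise), "noise": noise,
        })
    return rows

rows = vocabulary_features(["جيد جيد 123"])
print(len(rows))  # 2
```

The character-class heuristics here are assumptions for illustration; the actual noise definition used by the authors may differ.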
### Folders Structure
- Data are prepared as raw data or engineered data.
- Download the dataset files: [Google Drive](https://drive.google.com/drive/folders/1Cb61K3wFdVlIQfKouchsUpn5oXdJbhyg?usp=sharing) | [Zenodo](https://zenodo.org/record/6457824#.Ylv-gX9Bz8w) | [Alternative Google Drive](https://drive.google.com/drive/folders/1jWCCyJPKFf6Q-1zDuGRUBi6XtlmkyHlt?usp=sharing)
- The directory structure is as follow:
```
data
├── raw
│   ├── apps.csv
│   ├── reviews.csv
│   └── categories/
└── engineered
    ├── apps.csv
    ├── reviews.csv
    └── vocabulary.csv
```
## Citation
If you use this dataset please cite as:
```
@dataset{alghamdi_fatima_2022_6457824,
author = {AlGhamdi, Fatima and
Mohammed, Reem and
Al-Khalifa, Hend and
Alowisheq, Areeb},
title = {GLARE: Google Apps Arabic Reviews Dataset},
month = apr,
year = 2022,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.6457824},
url = {https://doi.org/10.5281/zenodo.6457824}
}
```
## License
This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].
[![CC BY 4.0][cc-by-image]][cc-by]
[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/lewtun__this-is-a-test-name__1648220072 | 2022-03-25T14:54:37.000Z | null | false | 34554c2cebd75f8104b5a8128d3685802793558c | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:This is a test name",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/lewtun__this-is-a-test-name__1648220072/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: This is a test name
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test name
|
null | null | @inproceedings{rizwan2020hate,
title={Hate-speech and offensive language detection in roman Urdu},
author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={2512--2522},
year={2020}
} | The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios. \ | false | 11 | false | roman_urdu_hate_speech | 2022-11-03T15:51:00.000Z | null | false | db1f3de69ae20c2e0360ca6fc5aa1de4f0caa7c1 | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:ur",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"tags:binary classification"... | https://huggingface.co/datasets/roman_urdu_hate_speech/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ur
license:
- mit
multilinguality:
- monolingual
pretty_name: roman_urdu_hate_speech
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
tags:
- binary classification
dataset_info:
- config_name: Coarse_Grained
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
0: Abusive/Offensive
1: Normal
splits:
- name: test
num_bytes: 218087
num_examples: 2002
- name: train
num_bytes: 725719
num_examples: 7208
- name: validation
num_bytes: 79759
num_examples: 800
download_size: 927937
dataset_size: 1023565
- config_name: Fine_Grained
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
0: Abusive/Offensive
1: Normal
2: Religious Hate
3: Sexism
4: Profane/Untargeted
splits:
- name: test
num_bytes: 219359
num_examples: 2002
- name: train
num_bytes: 723670
num_examples: 7208
- name: validation
num_bytes: 723670
num_examples: 7208
download_size: 1519423
dataset_size: 1666699
---
# Dataset Card for roman_urdu_hate_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [roman_urdu_hate_speech homepage](https://aclanthology.org/2020.emnlp-main.197/)
- **Repository:** [roman_urdu_hate_speech repository](https://github.com/haroonshakeel/roman_urdu_hate_speech)
- **Paper:** [Hate-Speech and Offensive Language Detection in Roman Urdu](https://aclanthology.org/2020.emnlp-main.197.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [M. Haroon Shakeel](mailto:m.shakeel@lums.edu.pk)
### Dataset Summary
The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold standard for two sub-tasks. The first sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language); these labels are self-explanatory, and the authors refer to this sub-task as coarse-grained classification. The second sub-task defines Hate-Offensive content with four labels at a granular level; these labels are the most relevant for the demographic of users who converse in Roman Urdu and are defined in the related literature, and the authors refer to this sub-task as fine-grained classification. The objective behind creating two gold standards is to enable researchers to evaluate hate-speech detection approaches in both easier (coarse-grained) and more challenging (fine-grained) scenarios.
### Supported Tasks and Leaderboards
- `multi-class-classification`, `text-classification-other-binary-classification`: The dataset can be used both for multi-class classification and for binary classification, as it contains both coarse-grained and fine-grained labels.
### Languages
The text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.
## Dataset Structure
### Data Instances
The dataset consists of two segments: coarse-grained examples and fine-grained examples. In the coarse-grained segment tweets are labelled as abusive or normal, whereas the fine-grained segment associates a tweet with one of several classes of hate.
For the coarse-grained segment of the dataset, the label mapping is:
Task 1: Coarse-grained Classification Labels
- 0: Abusive/Offensive
- 1: Normal

For the fine-grained segment of the dataset, the label mapping is:
Task 2: Fine-grained Classification Labels
- 0: Abusive/Offensive
- 1: Normal
- 2: Religious Hate
- 3: Sexism
- 4: Profane/Untargeted
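Expressed as plain Python dictionaries, the two label schemes above look as follows (a convenience sketch; the dataset itself stores only the integer labels):

```python
# Label schemes of the RUHSOLD dataset, mirroring the mappings above.
COARSE_LABELS = {
    0: "Abusive/Offensive",
    1: "Normal",
}

FINE_LABELS = {
    0: "Abusive/Offensive",
    1: "Normal",
    2: "Religious Hate",
    3: "Sexism",
    4: "Profane/Untargeted",
}

def id2label(label_id, fine_grained=True):
    """Map an integer class id to its human-readable name."""
    mapping = FINE_LABELS if fine_grained else COARSE_LABELS
    return mapping[label_id]

print(id2label(3))                      # Sexism
print(id2label(0, fine_grained=False))  # Abusive/Offensive
```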
An example from Roman Urdu Hate Speech looks as follows:
```
{
'tweet': 'there are some yahodi daboo like imran chore zakat khore',
'label': 0
}
```
### Data Fields
- `tweet`: a string containing the tweet. The tweets were selected by randomly sampling 10,000 tweets from a base of 50,000 tweets and were annotated for the dataset.
- `label`: an annotation produced manually by three independent annotators; during the annotation process, all conflicts were resolved by a majority vote among the three annotators.
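The majority-vote resolution described above can be sketched in a few lines (a generic illustration of the scheme, not the curators' actual tooling):

```python
from collections import Counter

def resolve_label(annotations):
    """Resolve one tweet's label by majority vote among independent annotators.

    With three annotators and a binary task, no ties are possible; for the
    fine-grained task a plurality winner is returned.
    """
    label, _count = Counter(annotations).most_common(1)[0]
    return label

# Three annotators label the same tweet (0 = Abusive/Offensive, 1 = Normal).
print(resolve_label([0, 1, 0]))  # 0
```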
### Data Splits
The data of each segment, Coarse-Grained and Fine-Grained, is further split into training, validation and test sets. The split uses a 70/20/10 ratio with stratification based on the fine-grained labels.
Stratified sampling is used to preserve the same label ratio across all splits.
The final split sizes are as follows:

| Train | Valid | Test |
|:-----:|:-----:|:----:|
| 7209  | 2003  | 801  |
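Stratification of this kind can be sketched in pure Python (an illustration of the idea, not the script used to build the released splits):

```python
import random
from collections import defaultdict

def stratified_split(labels, ratios=(0.7, 0.2, 0.1), seed=0):
    """Split example indices into train/valid/test, preserving label ratios.

    `labels` holds the (fine-grained) label of each example; the returned
    lists contain indices into `labels`.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)

    train, valid, test = [], [], []
    for indices in by_label.values():
        rng.shuffle(indices)
        n = len(indices)
        n_train = int(n * ratios[0])
        n_valid = int(n * ratios[1])
        train.extend(indices[:n_train])
        valid.extend(indices[n_train:n_train + n_valid])
        test.extend(indices[n_train + n_valid:])
    return train, valid, test

labels = [0] * 80 + [1] * 20  # imbalanced toy labels
train, valid, test = stratified_split(labels)
print(len(train), len(valid), len(test))  # 70 20 10
```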
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, Asim Karim during work done at Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.
### Licensing Information
The licensing status of the dataset depends on the legal status of the [Roman Urdu Hate Speech Dataset Repository](https://github.com/haroonshakeel/roman_urdu_hate_speech), which is under the MIT License.
### Citation Information
```bibtex
@inproceedings{rizwan2020hate,
title={Hate-speech and offensive language detection in roman Urdu},
author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={2512--2522},
year={2020}
}
```
### Contributions
Thanks to [@bp-high](https://github.com/bp-high) for adding this dataset. |
nndhung | null | null | null | false | 1 | false | nndhung/garlic | 2022-03-26T02:27:35.000Z | null | false | 0c8f5b621e9809eda4a0c0ff337eaefc5409a635 | [] | [] | https://huggingface.co/datasets/nndhung/garlic/resolve/main/README.md | |
benjamin | null | null | null | false | 1 | false | benjamin/ner-uk | 2022-10-26T11:47:43.000Z | null | false | 551df3187d04f5f7ea4d6ebf062d016a72a2680c | [] | [
"language:uk",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/benjamin/ner-uk/resolve/main/README.md | ---
language:
- uk
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# lang-uk's ner-uk dataset
A dataset for Ukrainian Named Entity Recognition.
The original dataset is located at https://github.com/lang-uk/ner-uk. All credit for creation of the dataset goes to the contributors of https://github.com/lang-uk/ner-uk.
# License
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/Dataset" property="dct:title" rel="dct:type">"Корпус NER-анотацій українських текстів"</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="https://github.com/lang-uk" property="cc:attributionName" rel="cc:attributionURL">lang-uk</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/lang-uk/ner-uk" rel="dct:source">https://github.com/lang-uk/ner-uk</a>. |
laion | null | null | null | false | 2 | false | laion/laion2B-en-safety | 2022-03-26T11:59:21.000Z | null | false | f014e9cf2ffff783ac0602c562d8e43cd32fd37f | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-en-safety/resolve/main/README.md | ---
license: cc-by-4.0
---
|
laion | null | null | null | false | 1 | false | laion/laion2B-multi-safety | 2022-03-26T12:27:00.000Z | null | false | 75f15407ce97cff25abfc8291ccf6cea23af4fec | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-multi-safety/resolve/main/README.md | ---
license: cc-by-4.0
---
|
laion | null | null | null | false | 1 | false | laion/laion1B-nolang-safety | 2022-03-26T11:47:42.000Z | null | false | 914b2c5024f2c90a74ad86e8d0f58e7c26f94e0b | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion1B-nolang-safety/resolve/main/README.md | ---
license: cc-by-4.0
---
|
laion | null | null | null | false | 14 | false | laion/laion5B-index | 2022-03-27T13:59:52.000Z | null | false | 7a92d8664a418ba05eef49dd467b8209b2f1b9af | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion5B-index/resolve/main/README.md | ---
license: cc-by-4.0
---
|
Marmoot | null | null | null | false | 1 | false | Marmoot/Fake_News_jpposadas | 2022-03-26T13:51:48.000Z | null | false | 7f6ffb530d7e1b220a1b87b006450452a3b5e1af | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Marmoot/Fake_News_jpposadas/resolve/main/README.md | ---
license: cc-by-4.0
---
|
Georgii | null | null | null | false | 1 | false | Georgii/russianPoetry | 2022-03-26T16:32:30.000Z | null | false | 1d387cb76539715be292ce6cabc052efb0e79918 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Georgii/russianPoetry/resolve/main/README.md | ---
license: mit
---
|
MorVentura | null | null | null | false | 1 | false | MorVentura/TRBLLmaker | 2022-03-26T18:44:51.000Z | null | false | 264197c1d45d2aa4c8dc4e992e89e432a6e889c4 | [] | [
"TODO:Add YAML tags here."
] | https://huggingface.co/datasets/MorVentura/TRBLLmaker/resolve/main/README.md | ---
TODO: Add YAML tags here.
---
name: **TRBLLmaker**
annotations_creators: found
language_creators: found
languages: en-US
licenses: Genius-Ventura-Toker
multilinguality: monolingual
source_datasets: original
task_categories: sequence-modeling
task_ids: sequence-modeling-seq2seq_generate
# Dataset Card for TRBLLmaker Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Split info](#Split-info)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/venturamor/TRBLLmaker-NLP
- **Paper:** in git
### Dataset Summary
TRBLLmaker - To Read Between Lyrics Lines.
Dataset used to train a model that takes several lines of a song's lyrics as input and generates a possible interpretation / meaning of them, or uses the songs' metadata for various tasks such as classification.
This dataset is based on the 'Genius' website's data, which contains a global collection of song lyrics and provides annotations and interpretations of song lyrics as well as additional music knowledge.
We used the 'Genius' API, created a private client and extracted the relevant raw data from the Genius servers.
We extracted the most popular songs in each genre - pop, rap, rock, country and R&B. Afterwards, we created a varied pool of 150 artists associated with different music styles and periods, and extracted a maximum of 100 samples from each.
We combined all the data, without repetitions, into one final database. After cleaning out non-English lyrics, we obtained our final corpus, which contains 8,808 different songs and a total of 60,630 samples, where each sample is a specific sentence from a song's lyrics paired with its top-rated annotation.
### Supported Tasks and Leaderboards
Seq2Seq
### Languages
[En] - English
## Dataset Structure
### Data Fields
We stored each sample in a 'SongInfo' structure with the following attributes: title, genre, annotations and the song's metadata.
The metadata contains the artist's name, the song id on the server, the lyrics, and statistics such as page views.
### Data Splits
- train / train_songs
- test / test_songs
- validation / validation_songs
## Split info
Both the songs and the samples are split with the following ratios:
train [0.64 (0.8 * 0.8)], test [0.2], validation [0.16 (0.8 * 0.2)]
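The 0.64 / 0.16 / 0.2 ratios above come from applying an 80/20 split twice; a small sketch makes the arithmetic concrete (illustrative, not the authors' code):

```python
def two_stage_sizes(n, test_frac=0.2, valid_frac=0.2):
    """Split sizes for an 80/20 train/test split followed by an 80/20
    train/validation split of the remainder: 0.8 * 0.8 = 0.64 train,
    0.8 * 0.2 = 0.16 validation, 0.2 test."""
    n_test = round(n * test_frac)
    remainder = n - n_test
    n_valid = round(remainder * valid_frac)
    n_train = remainder - n_valid
    return n_train, n_valid, n_test

print(two_stage_sizes(1000))  # (640, 160, 200)
```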
## Dataset Creation
### Source Data
Genius - https://genius.com/
### Annotations
#### Who are the annotators?
Top-ranked annotations by users on the Genius website / official Genius annotations
## Considerations for Using the Data
### Social Impact of Dataset
We are excited about the future of applying attention-based models to tasks such as meaning generation.
We hope this dataset will encourage more NLP researchers to improve the way we understand and enjoy songs, since achieving artistic comprehension is another step toward the goal of robust AI.
### Other Known Limitations
The artists list can be found here.
## Additional Information
### Dataset Curators
This dataset was created by Mor Ventura and Michael Toker.
### Licensing Information
All source data belongs to Genius.
### Contributions
Thanks to [@venturamor, @tokeron](https://github.com/venturamor/TRBLLmaker-NLP) for adding this dataset. |
jglaser | null | @InProceedings{huggingface:dataset,
title = {jglaser/pdbbind_complexes},
author={Jens Glaser, ORNL
},
year={2022}
} | A dataset to fine-tune language models on protein-ligand binding affinity and contact prediction. | false | 1 | false | jglaser/pdbbind_complexes | 2022-05-14T20:15:20.000Z | null | false | e7b37332d07b614d95d1dd7c99904f825180f08a | [] | [
"tags:molecules",
"tags:chemistry",
"tags:SMILES"
] | https://huggingface.co/datasets/jglaser/pdbbind_complexes/resolve/main/README.md | ---
tags:
- molecules
- chemistry
- SMILES
---
## How to use the data sets
This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
Every (x,y,z) ligand coordinate maps onto a SMILES token and is *nan* if the token does not represent an atom.
Every receptor coordinate maps onto the C-alpha coordinate of that residue.
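To the best of my knowledge, the Schwaller regex referred to above is the pattern below (treat the exact pattern as an assumption and verify against the dataset's tokenization); atom tokens are the ones that receive real coordinates:

```python
import re

# Atom-level SMILES tokenizer from P. Schwaller et al. (Molecular Transformer);
# the exact pattern is reproduced from memory and should be double-checked.
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
    r"|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize(smiles):
    return SMILES_REGEX.findall(smiles)

tokens = tokenize("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
# A token maps to a real (x, y, z) ligand coordinate iff it is an atom
# (bracket atom or element symbol); bonds, branches and ring-closure
# digits map to nan, as described above.
is_atom = [t.startswith("[") or t[0].isalpha() for t in tokens]
print(tokens[:7])  # ['C', 'C', '(', '=', 'O', ')', 'O']
```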
The dataset can be used to fine-tune a language model; all data comes from PDBBind-cn.
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/pdbbind_complexes",split='train[:90%]')
validation = load_dataset("jglaser/pdbbind_complexes",split='train[90%:]')
```
### Pre-process yourself
To manually perform the preprocessing, download the data sets from PDBBind-cn.
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then log in and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
|
Jiangjie | null | null | null | false | 11 | false | Jiangjie/ekar_chinese | 2022-09-18T14:11:53.000Z | null | false | c47ef98d8e371d36831d0e2885659a4820f10da1 | [] | [
"language:zh",
"license:afl-3.0",
"size_categories:1K<n<2K",
"source_datasets:original",
"task_categories:question-answering",
"task_categories:text-generation",
"task_ids:analogical-qa",
"task_ids:explanation-generation"
] | https://huggingface.co/datasets/Jiangjie/ekar_chinese/resolve/main/README.md | ---
language:
- zh
license:
- afl-3.0
size_categories:
- 1K<n<2K
source_datasets:
- original
task_categories:
- question-answering
- text-generation
task_ids:
- analogical-qa
- explanation-generation
---
# Dataset Card for ekar_chinese
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** jjchen19@fudan.edu.cn
### Dataset Summary
***New!***(9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the `v1.0` branch in the repo. For more information please refer to https://ekar-leaderboard.github.io.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underlying process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
- `EASY mode`: where query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
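A minimal sketch of what the two modes mean for the model input, using the sample instance from this card (the prompt template is my own choice, not prescribed by the benchmark):

```python
def build_prompt(example, mode="HARD"):
    """Format an E-KAR instance as a multiple-choice prompt.

    EASY mode may include the query explanation (explanation[0]);
    HARD mode must not include any explanation in the input.
    """
    lines = [f"Query: {example['question']}"]
    if mode == "EASY":
        lines.append(f"Hint: {example['explanation'][0]}")
    for label, text in zip(example["choices"]["label"], example["choices"]["text"]):
        lines.append(f"{label}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)

example = {
    "question": "plant:coal",
    "choices": {
        "label": ["A", "B", "C", "D"],
        "text": [
            "white wine:aged vinegar",
            "starch:corn",
            "milk:yogurt",
            "pickled cabbage:cabbage",
        ],
    },
    "answerKey": "C",
    "explanation": ['"plant" is the raw material of "coal".'],
}

print(build_prompt(example, mode="EASY"))
```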
### Languages
This dataset is in Chinese, with its [English version](https://huggingface.co/datasets/Jiangjie/ekar_english).
## Dataset Structure
### Data Instances
```json
{
"id": "982f17-en",
"question": "plant:coal",
"choices": {
"label": [
"A",
"B",
"C",
"D"
],
"text": [
"white wine:aged vinegar",
"starch:corn",
"milk:yogurt",
"pickled cabbage:cabbage"
]
},
"answerKey": "C",
"explanation": [
"\"plant\" is the raw material of \"coal\".",
"both \"white wine\" and \"aged vinegar\" are brewed.",
"\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
"\"yogurt\" is made from \"milk\".",
"\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
],
"relation": [
[["plant", "coal", "R3.7"]],
[["white wine", "aged vinegar", "R2.4"]],
[["corn", "starch", "R3.7"]],
[["milk", "yogurt", "R3.7"]],
[["cabbage", "pickled cabbage", "R3.7"]]
]
}
```
### Data Fields
- id: a string identifier for each example.
- question: query terms.
- choices: candidate answer terms.
- answerKey: correct answer.
- explanation: explanations for query (1st) and candidate answers (2nd-5th).
- relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
### Data Splits
| name |train|validation|test|
|:-----:|:---:|:--------:|:--:|
|default| 1155 | 165 | 335 |
|description| | | blinded |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, it may contain information biased to Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
## Additional Information
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
[Needs More Information]
### Citation Information
```latex
@inproceedings{chen-etal-2022-e,
title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
author = "Chen, Jiangjie and
Xu, Rui and
Fu, Ziquan and
Shi, Wei and
Li, Zhongqiao and
Zhang, Xinbo and
Sun, Changzhi and
Li, Lei and
Xiao, Yanghua and
Zhou, Hao",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.311",
pages = "3941--3955",
}
```
|
Jiangjie | null | null | null | false | 3 | false | Jiangjie/ekar_english | 2022-09-18T14:10:44.000Z | null | false | 1f97699e221e2341244cf8327013956005305cf8 | [] | [
"language:en",
"license:afl-3.0",
"size_categories:1K<n<2K",
"source_datasets:original",
"task_categories:question-answering",
"task_categories:text-generation",
"task_ids:analogical-qa",
"task_ids:explanation-generation"
] | https://huggingface.co/datasets/Jiangjie/ekar_english/resolve/main/README.md | ---
language:
- en
license:
- afl-3.0
size_categories:
- 1K<n<2K
source_datasets:
- original
task_categories:
- question-answering
- text-generation
task_ids:
- analogical-qa
- explanation-generation
---
# Dataset Card for ekar_english
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** jjchen19@fudan.edu.cn
### Dataset Summary
***New!***(9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the `v1.0` branch in the repo. For more information please refer to https://ekar-leaderboard.github.io.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underlying process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
- `EASY mode`: where query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
### Languages
This dataset is in English, which is translated from [its Chinese version](https://huggingface.co/datasets/Jiangjie/ekar_chinese/)
## Dataset Structure
### Data Instances
```json
{
"id": "982f17-en",
"question": "plant:coal",
"choices": {
"label": [
"A",
"B",
"C",
"D"
],
"text": [
"white wine:aged vinegar",
"starch:corn",
"milk:yogurt",
"pickled cabbage:cabbage"
]
},
"answerKey": "C",
"explanation": [
"\"plant\" is the raw material of \"coal\".",
"both \"white wine\" and \"aged vinegar\" are brewed.",
"\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
"\"yogurt\" is made from \"milk\".",
"\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
],
"relation": [
[["plant", "coal", "R3.7"]],
[["white wine", "aged vinegar", "R2.4"]],
[["corn", "starch", "R3.7"]],
[["milk", "yogurt", "R3.7"]],
[["cabbage", "pickled cabbage", "R3.7"]]
]
}
```
### Data Fields
- id: a string identifier for each example.
- question: query terms.
- choices: candidate answer terms.
- answerKey: correct answer.
- explanation: explanations for query (1st) and candidate answers (2nd-5th).
- relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
### Data Splits
| name |train|validation|test|
|:-----:|:---:|:--------:|:--:|
|default| 870| 119| 262|
|description| | | blinded |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, despite the effort that the authors try to remove or rewrite such problems, it may still contain information biased to Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
3. The English version of E-KAR is machine-translated and post-edited by humans. Although the authors have tried their best to maintain the translation quality, there could be some unsatisfying samples in the English dataset, e.g., culture-specific ones, ambiguous ones after translation, etc.
## Additional Information
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
[Needs More Information]
### Citation Information
```latex
@inproceedings{chen-etal-2022-e,
title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
author = "Chen, Jiangjie and
Xu, Rui and
Fu, Ziquan and
Shi, Wei and
Li, Zhongqiao and
Zhang, Xinbo and
Sun, Changzhi and
Li, Lei and
Xiao, Yanghua and
Zhou, Hao",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.311",
pages = "3941--3955",
}
``` |
atenglens | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | false | 1 | false | atenglens/taiwanese_english_translation | 2022-10-24T19:51:45.000Z | null | false | a1a0b053c5c1fdfb9a26a5557be62272b1582b2c | [] | [
"language_creators:other",
"language:tw",
"language:en",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|other",
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:translation",
"ta... | https://huggingface.co/datasets/atenglens/taiwanese_english_translation/resolve/main/README.md | ---
annotations_creators: []
language_creators:
- other
language:
- tw
- en
license: []
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- question-answering
- text2text-generation
- text-generation
- translation
task_ids:
- language-modeling
pretty_name: taiwanese_english_translation
tags:
- conditional-text-generation
---
# Dataset Card for taiwanese_english_translation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://taigi.fhl.net/list.html
### Dataset Summary
[More Information Needed]
### Languages
Source Language: Taiwanese (Tailo romanization system)
Target Language: English
## Dataset Structure
The CSV file has two columns, `Tailo,English`: each row contains a Tailo sentence and its English translation.
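A minimal sketch of reading such a file with the standard library (the header and both rows are placeholders assumed for illustration, not real data from the corpus):

```python
import csv
import io

# Hypothetical file contents in the "Tailo,English" layout described above.
sample_file = io.StringIO(
    "Tailo,English\n"
    "tailo_sentence_1,english_translation_1\n"
    "tailo_sentence_2,english_translation_2\n"
)

pairs = [(row["Tailo"], row["English"]) for row in csv.DictReader(sample_file)]
print(pairs[0])  # ('tailo_sentence_1', 'english_translation_1')
```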
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@atenglens](https://github.com/atenglens) for adding this dataset. |
nadhifikbarw | null | null | null | false | 1 | false | nadhifikbarw/id_ohsuhmed | 2022-10-25T10:03:35.000Z | null | false | e3a39e3d6c1ff7e58cbeac653518a300a875adfd | [] | [
"language:id",
"task_categories:text-classification",
"source:http://disi.unitn.it/moschitti/corpora.htm"
] | https://huggingface.co/datasets/nadhifikbarw/id_ohsuhmed/resolve/main/README.md | ---
language:
- id
task_categories:
- text-classification
source:
- http://disi.unitn.it/moschitti/corpora.htm
---
Machine translated Ohsumed collection (EN to ID)
Original corpora: http://disi.unitn.it/moschitti/corpora.htm
Translated using: https://huggingface.co/Helsinki-NLP/opus-mt-en-id
Compatible with HuggingFace text-classification script (Tested in 4.17)
https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/text-classification
[Moschitti, 2003a]. Alessandro Moschitti, Natural Language Processing and Text Categorization: a study on the reciprocal beneficial interactions, PhD thesis, University of Rome Tor Vergata, Rome, Italy, May 2003. |
stjokerli | null | null | null | false | 1 | false | stjokerli/TextToText_DocNLI_seqio | 2022-03-27T14:46:59.000Z | null | false | dcfc1a380a19b9f4b30cec04a4387be24da0b2b3 | [] | [] | https://huggingface.co/datasets/stjokerli/TextToText_DocNLI_seqio/resolve/main/README.md | Text-to-text implementation based on https://github.com/salesforce/DocNLI
DatasetDict({
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 942314
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 234258
})
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 267086
})
}) |
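As an illustration, the conversion into the `inputs`/`targets` fields shown above can be sketched as follows. The prompt template is an assumption for illustration only, not taken from the DocNLI repository:

```python
# Sketch: convert a DocNLI premise/hypothesis pair into the
# text-to-text format ('idx' / 'inputs' / 'targets' fields above).
# The "docnli premise: ... hypothesis: ..." template is assumed.
def docnli_to_text2text(idx, premise, hypothesis, label):
    return {
        "idx": idx,
        "inputs": f"docnli premise: {premise} hypothesis: {hypothesis}",
        "targets": label,  # e.g. "entailment" or "not_entailment"
    }

example = docnli_to_text2text(0, "The cat sat on the mat.", "A cat is sitting.", "entailment")
print(example["inputs"])
```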
IIC | null | null | null | false | 1 | false | IIC/spanish_biomedical_crawled_corpus_splitted | 2022-10-23T05:25:14.000Z | null | false | 4739b6841d7137b528e471361de472e0d6b6ca99 | [] | [
"arxiv:2109.07765",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:es",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:IIC/spanish_biomedical_crawled_corpus",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/IIC/spanish_biomedical_crawled_corpus_splitted/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- es
multilinguality:
- monolingual
pretty_name: Spanish_Biomedical_Crawled_Corpus_Splitted
size_categories:
- 1M<n<10M
source_datasets:
- IIC/spanish_biomedical_crawled_corpus
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# Spanish_Biomedical_Crawled_Corpus_Splitted
This is a dataset retrieved directly from [this link](https://zenodo.org/record/5510033#.Ykho3-hByUk), which was originally developed by [BSC](https://temu.bsc.es/). This is a direct copy-paste of the usage, limitations and license of the original dataset:
```
Description
The largest Spanish biomedical and health corpus to date, gathered with a massive Spanish health-domain crawler: more than 3,000 URLs were downloaded and preprocessed. The collected data have been preprocessed to produce the CoWeSe (Corpus Web Salud Español) resource, a large-scale and high-quality corpus intended for biomedical and health NLP in Spanish.
Directory structure
CoWeSe.txt: the CoWeSe corpus; an empty line separates each document
License
The corpus is released under this licensing scheme:
- We do not own any of the text from which these data has been extracted and preprocessed to be ready for use for language modeling tasks.
- We license the actual packaging of these data under a CC0 1.0 Universal License
Notice and take down policy
Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
Clearly identify the copyrighted work claimed to be infringed.
Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate
Copyright (c) 2021 Text Mining Unit at BSC
```
License, distribution and usage conditions of the original dataset apply.
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset.
### Citation
```
@misc{carrino2021spanish,
title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models},
author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas},
year={2021},
eprint={2109.07765},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
IIC | null | null | null | false | 1 | false | IIC/ms_marco_es | 2022-10-23T05:26:06.000Z | null | false | b2f5ef77a9a850de5db162da8fd9d76c88ecc7f9 | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:es",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:ms_marco",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/IIC/ms_marco_es/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- es
multilinguality:
- monolingual
pretty_name: MSMARCO_ES
size_categories:
- 100K<n<1M
source_datasets:
- ms_marco
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# MSMARCO_ES
This is an automatically translated version of the [msmarco v1 dataset](https://huggingface.co/datasets/ms_marco), a dataset used for text similarity tasks.
Queries and passages were translated using the [MarianMT English-Spanish model](https://huggingface.co/Helsinki-NLP/opus-mt-en-es). Post-processing was then required to re-sample the queries, because some of them had more or fewer positive and negative labels than recommended (4 negatives and 1 positive).
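That resampling step can be sketched as follows. The 1-positive/4-negatives ratio comes from the text above; the passage layout and the `is_selected` field name are assumptions for illustration:

```python
import random

# Sketch: down-sample each query's passages to the recommended
# 1 positive / 4 negatives; queries that cannot satisfy the ratio
# are dropped. The data layout is assumed for illustration.
def sample_passages(passages, n_neg=4, seed=0):
    rng = random.Random(seed)
    pos = [p for p in passages if p["is_selected"] == 1]
    neg = [p for p in passages if p["is_selected"] == 0]
    if not pos or len(neg) < n_neg:
        return None  # drop this query
    return [rng.choice(pos)] + rng.sample(neg, n_neg)

passages = [{"text": f"p{i}", "is_selected": int(i == 0)} for i in range(8)]
sampled = sample_passages(passages)
print(len(sampled))  # 5
```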
License, distribution and usage conditions of the original dataset apply.
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset. |
stjokerli | null | null | null | false | 1 | false | stjokerli/TextToText_squad_seqio | 2022-03-27T22:39:25.000Z | null | false | 1938d356e66bcdcdd662dd7b9285d0c4a0bc9c6b | [] | [] | https://huggingface.co/datasets/stjokerli/TextToText_squad_seqio/resolve/main/README.md | squad_v010_allanswers in T5 paper https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/tasks.py
DatasetDict({
squad: DatasetDict({
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 87599
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 10570
})
})
}) |
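For reference, the T5 codebase formats SQuAD examples as "question: ... context: ..." text-to-text pairs; a minimal sketch of that conversion (whitespace handling and field names are simplified assumptions):

```python
# Sketch: convert a SQuAD example into the text-to-text format
# ('idx' / 'inputs' / 'targets' fields above), following the
# T5-style "question: ... context: ..." template.
def squad_to_text2text(idx, question, context, answer):
    return {
        "idx": idx,
        "inputs": f"question: {question} context: {context}",
        "targets": answer,
    }

ex = squad_to_text2text(0, "Where is the Eiffel Tower?", "The Eiffel Tower is in Paris.", "Paris")
print(ex["inputs"])
```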
sac3tf | null | null | null | false | 1 | false | sac3tf/roman_urdu | 2022-03-28T04:50:30.000Z | null | false | 3f1a89e7d89662e16fa2e0f1b9ce0af57eabdc35 | [] | [] | https://huggingface.co/datasets/sac3tf/roman_urdu/resolve/main/README.md | |
null | null | @article{Wang2021AdversarialGA,
title={Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models},
author={Boxin Wang and Chejian Xu and Shuohang Wang and Zhe Gan and Yu Cheng and Jianfeng Gao and Ahmed Hassan Awadallah and B. Li},
journal={ArXiv},
year={2021},
volume={abs/2111.02840}
} | Adversarial GLUE Benchmark (AdvGLUE) is a comprehensive robustness evaluation benchmark
that focuses on the adversarial robustness evaluation of language models. It covers five
natural language understanding tasks from the famous GLUE tasks and is an adversarial
version of GLUE benchmark. | false | 687 | false | adv_glue | 2022-11-03T15:51:00.000Z | null | false | ed6ef2792d65ac86be1d06d3fa387fe23331b96e | [] | [
"arxiv:2111.02840",
"annotations_creators:other",
"language_creators:machine-generated",
"language:en",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|glue",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"t... | https://huggingface.co/datasets/adv_glue/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Adversarial GLUE
size_categories:
- n<1K
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
- sentiment-classification
configs:
- adv_mnli
- adv_mnli_mismatched
- adv_qnli
- adv_qqp
- adv_rte
- adv_sst2
tags:
- paraphrase-identification
- qa-nli
dataset_info:
- config_name: adv_sst2
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: negative
1: positive
- name: idx
dtype: int32
splits:
- name: validation
num_bytes: 16595
num_examples: 148
download_size: 40662
dataset_size: 16595
- config_name: adv_qqp
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
0: not_duplicate
1: duplicate
- name: idx
dtype: int32
splits:
- name: validation
num_bytes: 9926
num_examples: 78
download_size: 40662
dataset_size: 9926
- config_name: adv_mnli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: neutral
2: contradiction
- name: idx
dtype: int32
splits:
- name: validation
num_bytes: 23736
num_examples: 121
download_size: 40662
dataset_size: 23736
- config_name: adv_mnli_mismatched
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: neutral
2: contradiction
- name: idx
dtype: int32
splits:
- name: validation
num_bytes: 40982
num_examples: 162
download_size: 40662
dataset_size: 40982
- config_name: adv_qnli
features:
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: not_entailment
- name: idx
dtype: int32
splits:
- name: validation
num_bytes: 34877
num_examples: 148
download_size: 40662
dataset_size: 34877
- config_name: adv_rte
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: not_entailment
- name: idx
dtype: int32
splits:
- name: validation
num_bytes: 25998
num_examples: 81
download_size: 40662
dataset_size: 25998
---
# Dataset Card for Adversarial GLUE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://adversarialglue.github.io/
- **Repository:**
- **Paper:** [arXiv](https://arxiv.org/pdf/2111.02840.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Adversarial GLUE Benchmark (AdvGLUE) is a comprehensive robustness evaluation benchmark that focuses on the adversarial robustness evaluation of language models. It covers five natural language understanding tasks from the famous GLUE tasks and is an adversarial version of GLUE benchmark.
AdvGLUE considers textual adversarial attacks from different perspectives and hierarchies, including word-level transformations, sentence-level manipulations, and human-written adversarial examples, which provide comprehensive coverage of various adversarial linguistic phenomena.
### Supported Tasks and Leaderboards
Leaderboard available on the homepage: [https://adversarialglue.github.io/](https://adversarialglue.github.io/).
### Languages
AdvGLUE derives from the GLUE dataset, whose base language is English.
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 198 KB
- **Example**:
```python
>>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0]
{'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0}
```
### Data Fields
The data fields are the same as in the GLUE dataset; they differ by task.
The data fields are the same among all splits.
#### adv_mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### adv_mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### adv_mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### adv_qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### adv_sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
Adversarial GLUE provides only a 'dev' split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is distributed under the [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
### Citation Information
```bibtex
@article{Wang2021AdversarialGA,
title={Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models},
author={Boxin Wang and Chejian Xu and Shuohang Wang and Zhe Gan and Yu Cheng and Jianfeng Gao and Ahmed Hassan Awadallah and B. Li},
journal={ArXiv},
year={2021},
volume={abs/2111.02840}
}
```
### Contributions
Thanks to [@jxmorris12](https://github.com/jxmorris12) for adding this dataset. |
carolina-c4ai | null | null | Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a
robust volume of texts of varied typology in contemporary Brazilian Portuguese
(1970-2021). | false | 55 | false | carolina-c4ai/corpus-carolina | 2022-10-24T18:45:28.000Z | null | false | 1133b05834af3db6b84a177a206d96713904a0b8 | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:pt",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modelin... | https://huggingface.co/datasets/carolina-c4ai/corpus-carolina/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- pt
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1B<n<10B
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
pretty_name: Carolina
language_bcp47:
- pt-BR
---
# Dataset Card for Corpus Carolina
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [sites.usp.br/corpuscarolina](https://sites.usp.br/corpuscarolina/)
- **Current Version:** 1.1 (Ada)
- **Point of Contact:** [Guilherme L. Mello](mailto:guilhermelmello@ime.usp.br)
### Dataset Summary
Carolina is an Open Corpus for Linguistics and Artificial Intelligence with a
robust volume of texts of varied typology in contemporary Brazilian Portuguese
(1970-2021). This corpus contains documents and texts extracted from the web
and includes information (metadata) about its provenance and typology.
The documents are clustered into taxonomies and the corpus can be loaded in complete or taxonomy modes. To load a single taxonomy, it is possible to pass a code as a parameter to the loading script (see the example below). Codes are 3-letter strings and possible values are:
- `dat` : datasets and other corpora;
- `jud` : judicial branch;
- `leg` : legislative branch;
- `pub` : public domain works;
- `soc` : social media;
- `uni` : university domains;
- `wik` : wikis.
Usage Example:
```python
from datasets import load_dataset
# to load all taxonomies
corpus_carolina = load_dataset("carolina-c4ai/corpus-carolina")
# to load social media documents
social_media = load_dataset("carolina-c4ai/corpus-carolina", taxonomy="soc")
```
### Supported Tasks
Carolina corpus was compiled for academic purposes,
namely linguistic and computational analysis.
### Languages
Contemporary Brazilian Portuguese (1970-2021).
## Dataset Structure
Files are stored inside the `corpus` folder with a subfolder
for each taxonomy. Every file follows an XML structure
(TEI P5) and contains multiple extracted documents. For
each document, the text and metadata are exposed as
`text` and `meta` features, respectively.
### Data Instances
Every instance have the following structure.
```
{
"meta": datasets.Value("string"),
"text": datasets.Value("string")
}
```
| Code | Taxonomy | Instances | Size |
|:----:|:---------------------------|----------:|-------:|
| | **Total** | 1745187 | 7.3 GB |
| dat | Datasets and other Corpora | 1098717 | 3.3 GB |
| wik | Wikis | 603968 | 2.6 GB |
| jud | Judicial Branch | 38068 | 1.4 GB |
| leg | Legislative Branch | 14 | 20 MB |
| soc | Social Media | 3449 | 15 MB |
| uni | University Domains | 945 | 9.4 MB |
| pub | Public Domain Works | 26 | 3.6 MB |
### Data Fields
- `meta`: an XML string with a TEI-conformant `teiHeader` tag. It is exposed as text and needs to be parsed in order to access the actual metadata;
- `text`: a string containing the extracted document.
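Since `meta` is exposed as a raw string, it can be parsed with the standard library. A minimal sketch follows; the sample header below is illustrative, not taken from the corpus, and assumes the standard TEI namespace declaration:

```python
import xml.etree.ElementTree as ET

# Sketch: read the document title out of an instance's 'meta' string.
# This sample teiHeader is made up for illustration; real headers in
# the corpus carry full provenance metadata.
meta = """<teiHeader xmlns="http://www.tei-c.org/ns/1.0">
  <fileDesc>
    <titleStmt><title>Example document</title></titleStmt>
  </fileDesc>
</teiHeader>"""

ns = {"tei": "http://www.tei-c.org/ns/1.0"}
header = ET.fromstring(meta)
title = header.find(".//tei:title", ns).text
print(title)  # Example document
```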
### Data Splits
As a general corpus, Carolina does not have splits. When loading the dataset, `corpus` is used as its single split.
## Additional Information
### Dataset Curators
The Corpus Carolina is developed by a multidisciplinary
team of linguists and computer scientists, members of the
Virtual Laboratory of Digital Humanities - LaViHD and the Artificial Intelligence Center of the University of São Paulo - C4AI.
### Licensing Information
The Open Corpus for Linguistics and Artificial Intelligence (Carolina) was
compiled for academic purposes, namely linguistic and computational analysis.
It is composed of texts assembled in various digital repositories, whose
licenses are multiple and therefore should be observed when making use of the
corpus. The Carolina headers are licensed under Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International.
### Citation Information
```
@misc{corpusCarolinaV1.1,
title={
Carolina:
The Open Corpus for Linguistics and Artificial Intelligence
},
author={
Finger, Marcelo and
Paixão de Sousa, Maria Clara and
Namiuti, Cristiane and
Martins do Monte, Vanessa and
Costa, Aline Silva and
Serras, Felipe Ribas and
Sturzeneker, Mariana Lourenço and
Guets, Raquel de Paula and
Mesquita, Renata Morais and
Mello, Guilherme Lamartine de and
Crespo, Maria Clara Ramos Morales and
Rocha, Maria Lina de Souza Jeannine and
Brasil, Patrícia and
Silva, Mariana Marques da and
Palma, Mayara Feliciano
},
howpublished={\url{
https://sites.usp.br/corpuscarolina/corpus}},
year={2022},
note={Version 1.1 (Ada)},
}
```
|
laion | null | null | null | false | 1 | false | laion/laion-synthetic-115m | 2022-04-03T16:43:14.000Z | null | false | df65fa1e2348cd883bb4925e3abc7f9724f2a04a | [] | [] | https://huggingface.co/datasets/laion/laion-synthetic-115m/resolve/main/README.md | # laion-synthetic-115m

This dataset is a version of [laion-400m](https://laion.ai/laion-400-open-dataset/) in which noisy or inaccurate captions have been filtered out or replaced with captions generated by the BLIP model. Provided by Salesforce in [BLIP](https://github.com/salesforce/BLIP). Modified to be compatible with the `img2dataset` tool.
## Download captioned images
Note: you may want to change some of the keyword arguments depending on your specific needs.
```sh
# Download the parquet file containing the mapping of image URLs -> captions
wget -c https://huggingface.co/datasets/laion/laion-synthetic-115m/resolve/main/laion_synthetic_filtered_large.parquet
pip install img2dataset
# Download as many URLs as possible into a webdataset (tars of txt/jpg files). Can also specify `files` instead.
img2dataset laion_synthetic_filtered_large.parquet --image_size 320 --resize_mode 'keep_ratio' --caption_col 'caption' --input_format parquet --output_format webdataset
> Downloading starting now, check your bandwidth speed (with bwm-ng), your cpu (with htop), and your disk usage (with iotop)!
``` |
IIC | null | null | null | false | 3 | false | IIC/msmarco_es | 2022-10-23T05:27:30.000Z | null | false | 548e5240a850261a8d25cde57b1d662efb7a0ce1 | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:es",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:ms_marco",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/IIC/msmarco_es/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- es
multilinguality:
- monolingual
pretty_name: MSMARCO_ES
size_categories:
- 100K<n<1M
source_datasets:
- ms_marco
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# MSMARCO_ES
This is an automatically translated version of the [msmarco v1 dataset](https://huggingface.co/datasets/ms_marco), a dataset used for text similarity tasks.
Queries and passages were translated using the [MarianMT English-Spanish model](https://huggingface.co/Helsinki-NLP/opus-mt-en-es). Post-processing was then required to re-sample the queries, because some of them had more or fewer positive and negative labels than recommended (4 negatives and 1 positive).
License, distribution and usage conditions of the original dataset apply.
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset. |
laion | null | null | null | false | 1 | false | laion/laion2B-en-watermark | 2022-03-28T21:20:25.000Z | null | false | 2897f6ad3611ffad489ec8678b0e240a4aba7aea | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-en-watermark/resolve/main/README.md | ---
license: cc-by-4.0
---
|
KeithHorgan98 | null | null | null | false | 1 | false | KeithHorgan98/autotrain-data-TweetClimateAnalysis | 2022-03-28T22:27:22.000Z | null | false | 01540a66ded66626baf224072d8faf05b5d329d0 | [] | [
"task_categories:text-classification"
] | https://huggingface.co/datasets/KeithHorgan98/autotrain-data-TweetClimateAnalysis/resolve/main/README.md | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: TweetClimateAnalysis
## Dataset Description
This dataset has been automatically processed by AutoTrain for project TweetClimateAnalysis.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "What do you do if you are a global warming alarmist and real-world temperatures do not warm as much [...]",
"target": 16
},
{
"text": "(2.) A sun-blocking volcanic aerosols component to explain the sudden but temporary cooling of globa[...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=18, names=['0_0', '1_1', '1_2', '1_3', '1_4', '1_6', '1_7', '2_1', '2_3', '3_1', '3_2', '3_3', '4_1', '4_2', '4_4', '4_5', '5_1', '5_2'], id=None)"
}
```
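For reference, an integer `target` can be mapped back to its class name via the `names` list shown above (a minimal sketch; index order follows `ClassLabel`):

```python
# Sketch: decode an integer 'target' back to its class name using
# the ClassLabel names list from the dataset fields above.
names = ['0_0', '1_1', '1_2', '1_3', '1_4', '1_6', '1_7', '2_1', '2_3',
         '3_1', '3_2', '3_3', '4_1', '4_2', '4_4', '4_5', '5_1', '5_2']

def decode_target(target):
    return names[target]

print(decode_target(16))  # 5_1
```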
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 23436 |
| valid | 2898 |
|
hackathon-pln-es | null | null | null | false | 11 | false | hackathon-pln-es/Dataset-Acoso-Twitter-Es | 2022-03-31T00:03:51.000Z | null | false | dc42b11d642dd1b4985d30f98ec68f63363b1141 | [] | [
"license:gpl-3.0",
"languaje:es"
] | https://huggingface.co/datasets/hackathon-pln-es/Dataset-Acoso-Twitter-Es/resolve/main/README.md | ---
license: gpl-3.0
language:
- es
---
# UNL: Universidad Nacional de Loja
### Miembros del equipo:
- Anderson Quizhpe <br>
- Luis Negrón <br>
- David Pacheco <br>
- Bryan Requenes <br>
- Paul Pasaca
<br><br>
|
huggan | null | null | null | false | 2 | false | huggan/horse2zebra | 2022-04-12T13:57:34.000Z | null | false | 67e0da2e44e860e37857edf17af7b2656b3be221 | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/horse2zebra/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 6 | false | huggan/monet2photo | 2022-04-12T13:58:04.000Z | null | false | 26e882f90a286672b7dd46e603b3dd6b9c6c007e | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/monet2photo/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 2 | false | huggan/cezanne2photo | 2022-04-12T13:56:27.000Z | null | false | b9a1024774140d73f8272bf1158b4a8b4ef7abfe | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/cezanne2photo/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/ukiyoe2photo | 2022-04-12T13:58:34.000Z | null | false | 7d0f0b1f34034b010c6b7fc44d6b266803788448 | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/ukiyoe2photo/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 2 | false | huggan/vangogh2photo | 2022-04-12T13:58:45.000Z | null | false | 877a11fde59bfcbbc59d508c7b00c7fa307604e6 | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/vangogh2photo/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/apple2orange | 2022-04-12T13:55:40.000Z | null | false | c8706b48de9deec7eeee248792fcf483b3ccf4ef | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/apple2orange/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/iphone2dslr_flower | 2022-04-12T13:57:46.000Z | null | false | c4f40db4563d2acebd3a92c9b968f00c95234472 | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/iphone2dslr_flower/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 2 | false | huggan/summer2winter_yosemite | 2022-04-12T13:58:19.000Z | null | false | 7ada4dc70d20d435adc11b644ceeaff8d3b323c4 | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/summer2winter_yosemite/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/grumpifycat | 2022-04-12T13:57:20.000Z | null | false | 853bc3c3221dfaa41d9116b4b11ec2953cc13fa3 | [] | [
"arxiv:1703.10593"
] | https://huggingface.co/datasets/huggan/grumpifycat/resolve/main/README.md | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
marksverdhei | null | null | null | false | 86 | false | marksverdhei/clickbait_title_classification | 2022-03-29T21:25:01.000Z | null | false | bf3e4aaebd4162e0b9d31785028c43bbd6303585 | [] | [
"arxiv:1610.09786",
"license:mit"
] | https://huggingface.co/datasets/marksverdhei/clickbait_title_classification/resolve/main/README.md | ---
license: mit
---
Dataset introduced in [Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media](https://arxiv.org/abs/1610.09786)
by Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, Niloy Ganguly
Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. "Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media”. In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Fransisco, US, August 2016.
Cite:
```
@inproceedings{chakraborty2016stop,
title={Stop Clickbait: Detecting and preventing clickbaits in online news media},
author={Chakraborty, Abhijnan and Paranjape, Bhargavi and Kakarla, Sourya and Ganguly, Niloy},
booktitle={Advances in Social Networks Analysis and Mining (ASONAM), 2016 IEEE/ACM International Conference on},
pages={9--16},
year={2016},
organization={IEEE}
}
```
|
laion | null | null | null | false | 26 | false | laion/laion2B-en-joined | 2022-03-31T07:44:37.000Z | null | false | e1164c43bfbe528e5806c38a468e8016c67f9f9b | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-en-joined/resolve/main/README.md | ---
license: cc-by-4.0
---
|
laion | null | null | null | false | 1 | false | laion/laion2B-multi-joined | 2022-04-01T01:23:57.000Z | null | false | aa41d602856506cbe6ad93732ed78e715e199eb2 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-multi-joined/resolve/main/README.md | ---
license: cc-by-4.0
---
|
laion | null | null | null | false | 4 | false | laion/laion2B-multi-watermark | 2022-03-29T22:50:20.000Z | null | false | a512e2fac24f5ad09f0841c5510b6ae35f6e6961 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-multi-watermark/resolve/main/README.md | ---
license: cc-by-4.0
---
|
laion | null | null | null | false | 1 | false | laion/laion1B-nolang-watermark | 2022-03-30T18:18:02.000Z | null | false | bd36e36eed484f426979b95dabbe23f32cfd4400 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion1B-nolang-watermark/resolve/main/README.md | ---
license: cc-by-4.0
---
|
hackathon-pln-es | null | null | null | false | 1 | false | hackathon-pln-es/nli-es | 2022-04-04T03:30:59.000Z | null | false | 76866995461bd07841e9ef7b08751da46c7eb9f4 | [] | [
"arxiv:1809.05053"
] | https://huggingface.co/datasets/hackathon-pln-es/nli-es/resolve/main/README.md | annotations_creators:
- crowdsourced
- other
language_creators:
- other
- crowdsourced
languages:
- es
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ESnli
size_categories:
- unknown
source_datasets:
- extended|snli
- extended|xnli
- extended|multi_nli
task_categories:
- text-classification
task_ids:
- natural-language-inference
# Dataset Card for nli-es
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://huggingface.co/datasets/hackathon-pln-es/nli-es/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A Spanish Natural Language Inference dataset put together from the sources:
- the Spanish slice of the XNLI dataset;
- machine-translated Spanish version of the SNLI dataset
- machine-translated Spanish version of the Multinli dataset
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
A small percentage of the dataset contains original Spanish text by human speakers. The rest was generated by automatic translation.
## Dataset Structure
### Data Instances
A line includes four values: a sentence1 (the premise); a sentence2 (the hypothesis); a label specifying the relationship between the two ("gold_label") and the ID number of the pair of sentences as given in the original dataset.
Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
```json
{
    "gold_label": "neutral",
    "pairID": 1,
    "sentence1": "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",
    "sentence2": "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
}
```
### Data Fields
gold_label: A string defining the relation between the sentence pair. Labels can be "entailment" if the premise entails the hypothesis, "contradiction" if it contradicts it or "neutral" if it neither implies it nor denies it.
pairID: A string identifying a sentence pair. It was inherited from the original datasets. NOTE: For the moment we are having trouble loading this column, so we replaced every string with an int 0 as a placeholder. We hope to have the pairID back up soon.
sentence1: A string containing one sentence in Spanish, the premise. (See gold_label.)
sentence2: A string containing one sentence in Spanish, the hypothesis. (See gold_label.)
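As a quick illustration (ours, not part of the original dataset card), a record can be checked against the schema above with a few lines of Python; the field names and label set are taken from this card:

```python
# Validate one nli-es record against the schema described in this card.
# The sample record is copied from the "Data Instances" section above.

VALID_LABELS = {"entailment", "contradiction", "neutral"}

def validate_record(record: dict) -> bool:
    """Check that a record has the four documented fields and a valid label."""
    required = {"gold_label", "pairID", "sentence1", "sentence2"}
    if set(record) != required:
        return False
    return record["gold_label"] in VALID_LABELS

sample = {
    "gold_label": "neutral",
    "pairID": 1,
    "sentence1": "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",
    "sentence2": "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario.",
}

print(validate_record(sample))  # → True
```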
### Data Splits
The whole dataset was used for training. We did not use an evaluation split; instead, we evaluated on SemEval-2015 Task 2.
## Dataset Creation
### Curation Rationale
This corpus was built to remedy the scarcity of annotated Spanish-language datasets for NLI. It was generated by translating the original SNLI dataset into Spanish using Argos. While machine translation is far from an ideal source for semantic classification, it is an aid to enlarging the data available.
### Source Data
#### Initial Data Collection and Normalization
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
#### Who are the source language producers?
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Annotations
#### Annotation process
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
#### Who are the annotators?
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Personal and Sensitive Information
In general, no sensitive information is conveyed in the sentences.
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to offer new tools for semantic textual similarity analysis of Spanish sentences.
### Discussion of Biases
Please refer to the respective documentations of the original datasets:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Other Known Limitations
The translation of the sentences was mostly unsupervised and may introduce some noise in the corpus. Machine translation from an English-language corpus is likely to generate syntactic and lexical forms that differ from those a human Spanish speaker would produce.
For discussion on the biases and limitations of the original datasets, please refer to their respective documentations:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
## Additional Information
### Dataset Curators
The nli-es dataset was put together by Anibal Pérez, Lautaro Gesuelli, Mauricio Mazuecos and Emilio Tomás Ariza.
### Licensing Information
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0).
Please refer to the respective documentations of the original datasets for information on their licenses:
https://nlp.stanford.edu/projects/snli/
https://arxiv.org/pdf/1809.05053.pdf
https://cims.nyu.edu/~sbowman/multinli/
### Citation Information
If you need to cite this dataset, you can link to this readme. |
vumichien | null | null | null | false | 1 | false | vumichien/pitch_japanese_data | 2022-04-04T03:05:08.000Z | null | false | fa7ca4dffe2448901592f0a8bb2ea0f0581c5951 | [] | [] | https://huggingface.co/datasets/vumichien/pitch_japanese_data/resolve/main/README.md | Japanese Pitch Dataset |
Cptburgos | null | null | null | false | 1 | false | Cptburgos/aircraft_reports | 2022-03-30T11:28:42.000Z | null | false | 699b536f42eef7b7b73f3b6dbe857ea4821a7bbf | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Cptburgos/aircraft_reports/resolve/main/README.md | ---
license: afl-3.0
---
|
andreamorgar | null | null | null | false | 1 | false | andreamorgar/spanish_poetry | 2022-03-30T12:39:22.000Z | null | false | e0f3a5567f5c8db711fce1d5dcf244000c5ab587 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/andreamorgar/spanish_poetry/resolve/main/README.md | ---
license: gpl-3.0
---
# Spanish Poetry Dataset
There are not many poetry datasets, and for the Spanish language the situation is even worse! With this dataset, we want to give access to quality Spanish data for NLP tasks.
It is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, a wider analysis, and so on!
# Authors
Andrea Morales (@andreamorgar) and Miguel López (@wizmik12)
### Motivation
This dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: https://github.com/andreamorgar/poesIA
### Content
Data was acquired in July 2020 from the poetry webpage www.poemas-del-alma.com, which provides a large amount of poetry in Spanish. Data was scraped using the Python library BeautifulSoup. For each poem in www.poemas-del-alma.com, we collected the name of the poet, the poem, and the poem title. The scraping script is available at https://github.com/andreamorgar/poesIA/blob/master/poetry-scrapper.py.
### Languages
Spanish
### Acknowledgements
We wouldn't be here without www.poemas-del-alma.com, which provides the poetry collection in this dataset. |
ntt123 | null | null | null | false | 1 | false | ntt123/infore | 2022-05-07T04:00:24.000Z | null | false | f6718eea7b91b3e3756598b20fc034d4da1c72bc | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/ntt123/infore/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
|
omerm | null | null | null | false | 1 | false | omerm/test_dataset | 2022-03-30T15:38:48.000Z | null | false | dac22204f9694926352e6346327e111aaac1ee93 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/omerm/test_dataset/resolve/main/README.md | ---
license: apache-2.0
---
|
MLCommons | null | null | null | false | 16 | false | MLCommons/peoples_speech_v1.0 | 2022-08-10T16:41:34.000Z | null | false | a3fa98229bb3b442163ac8bda8e950076f75b589 | [] | [
"arxiv:2111.09344",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"language:en",
"license:cc-by-2.0",
"license:cc-by-2.5",
"license:cc-by-3.0",
"license:cc-by-4.0",
"license:cc-by-sa-3.0",
... | https://huggingface.co/datasets/MLCommons/peoples_speech_v1.0/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-2.0
- cc-by-2.5
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: People's Speech
size_categories:
- 1T<n
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids:
- speech-recognition
- robust-speech-recognition
- noisy-speech-recognition
---
# Dataset Card for People's Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/peoples-speech/
- **Repository:** https://github.com/mlcommons/peoples-speech
- **Paper:** https://arxiv.org/abs/2111.09344
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [datasets@mlcommons.org](mailto:datasets@mlcommons.org)
### Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpora available today that are licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech from a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available under a permissive license.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
```
{
    "id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
    "audio": {
        "path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
        "array": array([-6.10351562e-05, ...]),
        "sampling_rate": 16000
    },
    "duration_ms": 14490,
    "text": "contends that the suspension clause requires a [...]"
}
```
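Since every clip is stored at the fixed 16 kHz sampling rate shown above, the `duration_ms` field determines the expected length of the decoded audio array. A small illustrative sketch (ours, not part of the official loading code):

```python
# Derive the expected number of audio samples from a record's duration_ms,
# assuming the fixed 16 kHz sampling rate documented in this card.
SAMPLING_RATE = 16_000  # Hz

def expected_num_samples(duration_ms: int, sampling_rate: int = SAMPLING_RATE) -> int:
    """Number of samples a decoded clip of this duration should contain."""
    return duration_ms * sampling_rate // 1000

# The example record above lasts 14,490 ms:
print(expected_num_samples(14490))  # → 231840
```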
### Data Fields
```
{
    "id": datasets.Value("string"),
    "audio": datasets.Audio(sampling_rate=16_000),
    "duration_ms": datasets.Value("int32"),
    "text": datasets.Value("string"),
}
```
### Data Splits
We provide the following configurations for the dataset: `cc-by-clean`, `cc-by-dirty`, `cc-by-sa-clean`, `cc-by-sa-dirty`, and `microset`. We don't provide splits for any of the configurations.
## Dataset Creation
### Curation Rationale
See our [paper](https://arxiv.org/abs/2111.09344).
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded via the archive.org API. No data inference was done.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
#### Who are the annotators?
For the test and dev sets, we paid native American English speakers to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.
The dataset could be used for keyword spotting tasks as well. In particular, this is a good use case for the non-English audio in the dataset.
Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues today, like speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that could come from using this dataset at this time.
### Discussion of Biases
Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there.
Almost all of our data is American accented English.
### Other Known Limitations
As of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
We provide CC-BY and CC-BY-SA subsets of the dataset.
### Citation Information
Please cite:
```
@article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Cer{\'{o}}n and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
hackathon-pln-es | null | null | null | false | 1 | false | hackathon-pln-es/Axolotl-Spanish-Nahuatl | 2022-10-23T05:28:39.000Z | null | false | f23217aabeff34f1767ce8c300619f9f8da2d8fc | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:es",
"language_bcp47:es-MX",
"license:mpl-2.0",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"task_ids:machine-translation"
] | https://huggingface.co/datasets/hackathon-pln-es/Axolotl-Spanish-Nahuatl/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- es
language_bcp47:
- es-MX
license:
- mpl-2.0
multilinguality:
- translation
pretty_name: "Axolotl Spanish-Nahuatl parallel corpus , is a digital corpus that compiles several sources with parallel content in these two languages. \n\nA parallel corpus is a type of corpus that contains texts in a source language with their correspondent translation in one or more target languages.
Gutierrez-Vasques, X., Sierra, G., and Pompa, I. H. (2016). Axolotl: a web accessible parallel corpus for spanish-nahuatl. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016), Portoro, Slovenia. European Language Resources Association (ELRA).
Grupo de Ingenieria Linguistica (GIL, UNAM). Corpus paralelo español-nahuatl. http://www.corpus.unam.mx/axolotl."
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation
## Table of Contents
- [Dataset Card for [Axolotl-Spanish-Nahuatl]](#dataset-card-for-Axolotl-Spanish-Nahuatl)
## Dataset Description
- **Source 1:** http://www.corpus.unam.mx/axolotl
- **Source 2:** http://link.springer.com/article/10.1007/s10579-014-9287-y
- **Repository:1** https://github.com/ElotlMX/py-elotl
- **Repository:2** https://github.com/christos-c/bible-corpus/blob/master/bibles/Nahuatl-NT.xml
- **Paper:** https://aclanthology.org/N15-2021.pdf
## Dataset Collection
In order to build a good translator, we collected and cleaned two of the most complete Nahuatl-Spanish parallel corpora available: Axolotl, collected by an expert team at UNAM, and the Bible UEDIN Nahuatl-Spanish corpus, crawled by Christos Christodoulopoulos and Mark Steedman from the Bible Gateway site.
After removing misaligned pairs and duplicated Spanish texts in both the original and Nahuatl columns, we ended up with 12,207 samples from Axolotl and 7,821 samples from Bible UEDIN, for a total of 20,028 utterances.
## Team members
- Emilio Morales [(milmor)](https://huggingface.co/milmor)
- Rodrigo Martínez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv)
## Applications
- MODEL: Spanish Nahuatl Translation Task with a T5 model in ([t5-small-spanish-nahuatl](https://huggingface.co/hackathon-pln-es/t5-small-spanish-nahuatl))
- DEMO: Spanish Nahuatl Translation in ([Spanish-nahuatl](https://huggingface.co/spaces/hackathon-pln-es/Spanish-Nahuatl-Translation)) |
MF-Rocket | null | null | null | false | 1 | false | MF-Rocket/MFRPC | 2022-03-30T19:58:37.000Z | null | false | 2b66248c092349da7db3e12eb66d7ffb692c77d9 | [] | [] | https://huggingface.co/datasets/MF-Rocket/MFRPC/resolve/main/README.md | ---
task_categories:
- conditional-text-generation
- paraphrase
- gpt-3
- crowdsourced
---
# MF Rocket Paraphrase Corpus (MFRPC) - A State of the Art Paraphrasing Solution
## Dataset Description
The MF Rocket Paraphrase Corpus (MFRPC) is a corpus consisting of 10,000 sentence pairs. Each sentence pair contains a source sentence and a paraphrased version of it. The source sentences were created manually and are intended to represent typical sentences found in online articles; they cover general topics and are not restricted to a specific domain. The paraphrased sentences were created partly with GPT-3 and partly manually. In this way, we hope to investigate the performance of GPT-3 in a typical real-world setting and to improve the quality of the paraphrased sentences through manual corrections.
By fine-tuning a Pegasus model on this data, we created a paraphraser that performs very well. The results are indistinguishable from human-paraphrased sentences in a blind test.
We are currently working on a data set with complete paragraphs or articles.
For more information, our Contact form can be used at https://mf-rocket.de.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "To overcome these difficulties, you must select an activity or goal that you are enthusiastic about [...]",
"target": "To overcome these challenges, you need to find an activity or goal that you are passionate about and[...]"
},
{
"text": "If you are unsure about what to do next, seek advice from a close friend or family member you can tr[...]",
"target": "If you are feeling lost, ask a trusted friend or family member for their opinion about what you shou[...]"
}
]
```
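A rough sketch of how such pairs might be sanity-checked (this helper is ours, not part of the corpus tooling): a simple Jaccard word-overlap score flags pairs that are either unrelated or near-copies. The sentences below are the truncated samples from above.

```python
# Jaccard word overlap between a source sentence and its paraphrase.
# High but non-1.0 overlap suggests a related-but-rewritten pair.

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

text = "To overcome these difficulties, you must select an activity or goal that you are enthusiastic about"
target = "To overcome these challenges, you need to find an activity or goal that you are passionate about and"

score = jaccard(text, target)
assert 0.0 < score < 1.0  # related, but not identical
print(round(score, 2))  # → 0.55
```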
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8000 |
| valid | 2000 |
|
copenlu | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | SufficientFacts is a diagnostic test dataset for fact checking with insufficient evidence. | false | 5 | false | copenlu/sufficient_facts | 2022-08-05T08:33:48.000Z | null | false | 58d5b361e47265cf85f2334800c66d9bb485029e | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|fever",
"source_datasets:extended|hover",
"source_datasets:extended|fever_gold_evidence",
"task_categories:text-... | https://huggingface.co/datasets/copenlu/sufficient_facts/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sufficient_facts
size_categories:
- 1K<n<10K
source_datasets:
- extended|fever
- extended|hover
- extended|fever_gold_evidence
task_categories:
- text-classification
task_ids:
- fact-checking
---
# Dataset Card for sufficient_facts
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/copenlu/sufficient_facts
- **Repository:** https://github.com/copenlu/sufficient_facts
- **Paper:** Will be uploaded soon...
- **Leaderboard:**
- **Point of Contact:** https://apepa.github.io/
### Dataset Summary
This is the dataset SufficientFacts, introduced in the paper "Fact Checking with Insufficient Evidence", accepted at the TACL journal in 2022.
Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, **SufficientFacts**, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.
### Languages
English
## Dataset Structure
The dataset consists of three files, one for each of the source datasets: FEVER, HoVer, and VitaminC.
Each file consists of json lines of the format:
```json
{
"claim": "Unison (Celine Dion album) was originally released by Atlantic Records.",
"evidence": [
[
"Unison (Celine Dion album)",
"The album was originally released on 2 April 1990 ."
]
],
"label_before": "REFUTES",
"label_after": "NOT ENOUGH",
"agreement": "agree_ei",
"type": "PP",
"removed": ["by Columbia Records"],
"text_orig": "[[Unison (Celine Dion album)]] The album was originally released on 2 April 1990 <span style=\"color:red;\">by Columbia Records</span> ."
}
```
### Data Instances
* FEVER: 600 constituent-level, 400 sentence-level;
* HoVer: 600 constituent-level, 400 sentence-level;
* VitaminC: 600 constituent-level.
### Data Fields
* `claim` - the claim that is being verified
* `evidence` - the augmented evidence for the claim, i.e. the evidence with some removed information
* `label_before` - the original label for the claim-evidence pair, before information was removed from the evidence
* `label_after` - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers
* `type` - the type of the information removed from the evidence. The types are fine-grained; their mapping to the general types (7 constituent types and 1 sentence type) can be found in the [types.json](types.json) file.
* `removed` - the text of the removed information from the evidence
* `text_orig` - the original text of the evidence, as presented to crowd-source workers; the text of the removed information is inside `<span style=\"color:red;\"></span>` tags.
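As a minimal sketch of reading these records (the sample line is abbreviated from the example above; field names follow the schema, but this is not an official loader), the JSON-lines files can be parsed with the standard library:

```python
import json

# One annotated instance per line; field names follow the schema above.
sample_line = (
    '{"claim": "Unison (Celine Dion album) was originally released by Atlantic Records.",'
    ' "label_before": "REFUTES", "label_after": "NOT ENOUGH",'
    ' "type": "PP", "removed": ["by Columbia Records"]}'
)

def read_jsonl(lines):
    """Yield one dict per non-empty JSON line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

records = list(read_jsonl([sample_line]))

# Instances whose gold label changed to NOT ENOUGH after the omission:
flipped = [r for r in records if r["label_after"] == "NOT ENOUGH"]
print(len(flipped))  # -> 1
```

In practice, `read_jsonl` would be fed the lines of one of the three dataset files.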
### Data Splits
| name |test_fever|test_hover|test_vitaminc|
|----------|-------:|-----:|-------:|
|test| 1000| 1000| 600|
Augmented from the test splits of the corresponding datasets.
### Annotations
#### Annotation process
The workers were provided with the following task description:
For each evidence text, some facts have been removed (marked in <span style="color:red;">red</span>).
You should annotate whether, <b>given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.</b> <br></br>
<ul>
<li>You should select <i><b>'ENOUGH -- IRRELEVANT'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is irrelevant</b> for identifying the evidence as SUPPORTS or REFUTES. See examples 1 and 2.</li>
<li>You should select <i><b>'ENOUGH -- REPEATED'</b></i>, if the <b>remaining information is still <i>enough</i></b> for verifying the claim because the <b>removed information is relevant but is also present (repeated) in the remaining (not red) text.</b> See example 3.</li>
<li>You should select <i><b>'NOT ENOUGH'</b></i> -- when <b>1) the removed information is <i>relevant</i></b> for verifying the claim <b> AND 2) it is <i>not present (repeated)</i> in the remaining text.</b> See examples 4, 5, and 6.</li>
<!--<li>You should select <i><b>'CHANGED INFO'</b></i> in the rare cases when the remaining evidence has <b>changed the support for the claim</b></li>-->
</ul>
<b>Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.</b>
The annotators were then given example instance annotations.
Finally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task.
The resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss' kappa from three annotators.
#### Who are the annotators?
The annotations were performed by workers at Amazon Mechanical Turk.
## Additional Information
### Licensing Information
MIT
### Citation Information
```
@article{10.1162/tacl_a_00486,
author = {Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle},
title = "{Fact Checking with Insufficient Evidence}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {10},
pages = {746-763},
year = {2022},
month = {07},
abstract = "{Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts1, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21\\% accuracy), whereas it is easiest for omitted date modifiers (63\\% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00486},
url = {https://doi.org/10.1162/tacl\_a\_00486},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00486/2037141/tacl\_a\_00486.pdf},
}
```
### Contributions
Thanks to [@apepa](https://github.com/apepa) for adding this dataset. |
vlsb | null | null | null | false | 4 | false | vlsb/autotrain-data-security-texts-classification-distilroberta | 2022-03-30T20:48:56.000Z | null | false | 7e960327b88e961ca35bee0a6bed94c42b0ac0d8 | [] | [
"task_categories:text-classification"
] | https://huggingface.co/datasets/vlsb/autotrain-data-security-texts-classification-distilroberta/resolve/main/README.md | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: security-texts-classification-distilroberta
## Dataset Description
This dataset has been automatically processed by AutoTrain for project security-texts-classification-distilroberta.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Netgear launches Bug Bounty Program for Hacker; Offering up to $15,000 in Rewards It might be the ea[...]",
"target": 0
},
{
"text": "Popular Malware Families Using 'Process Doppelg\u00e4nging' to Evade Detection The fileless code injectio[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['irrelevant', 'relevant'], id=None)"
}
```
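As an illustrative sketch of how the `ClassLabel` encoding above resolves to label names (the `names` list is taken from the schema; the sample records are abbreviated from the instances shown earlier):

```python
# ClassLabel maps integer targets to string names; the index order follows
# the schema above (0 -> "irrelevant", 1 -> "relevant").
names = ["irrelevant", "relevant"]

samples = [
    {"text": "Netgear launches Bug Bounty Program ...", "target": 0},
    {"text": "Popular Malware Families Using 'Process Doppelgaenging' ...", "target": 1},
]

labeled = [(s["target"], names[s["target"]]) for s in samples]
print(labeled)  # -> [(0, 'irrelevant'), (1, 'relevant')]
```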
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 780 |
| valid | 196 |
|
hackathon-pln-es | null | null | null | false | 8 | false | hackathon-pln-es/disco_spanish_poetry | 2022-03-30T21:50:28.000Z | null | false | 608c2e9f00eacfdb0932301e45fe2b420a0559a0 | [] | [] | https://huggingface.co/datasets/hackathon-pln-es/disco_spanish_poetry/resolve/main/README.md | # DISCO: Diachronic Spanish Sonnet Corpus
[](https://zenodo.org/badge/latestdoi/103841064)
The Diachronic Spanish Sonnet Corpus (DISCO) contains sonnets written in Spanish between the 15th and 20th centuries (4,303 sonnets by 1,215 authors from 22 different countries), provided in CSV format. It includes well-known authors, but also less canonized ones.
This is a CSV compilation taken from the plain-text corpus v4 published on GitHub at https://github.com/pruizf/disco/tree/v4. It includes the title, author, age, and text metadata.
<br><br> |
nateraw | null | null | null | false | 1 | false | nateraw/test-imagefolder-dataset | 2022-03-30T22:19:04.000Z | null | false | 88fa19e471d13eae3c2903011908b1a8bbccb46a | [] | [] | https://huggingface.co/datasets/nateraw/test-imagefolder-dataset/resolve/main/README.md | # test-imagefolder-dataset
This dataset shows that you can upload image folders (with an accompanying info.csv file within) to share and visualize multiple splits of a dataset. Cheers 🍻 |
DioLiu | null | null | null | false | 1 | false | DioLiu/Test1 | 2022-04-09T04:11:46.000Z | null | false | 7408d8e0848a237692cfd597574ced7762ea42c9 | [] | [] | https://huggingface.co/datasets/DioLiu/Test1/resolve/main/README.md | basic text |
benwoodyear | null | null | null | false | 1 | false | benwoodyear/guardian_crosswords | 2022-04-02T11:41:59.000Z | null | false | 3ed1ac857230642d208eede613bc5194e187c0b4 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/benwoodyear/guardian_crosswords/resolve/main/README.md | ---
license: afl-3.0
---
|
LeoFeng | null | null | null | false | 12 | false | LeoFeng/MLHW_6 | 2022-03-31T12:35:46.000Z | null | false | 81e35ae03e23e02427bb4ce8f2089af2049dd00a | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/LeoFeng/MLHW_6/resolve/main/README.md | ---
license: afl-3.0
---
|
Samip | null | @inproceedings{
dahal2022scotch,
title={Scotch: A Semantic Code Search Engine for {IDE}s},
author={Samip Dahal and Adyasha Maharana and Mohit Bansal},
booktitle={Deep Learning for Code Workshop},
year={2022},
url={https://openreview.net/forum?id=rSxfCiOZk-c}
} | Scotch is a dataset of about 19 million functions collected from open-source repositories from GitHub with permissive licenses. Each function has its corresponding code context and about 4 million functions have corresponding docstrings. The dataset includes functions written in programming languages Python, Java, Javascript, and Go. | false | 10 | false | Samip/Scotch | 2022-04-29T14:19:23.000Z | null | false | 966942abb87e2e57c5b357342d7bc2f4177e0ba4 | [] | [] | https://huggingface.co/datasets/Samip/Scotch/resolve/main/README.md |
## Dataset Summary
Scotch is a dataset of about 19 million functions collected from open-source repositories on GitHub with permissive licenses. Each function has its corresponding code context, and about 4 million functions have corresponding docstrings.
### Languages
The dataset includes functions written in the programming languages Python, Java, JavaScript, and Go.
## Statistics
### Split
The set of functions with docstrings is split into train, validation, and test sets of 3,200,626, 400,077, and 400,080 functions, respectively.
## Features
Each function consists of following features:
* repository_name: Name of the repository the function belongs to.
* function_path: Path of the function within the repository.
* function_identifier: Function name/identifier.
* language: Programming language the function is written in.
* function: Function string.
* docstring: Function docstring.
* function_url: URL to the function code.
* context: Code context.
* license: License info of the repository (includes only repositories with permissive licenses).
## Data Collection
The dataset is collected from GitHub repositories of the respective languages with 5 or more stars. Such repositories are listed using [SEART](https://seart-ghs.si.usi.ch/). Functions are parsed using a lightweight parser built on top of the function parser from the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet/tree/master/function_parser), and repositories were collected with the help of [github-downloader from EleutherAI](https://github.com/EleutherAI/github-downloader).
### Data Processing
All code without a permissive license is removed, and deduplication is performed on the remaining set of functions. Afterwards, all functions with a single line of code, or whose docstrings contain non-English characters, are removed. Files containing multiple copies of the same function are excluded. This results in about 19M functions. To obtain a dataset of NL-code pairs, functions with no docstring, or with docstrings of fewer than 3 whitespace-separated tokens, are excluded. Following CodeSearchNet, functions with the 'test' keyword in their names are excluded.
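The docstring filters described above can be sketched as follows (field names follow the Features section; the sample functions are invented for illustration, and this is not the authors' actual pipeline):

```python
# Hypothetical sample records using the dataset's field names.
functions = [
    {"function_identifier": "parse_args", "docstring": "Parse command line arguments."},
    {"function_identifier": "test_parse", "docstring": "Check that parsing works as expected."},
    {"function_identifier": "run", "docstring": "Go."},
    {"function_identifier": "load", "docstring": None},
]

def keep(fn):
    """Apply the NL-code pair filters sketched from the description above."""
    doc = fn["docstring"]
    if not doc or len(doc.split()) < 3:       # docstring missing or under 3 tokens
        return False
    if "test" in fn["function_identifier"]:   # exclude test functions (as in CodeSearchNet)
        return False
    return True

pairs = [f for f in functions if keep(f)]
print([f["function_identifier"] for f in pairs])  # -> ['parse_args']
```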
## License
This dataset is under the MIT License. However, the repositories the functions are collected from may carry several different permissive licenses, including the MIT License, Apache License 2.0, BSD 3-Clause “New” or “Revised” License, BSD 2-Clause “Simplified” License, and ISC License.
tomekkorbak | null | null | null | false | 26 | false | tomekkorbak/pile-toxicity-balanced | 2022-04-06T11:07:05.000Z | null | false | 630a24d3e902f49f89ba5b835410ad2cbb3f0059 | [] | [] | https://huggingface.co/datasets/tomekkorbak/pile-toxicity-balanced/resolve/main/README.md | ## Generation procedure
The dataset was constructed using documents from [the Pile](https://pile.eleuther.ai/) scored with [Perspective API](http://perspectiveapi.com) toxicity scores.
The procedure was the following:
1. A chunk of the Pile (3%, 7M documents) was scored using the Perspective API.
2. The first half of this dataset is [tomekkorbak/pile-toxic-chunk-0](https://huggingface.co/datasets/tomekkorbak/pile-toxic-chunk-0), the 100k *most* toxic documents of the scored chunk.
3. The second half of this dataset is [tomekkorbak/pile-nontoxic-chunk-0](https://huggingface.co/datasets/tomekkorbak/pile-nontoxic-chunk-0), the 100k *least* toxic documents of the scored chunk.
4. The dataset was then shuffled and a 9:1 train-test split was made.
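A minimal sketch of the final shuffle-and-split step (the records, sizes, and seed are illustrative, not the actual pipeline):

```python
import random

# Hypothetical stand-ins for the two scored halves (most-/least-toxic documents).
toxic = [{"text": f"toxic-{i}", "score": 0.9} for i in range(100)]
nontoxic = [{"text": f"clean-{i}", "score": 0.001} for i in range(100)]

combined = toxic + nontoxic
random.Random(0).shuffle(combined)   # deterministic shuffle for the sketch

split = int(0.9 * len(combined))     # 9:1 train-test split
train, test = combined[:split], combined[split:]
print(len(train), len(test))  # -> 180 20
```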
## Basic stats
The average scores of the non-toxic and toxic halves are 0.0014 and 0.67, respectively. The average score of the whole dataset is 0.33; the median is 0.51.
However, the weighted average score (weighted by document length) is 0.45. Correlation between score and document length is 0.2.
Score histogram:

Mean score per Pile subset
| pile_set_name | score | length |
|:------------------|----------:|------------:|
| ArXiv | 0.141808 | 9963.82 |
| Books3 | 0.405541 | 8911.67 |
| DM Mathematics | 0.535474 | 8194 |
| Enron Emails | 0.541136 | 1406.76 |
| EuroParl | 0.373395 | 4984.36 |
| FreeLaw | 0.279582 | 8986.73 |
| Github | 0.495742 | 2184.86 |
| Gutenberg (PG-19) | 0.583263 | 4034 |
| HackerNews | 0.617917 | 3714.83 |
| NIH ExPorter | 0.0376628 | 1278.83 |
| OpenSubtitles | 0.674261 | 14881.1 |
| OpenWebText2 | 0.613273 | 2634.41 |
| PhilPapers | 0.549582 | 9693 |
| Pile-CC | 0.525136 | 2925.7 |
| PubMed Abstracts | 0.0388705 | 1282.29 |
| PubMed Central | 0.235012 | 7418.34 |
| StackExchange | 0.590904 | 2210.16 |
| USPTO Backgrounds | 0.0100077 | 2086.39 |
| Ubuntu IRC | 0.598423 | 4396.67 |
| Wikipedia (en) | 0.0136901 | 1515.89 |
| YoutubeSubtitles | 0.65201 | 4729.52 | |
johnowhitaker | null | null | null | false | 1 | false | johnowhitaker/glid3_orbs | 2022-04-01T03:58:57.000Z | null | false | 0cd8a6125adf50d4f589f21a2514aff5ec63ee1c | [] | [] | https://huggingface.co/datasets/johnowhitaker/glid3_orbs/resolve/main/README.md | These orbs were generated with GLID-3, a text-to-image system (https://github.com/Jack000/glid-3)
The text prompt for many was "Orbs within orbs, concentric circles and ripples of fire (spheres and circles, roundness)"
I used a high guidance scale (10 IIRC) and generated them in batches of 64
There are two 'flavours', 'dark' and 'light' (indicated with the 'label' attribute in the dataset). The 'light' images are from a GLID-3 model I fine-tuned on some abstract art, and tend to be more pastel colors and plain shapes. The 'dark' images are from GLID-3 partway through its training.
This dataset is intended for use in GAN training demos and other art projects. Please give attribution if you use it in your own work (and tag me @johnowhitaker so I can see what you make!)
It's also nice for other artsy things, such as this montage made up of many little orb images: https://www.easyzoom.com/imageaccess/47cab299796a45edbd98951e704cb340
gan trained on this dataset: https://huggingface.co/johnowhitaker/orbgan_e1
gan demo (spaces): https://huggingface.co/spaces/johnowhitaker/orbgan_demo |
arjundd | null | null | null | false | 1 | false | arjundd/dosma-data | 2022-03-31T18:18:27.000Z | null | false | f6a833aa772e2b7a60008061fbb637a1940b35d7 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/arjundd/dosma-data/resolve/main/README.md | ---
license: apache-2.0
---
|
hackathon-pln-es | null | null | null | false | 2 | false | hackathon-pln-es/neutral-es | 2022-10-25T10:20:48.000Z | null | false | 4bc4e8decfdda1c956ca15694d9fa1518261efd0 | [] | [
"language:es",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"task_categories:text2text-generation",
"task_categories:translation"
] | https://huggingface.co/datasets/hackathon-pln-es/neutral-es/resolve/main/README.md | ---
language:
- es
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text2text-generation
- translation
task_ids: []
pretty_name: neutralES
---
# Spanish Gender Neutralization
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
Spanish is a beautiful language with many ways of referring to people while neutralizing gender, using resources available within the language itself. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which results in a more inclusive way of talking about people. This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.
The intended use of this dataset is to train a Spanish language model for translating from gendered to neutral language, in order to produce more inclusive sentences.
### Compiled sources
One of the major challenges was obtaining a dataset that would suit the purpose of gender inclusion; therefore, when building the dataset, the team opted to dedicate a considerable amount of time to building it from scratch. You can find the results here.
The data used for the model training has been manually created from a compilation of sources, obtained from a series of guidelines and manuals issued by the Spanish Ministry of Health, Social Services and Equality on the usage of non-sexist language, stipulated in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).
**NOTE: Apart from manually annotated samples, this dataset has been further expanded by applying data augmentation so that a minimum number of training examples is generated.**
* [Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
* [Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
* [Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
* [Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
* [Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
* [Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
* [Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
* [Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
* [Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
* [COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
* [Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
* [Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
* [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
### Enjoy and feel free to collaborate with this dataset 🤗 |
NbAiLab | null | @inproceedings{kummervold-etal-2021-operationalizing,
title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
author = "Kummervold, Per E and
De la Rosa, Javier and
Wetjen, Freddy and
Brygfjeld, Svein Arne",
booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may # " 31--2 " # jun,
year = "2021",
address = "Reykjavik, Iceland (Online)",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
url = "https://aclanthology.org/2021.nodalida-main.3",
pages = "20--29",
abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
} | \\nNorwegian Colossal Corpus v2. Short sequences of maximum 100k characters. | false | 1 | false | NbAiLab/nb_bert_debiased | 2022-11-02T11:23:36.000Z | null | false | c009499b227d3e3fd475cf307f30c38aebfeb074 | [] | [
"arxiv:2104.09617",
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"language:nb",
"language:no",
"language:nn",
"language:se",
"language:dk",
"language:is",
"language:fo",
"license:odc-by",
"multilinguality:multilingual",
"size_categories:2G<n<1B",
"sourc... | https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- nb
- 'no'
- nn
- se
- dk
- is
- fo
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: NCC
extra_gated_prompt: The Directive on Copyright in the Digital Single Market, which
came into force on June 6 2019, amends the European Union copyright and database
legislation and allows for Text and Data Mining (TDM) activities for research organizations
and cultural heritage institutions. Under the terms of the aforementioned directive,
by clicking on "Access repository" you agree on using the text and data contained
in this dataset for non-commercial scientific purposes only.
---
# Dataset Card for NbAiLab/nb_bert_debiased
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Period](#publish-periode)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/NbAiLab/notram
- **Repository:** https://github.com/NbAiLab/notram
- **Paper:** https://arxiv.org/abs/2104.09617
- **Point of Contact:** [Freddy Wetjen](mailto:freddy.wetjen@nb.no)
The Norwegian Colossal Corpus is a collection of multiple smaller Norwegian corpora suitable for training large language models. We have done extensive cleaning on the datasets and have made them available in a common format. The total size of the NCC is currently 45GB.
## How to Use
```python
from datasets import load_dataset
data = load_dataset("NbAiLab/nb_bert_debiased", streaming=True)
```
## Download Data
If you do not want to use the HuggingFace Datasets library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.
```bash
# Clone the training set
git clone https://huggingface.co/datasets/NbAiLab/nb_bert_debiased
# Create one large training file of all shards without unpacking
cat nb_bert_debiased/data/train*.gz > onefile.json.gz
```
<details>
<summary>List of all the files.</summary>
* [train-shard-0001-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0001-of-0033.json.gz)
* [train-shard-0002-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0002-of-0033.json.gz)
* [train-shard-0003-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0003-of-0033.json.gz)
* [train-shard-0004-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0004-of-0033.json.gz)
* [train-shard-0005-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0005-of-0033.json.gz)
* [train-shard-0006-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0006-of-0033.json.gz)
* [train-shard-0007-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0007-of-0033.json.gz)
* [train-shard-0008-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0008-of-0033.json.gz)
* [train-shard-0009-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0009-of-0033.json.gz)
* [train-shard-0010-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0010-of-0033.json.gz)
* [train-shard-0011-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0011-of-0033.json.gz)
* [train-shard-0012-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0012-of-0033.json.gz)
* [train-shard-0013-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0013-of-0033.json.gz)
* [train-shard-0014-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0014-of-0033.json.gz)
* [train-shard-0015-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0015-of-0033.json.gz)
* [train-shard-0016-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0016-of-0033.json.gz)
* [train-shard-0017-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0017-of-0033.json.gz)
* [train-shard-0018-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0018-of-0033.json.gz)
* [train-shard-0019-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0019-of-0033.json.gz)
* [train-shard-0020-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0020-of-0033.json.gz)
* [train-shard-0021-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0021-of-0033.json.gz)
* [train-shard-0022-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0022-of-0033.json.gz)
* [train-shard-0023-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0023-of-0033.json.gz)
* [train-shard-0024-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0024-of-0033.json.gz)
* [train-shard-0025-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0025-of-0033.json.gz)
* [train-shard-0026-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0026-of-0033.json.gz)
* [train-shard-0027-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0027-of-0033.json.gz)
* [train-shard-0028-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0028-of-0033.json.gz)
* [train-shard-0029-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0029-of-0033.json.gz)
* [train-shard-0030-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0030-of-0033.json.gz)
* [train-shard-0031-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0031-of-0033.json.gz)
* [train-shard-0032-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0032-of-0033.json.gz)
* [train-shard-0033-of-0033](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/train-shard-0033-of-0033.json.gz)
* [validation-shard-0001-of-0001](https://huggingface.co/datasets/NbAiLab/nb_bert_debiased/resolve/main/data/validation-shard-0001-of-0001.json.gz)
</details>
### Dataset Summary
The nb_bert_debiased dataset contains JSON lines with language training data. Here is an example JSON line:
```json
{
"id": "1006205",
"doc_type": "cc100",
"publish_year": 2021,
"lang_fasttext": "nn",
"lang_fasttext_conf": "0.641",
"text": "Eg har ein PLAN! KOS deg og ha ei fin helg"
}
```
## Data Fields
|**id:** | String with id to source of line and a unique identifier|
|:-----------|:------------|
|**doc_type** | String describing type of media text extracted from (I.e. book,newspaper etc)|
|**publish_year** | Integer. The year the text was published. When the year is undetermined, it is set to 2021.|
|**lang_fasttext** | String. Language of text identified by FastText|
|**lang_fasttext_conf** | String. Confidence calculated by FastText|
|**text** | String. The complete UTF-8 document. If longer than 1M characters, it is split.|
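As a small illustration of working with these fields (the records are inlined for the sketch; in practice they come from the streaming dataset shown earlier), note that `lang_fasttext_conf` is stored as a string and must be cast before a numeric comparison:

```python
# NCC-style records; the first is the example instance from above,
# the second is invented for illustration.
records = [
    {"id": "1006205", "doc_type": "cc100", "publish_year": 2021,
     "lang_fasttext": "nn", "lang_fasttext_conf": "0.641",
     "text": "Eg har ein PLAN! KOS deg og ha ei fin helg"},
    {"id": "1006206", "doc_type": "books", "publish_year": 1995,
     "lang_fasttext": "nb", "lang_fasttext_conf": "0.98", "text": "..."},
]

# Keep confident Bokmaal documents; cast the string confidence to float.
confident_nb = [r for r in records
                if r["lang_fasttext"] == "nb" and float(r["lang_fasttext_conf"]) > 0.9]
print(len(confident_nb))  # -> 1
```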
### Dataset Creation
We provide a **train** and a **validation** split. The validation split is a single 1GB file, while the train split is sharded into 1GB chunks.
All files are gzipped.
Build date: 01042022
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in our paper.
### Summary
| Words | Documents | Words/Document |
|--------------:|------------:|-----------------:|
| 4,886,201,920 | 10,859,366 | 449 |
### Document Types
| Source | Words | Documents | Words/Document |
|--------------------------------------:|--------------:|------------:|-----------------:|
| parliament | 1,260,502,586 | 9,225 | 136,639 |
| books | 835,555,215 | 23,539 | 35,496 |
| newspapers_online_nb | 482,883,100 | 3,415,325 | 141 |
| maalfrid_regjeringen | 357,127,434 | 911,741 | 391 |
| maalfrid_ssb | 277,248,313 | 844,469 | 328 |
| maalfrid_uio | 180,254,856 | 764,578 | 235 |
| government_nb | 132,914,771 | 3,451 | 38,514 |
| wikipedia_download_nbo | 109,831,216 | 518,951 | 211 |
| maalfrid_fylkesmannen | 101,893,489 | 458,784 | 222 |
| publicreports | 78,212,608 | 3,271 | 23,910 |
| maalfrid_nve | 66,092,070 | 299,384 | 220 |
| maalfrid_patentstyret | 64,430,833 | 212,117 | 303 |
| maalfrid_ntnu | 57,279,188 | 197,519 | 289 |
| newspapers_online_nn | 41,771,521 | 165,737 | 252 |
| lovdata_cd_odelsting_2005 | 36,005,494 | 1,932 | 18,636 |
| maalfrid_vegvesen | 33,131,414 | 164,695 | 201 |
| maalfrid_fhi | 32,476,731 | 142,987 | 227 |
| maalfrid_norad | 32,408,703 | 92,215 | 351 |
| maalfrid_skatteetaten | 32,317,533 | 81,905 | 394 |
| maalfrid_uib | 28,160,639 | 114,731 | 245 |
| wikipedia_download_nno | 26,831,488 | 141,872 | 189 |
| maalfrid_forskningsradet | 23,876,921 | 72,746 | 328 |
| maalfrid_nasjonalparkstyre | 21,130,603 | 93,013 | 227 |
| government_nn | 18,106,305 | 1,053 | 17,194 |
| maalfrid_nmbu | 17,892,631 | 69,032 | 259 |
| maalfrid_oslomet | 17,565,000 | 46,619 | 376 |
| maalfrid_domstol | 16,546,095 | 50,584 | 327 |
| maalfrid_banenor | 16,296,418 | 69,765 | 233 |
| maalfrid_nav | 16,112,370 | 73,396 | 219 |
| maalfrid_landbruksdirektoratet | 12,988,620 | 47,537 | 273 |
| maalfrid_helsedirektoratet | 12,894,141 | 48,874 | 263 |
| maalfrid_nokut | 10,028,741 | 38,243 | 262 |
| maalfrid_hi | 9,956,191 | 38,683 | 257 |
| maalfrid_norges-bank | 9,825,026 | 36,807 | 266 |
| maalfrid_udir | 9,767,693 | 38,341 | 254 |
| maalfrid_vkm | 9,743,704 | 31,997 | 304 |
| maalfrid_nbim | 9,562,477 | 17,995 | 531 |
| maalfrid_miljodirektoratet | 9,406,572 | 34,369 | 273 |
| maalfrid_distriktssenteret | 9,301,190 | 38,197 | 243 |
| maalfrid_ngu | 9,160,389 | 34,305 | 267 |
| maalfrid_ptil | 9,112,264 | 33,902 | 268 |
| maalfrid_nord | 8,917,259 | 44,408 | 200 |
| maalfrid_fiskeridir | 8,221,774 | 33,078 | 248 |
| maalfrid_hivolda | 7,752,415 | 26,223 | 295 |
| maalfrid_difi | 7,720,133 | 35,475 | 217 |
| maalfrid_mattilsynet | 7,412,149 | 26,741 | 277 |
| maalfrid_havarikommisjonen | 7,376,668 | 24,777 | 297 |
| maalfrid_kulturradet | 7,132,304 | 22,237 | 320 |
| maalfrid_ks | 6,841,571 | 27,134 | 252 |
| maalfrid_kystverket | 6,648,764 | 30,711 | 216 |
| maalfrid_udi | 6,362,856 | 18,908 | 336 |
| maalfrid_uia | 5,901,573 | 23,628 | 249 |
| maalfrid_hjelpemiddeldatabasen | 5,843,648 | 33,848 | 172 |
| maalfrid_khrono | 5,805,461 | 19,756 | 293 |
| maalfrid_helsetilsynet | 5,725,414 | 18,140 | 315 |
| maalfrid_moreforsk | 5,575,963 | 21,398 | 260 |
| maalfrid_jernbanedirektoratet | 5,427,230 | 21,485 | 252 |
| maalfrid_veiviseren | 5,261,440 | 17,865 | 294 |
| lovdata_cd_somb_rundskriv_2005 | 5,242,676 | 3,201 | 1,637 |
| maalfrid_dsb | 5,149,282 | 17,635 | 291 |
| lovdata_cd_sentrale_forskrifter_2005 | 5,007,812 | 11,381 | 440 |
| maalfrid_husbanken | 4,668,798 | 14,910 | 313 |
| maalfrid_legemiddelverket | 4,646,007 | 20,011 | 232 |
| maalfrid_vetinst | 4,619,818 | 14,350 | 321 |
| maalfrid_imdi | 4,588,421 | 15,135 | 303 |
| maalfrid_forsvarsbygg | 4,530,038 | 18,707 | 242 |
| maalfrid_sdir | 4,497,418 | 15,079 | 298 |
| maalfrid_konkurransetilsynet | 4,470,281 | 12,486 | 358 |
| maalfrid_arkivverket | 4,466,215 | 16,396 | 272 |
| maalfrid_dsa | 4,456,010 | 15,772 | 282 |
| maalfrid_hiof | 4,429,234 | 22,915 | 193 |
| maalfrid_ehelse | 4,339,382 | 22,355 | 194 |
| maalfrid_inn | 4,289,871 | 26,033 | 164 |
| maalfrid_klagenemndssekretariatet | 4,160,203 | 11,848 | 351 |
| maalfrid_sprakradet | 4,046,761 | 15,025 | 269 |
| maalfrid_nhh | 3,950,920 | 15,582 | 253 |
| maalfrid_dibk | 3,925,849 | 15,343 | 255 |
| maalfrid_kartverket | 3,690,053 | 18,511 | 199 |
| maalfrid_riksrevisjonen | 3,661,977 | 10,871 | 336 |
| maalfrid_toll | 3,478,604 | 13,678 | 254 |
| maalfrid_nibio | 3,427,231 | 16,942 | 202 |
| maalfrid_met | 3,421,328 | 18,123 | 188 |
| maalfrid_bufdir | 3,329,773 | 11,382 | 292 |
| maalfrid_artsdatabanken | 3,174,117 | 8,955 | 354 |
| maalfrid_politiet | 3,138,300 | 10,389 | 302 |
| maalfrid_nkom | 3,099,581 | 9,892 | 313 |
| maalfrid_vestlandfylke | 3,035,002 | 11,974 | 253 |
| maalfrid_uis | 2,893,474 | 9,730 | 297 |
| maalfrid_sykkelbynettverket | 2,800,659 | 11,722 | 238 |
| maalfrid_nlr | 2,621,712 | 15,694 | 167 |
| maalfrid_seniorporten | 2,590,273 | 8,044 | 322 |
| maalfrid_npd | 2,571,771 | 10,669 | 241 |
| maalfrid_custompublish | 2,419,117 | 9,128 | 265 |
| maalfrid_aldringoghelse | 2,397,641 | 6,716 | 357 |
| maalfrid_bioteknologiradet | 2,378,816 | 5,962 | 398 |
| maalfrid_arbeidstilsynet | 2,368,908 | 6,833 | 346 |
| maalfrid_nyemetoder | 2,347,435 | 10,643 | 220 |
| maalfrid_riksantikvaren | 2,234,416 | 8,679 | 257 |
| maalfrid_sjt | 2,220,680 | 11,082 | 200 |
| lovdata_cd_lokaleforskrifter_2005 | 2,165,875 | 22,106 | 97 |
| maalfrid_hvl | 2,122,182 | 9,291 | 228 |
| maalfrid_luftfartstilsynet | 2,080,150 | 9,780 | 212 |
| maalfrid_dfo | 2,065,318 | 9,087 | 227 |
| maalfrid_ldo | 2,036,871 | 7,250 | 280 |
| maalfrid_kompetansenorge | 1,932,064 | 10,175 | 189 |
| maalfrid_forbrukerradet | 1,928,045 | 7,246 | 266 |
| maalfrid_himolde | 1,903,669 | 9,889 | 192 |
| maalfrid_usn | 1,772,050 | 7,330 | 241 |
| lovdata_cd_norgeslover_2005 | 1,768,056 | 1,383 | 1,278 |
| maalfrid_naku | 1,724,479 | 5,154 | 334 |
| maalfrid_medietilsynet | 1,595,414 | 6,554 | 243 |
| maalfrid_matematikksenteret | 1,554,763 | 7,230 | 215 |
| maalfrid_diku | 1,533,863 | 6,185 | 247 |
| maalfrid_forskningsetikk | 1,528,351 | 5,488 | 278 |
| maalfrid_godeidrettsanlegg | 1,498,095 | 6,081 | 246 |
| maalfrid_dirmin | 1,451,325 | 5,246 | 276 |
| maalfrid_diskrimineringsnemnda | 1,446,778 | 4,130 | 350 |
| maalfrid_naturfag | 1,426,975 | 5,911 | 241 |
| maalfrid_arbeidsretten | 1,422,959 | 4,693 | 303 |
| maalfrid_fellesstudentsystem | 1,348,423 | 10,234 | 131 |
| lovdata_cd_rtv_rundskriv_2005 | 1,341,173 | 9,528 | 140 |
| maalfrid_nupi | 1,277,307 | 5,437 | 234 |
| maalfrid_kriminalitetsforebygging | 1,191,809 | 4,634 | 257 |
| maalfrid_anskaffelser | 1,178,401 | 5,426 | 217 |
| maalfrid_folketrygdfondet | 1,172,842 | 4,201 | 279 |
| maalfrid_miljopakken | 1,162,877 | 5,466 | 212 |
| lovdata_cd_skatt_rundskriv_2005 | 1,113,374 | 396 | 2,811 |
| maalfrid_nih | 1,107,364 | 5,246 | 211 |
| maalfrid_statsbygg | 1,093,882 | 4,375 | 250 |
| maalfrid_nb | 1,047,952 | 4,122 | 254 |
| maalfrid_npolar | 1,045,552 | 2,642 | 395 |
| maalfrid_unit | 1,038,636 | 6,274 | 165 |
| maalfrid_valgdirektoratet | 996,239 | 9,035 | 110 |
| maalfrid_barneombudet | 968,955 | 2,766 | 350 |
| maalfrid_datatilsynet | 960,327 | 2,924 | 328 |
| maalfrid_lottstift | 952,738 | 3,550 | 268 |
| maalfrid_aho | 948,960 | 4,489 | 211 |
| maalfrid_sykehuspartner | 926,472 | 4,525 | 204 |
| maalfrid_naturfagsenteret | 896,048 | 3,844 | 233 |
| maalfrid_khio | 844,370 | 3,346 | 252 |
| maalfrid_spesialenheten | 821,619 | 2,127 | 386 |
| maalfrid_xn--miljlftet-o8ab | 796,916 | 3,360 | 237 |
| maalfrid_samordnaopptak | 779,679 | 2,333 | 334 |
| maalfrid_helsenorge | 774,308 | 3,017 | 256 |
| maalfrid_skrivesenteret | 769,883 | 4,128 | 186 |
| maalfrid_mareano | 755,280 | 3,679 | 205 |
| maalfrid_fiskeridirektoratet | 745,427 | 2,414 | 308 |
| maalfrid_sykehusinnkjop | 731,256 | 4,289 | 170 |
| maalfrid_matportalen | 623,335 | 2,348 | 265 |
| maalfrid_spk | 602,237 | 2,115 | 284 |
| maalfrid_pasientsikkerhetsprogrammet | 593,147 | 4,670 | 127 |
| maalfrid_justervesenet | 584,862 | 1,876 | 311 |
| maalfrid_nhn | 580,465 | 3,563 | 162 |
| maalfrid_sshf | 566,623 | 1,883 | 300 |
| maalfrid_bibliotekutvikling | 556,597 | 3,190 | 174 |
| maalfrid_nysgjerrigper | 554,331 | 2,983 | 185 |
| maalfrid_nodnett | 531,154 | 2,650 | 200 |
| maalfrid_giek | 511,920 | 1,785 | 286 |
| maalfrid_une | 505,306 | 1,227 | 411 |
| maalfrid_samas | 497,271 | 2,533 | 196 |
| maalfrid_kriminalomsorgen | 492,290 | 1,937 | 254 |
| maalfrid_kjonnsforskning | 481,527 | 1,421 | 338 |
| lovdata_cd_rundskriv_lovavdeling_2005 | 468,349 | 408 | 1,147 |
| maalfrid_kunstkultursenteret | 464,656 | 1,419 | 327 |
| maalfrid_nynorsksenteret | 452,817 | 2,074 | 218 |
| maalfrid_stami | 442,196 | 1,154 | 383 |
| maalfrid_ceres | 439,453 | 1,916 | 229 |
| maalfrid_nsm | 436,831 | 1,519 | 287 |
| maalfrid_nfi | 418,595 | 1,510 | 277 |
| maalfrid_gjenopptakelse | 414,616 | 1,446 | 286 |
| maalfrid_nidsenter | 406,139 | 1,620 | 250 |
| maalfrid_forbrukertilsynet | 385,587 | 1,216 | 317 |
| maalfrid_nasjonalmuseet | 383,916 | 1,070 | 358 |
| maalfrid_natursekken | 375,039 | 3,535 | 106 |
| maalfrid_fordelingsutvalget | 350,682 | 1,372 | 255 |
| maalfrid_digdir | 349,083 | 2,095 | 166 |
| maalfrid_forsvaret | 329,307 | 1,209 | 272 |
| maalfrid_beccle | 326,693 | 1,503 | 217 |
| maalfrid_romsenter | 325,796 | 1,120 | 290 |
| maalfrid_geonorge | 296,865 | 1,606 | 184 |
| maalfrid_universell | 262,248 | 2,152 | 121 |
| maalfrid_ovf | 260,108 | 919 | 283 |
| maalfrid_forbrukereuropa | 256,472 | 1,008 | 254 |
| maalfrid_politihogskolen | 255,500 | 1,216 | 210 |
| maalfrid_vinmonopolet | 242,793 | 663 | 366 |
| maalfrid_energimerking | 234,655 | 1,027 | 228 |
| maalfrid_ombudsmann | 226,797 | 416 | 545 |
| maalfrid_vea-fs | 223,018 | 1,251 | 178 |
| maalfrid_traumebevisst | 221,606 | 2,409 | 91 |
| maalfrid_npe | 203,452 | 992 | 205 |
| maalfrid_pkh | 201,011 | 791 | 254 |
| maalfrid_helfo | 192,164 | 975 | 197 |
| maalfrid_opplaringslovutvalget | 191,387 | 542 | 353 |
| maalfrid_regionaleforskningsfond | 185,201 | 979 | 189 |
| maalfrid_nafkam | 174,285 | 563 | 309 |
| maalfrid_jernbanemagasinet | 173,851 | 411 | 422 |
| maalfrid_polarhistorie | 170,535 | 383 | 445 |
| maalfrid_aasentunet | 159,465 | 522 | 305 |
| maalfrid_riksteatret | 156,872 | 782 | 200 |
| maalfrid_realfagsloyper | 155,802 | 740 | 210 |
| maalfrid_koro | 153,577 | 567 | 270 |
| maalfrid_squarespace | 144,234 | 497 | 290 |
| maalfrid_politietssikkerhetstjeneste | 141,433 | 462 | 306 |
| maalfrid_unknown | 139,391 | 696 | 200 |
| maalfrid_whocc | 119,423 | 647 | 184 |
| maalfrid_konfliktraadet | 115,529 | 361 | 320 |
| maalfrid_okokrim | 114,946 | 367 | 313 |
| maalfrid_riksmekleren | 111,169 | 560 | 198 |
| maalfrid_sismo | 110,707 | 310 | 357 |
| maalfrid_brreg | 109,013 | 553 | 197 |
| maalfrid_akkreditert | 99,469 | 500 | 198 |
| maalfrid_sivilforsvaret | 98,232 | 512 | 191 |
| maalfrid_radetfordyreetikk | 94,594 | 427 | 221 |
| maalfrid_digidel | 92,808 | 598 | 155 |
| maalfrid_lanekassen | 91,949 | 295 | 311 |
| maalfrid_uit | 90,660 | 598 | 151 |
| maalfrid_nyinorge | 89,346 | 201 | 444 |
| maalfrid_lokforerskolen | 88,289 | 465 | 189 |
| maalfrid_generaladvokaten | 87,571 | 284 | 308 |
| maalfrid_varsom | 84,645 | 554 | 152 |
| maalfrid_kulturminnefondet | 79,735 | 419 | 190 |
| maalfrid_ffi | 79,606 | 214 | 371 |
| maalfrid_unesco | 76,476 | 374 | 204 |
| maalfrid_yrkesfisker | 72,721 | 491 | 148 |
| maalfrid_dekom | 72,501 | 1,298 | 55 |
| maalfrid_omsorgsforskning | 71,981 | 323 | 222 |
| maalfrid_lektor2 | 68,003 | 543 | 125 |
| maalfrid_openaccess | 63,876 | 193 | 330 |
| maalfrid_ssn | 61,318 | 293 | 209 |
| maalfrid_lokalhistorie | 60,633 | 245 | 247 |
| maalfrid_laudim | 58,222 | 392 | 148 |
| maalfrid_nlb | 57,131 | 197 | 290 |
| maalfrid_riksadvokaten | 55,995 | 150 | 373 |
| maalfrid_denkulturelleskolesekken | 45,031 | 240 | 187 |
| maalfrid_sivilrett | 43,904 | 141 | 311 |
| maalfrid_htu | 41,234 | 161 | 256 |
| maalfrid_yr | 40,051 | 554 | 72 |
| maalfrid_informasjonskompetanse | 39,227 | 320 | 122 |
| maalfrid_finansportalen | 38,872 | 180 | 215 |
| maalfrid_kulturped | 37,389 | 98 | 381 |
| maalfrid_dep | 36,476 | 121 | 301 |
| maalfrid_feide | 36,352 | 265 | 137 |
| maalfrid_kulturoghelse | 34,331 | 185 | 185 |
| maalfrid_fug | 33,825 | 119 | 284 |
| maalfrid_helseklage | 33,081 | 124 | 266 |
| maalfrid_nbsk | 30,683 | 210 | 146 |
| maalfrid_matogindustri | 30,599 | 200 | 152 |
| maalfrid_sinn | 27,629 | 152 | 181 |
| maalfrid_vergemal | 23,367 | 78 | 299 |
| maalfrid_konkursradet | 23,326 | 76 | 306 |
| maalfrid_transport21 | 22,917 | 82 | 279 |
| maalfrid_norec | 21,585 | 74 | 291 |
| maalfrid_pts | 21,215 | 80 | 265 |
| maalfrid_nasjonaleturistveger | 19,757 | 109 | 181 |
| maalfrid_hjelpelinjen | 19,099 | 85 | 224 |
| maalfrid_iearth | 18,844 | 148 | 127 |
| maalfrid_russamtalen | 18,703 | 67 | 279 |
| maalfrid_xn--kvinneligomskjring-1ub | 18,506 | 78 | 237 |
| maalfrid_nynorskbok | 17,294 | 95 | 182 |
| maalfrid_memu | 16,875 | 94 | 179 |
| maalfrid_regjeringsadvokaten | 16,862 | 53 | 318 |
| maalfrid_xn--forskerfr-t8a | 16,026 | 171 | 93 |
| maalfrid_xn--tilbakefring-2jb | 15,787 | 48 | 328 |
| maalfrid_skattefunn | 15,501 | 53 | 292 |
| maalfrid_ringerikefengsel | 15,018 | 26 | 577 |
| maalfrid_samfunnskunnskap | 14,898 | 58 | 256 |
| maalfrid_skeivtarkiv | 14,859 | 67 | 221 |
| maalfrid_fordelingsutvalet | 14,658 | 34 | 431 |
| maalfrid_shiprep | 14,451 | 142 | 101 |
| maalfrid_sevuppt | 13,985 | 54 | 258 |
| maalfrid_haldenfengsel | 13,218 | 37 | 357 |
| maalfrid_forbrukerklageutvalget | 12,953 | 49 | 264 |
| maalfrid_mhfa | 11,966 | 132 | 90 |
| maalfrid_ah | 11,787 | 36 | 327 |
| maalfrid_nettvett | 11,353 | 44 | 258 |
| maalfrid_uh-it | 11,020 | 274 | 40 |
| maalfrid_fishgen | 10,151 | 28 | 362 |
| maalfrid_designavgang | 10,083 | 73 | 138 |
| maalfrid_global | 9,363 | 43 | 217 |
| maalfrid_valg | 8,778 | 47 | 186 |
| maalfrid_havmiljo | 8,734 | 69 | 126 |
| maalfrid_miljoklagenemnda | 7,797 | 35 | 222 |
| maalfrid_altinn | 7,636 | 47 | 162 |
| maalfrid_spinn-inn | 7,381 | 46 | 160 |
| maalfrid_kantinekurset | 7,302 | 53 | 137 |
| maalfrid_bastoyfengsel | 6,990 | 54 | 129 |
| maalfrid_voldsoffererstatning | 6,079 | 27 | 225 |
| maalfrid_norskpetroleum | 5,953 | 117 | 50 |
| maalfrid_musikkbasertmiljobehandling | 4,895 | 36 | 135 |
| maalfrid_prosjektveiviseren | 4,860 | 13 | 373 |
| maalfrid_fmfiavo@fylkesmannen | 4,740 | 69 | 68 |
| maalfrid_aldersvennlig | 4,643 | 31 | 149 |
| maalfrid_barentswatch | 4,575 | 31 | 147 |
| maalfrid_kk-utvalget | 4,474 | 18 | 248 |
| maalfrid_agropub | 4,434 | 17 | 260 |
| maalfrid_utdanningiverden | 3,845 | 13 | 295 |
| maalfrid_overgangsbolig | 3,769 | 35 | 107 |
| maalfrid_forsvaretsmuseer | 3,744 | 34 | 110 |
| maalfrid_okopark | 3,282 | 12 | 273 |
| maalfrid_sikkerhverdag | 2,786 | 19 | 146 |
| maalfrid_pst | 2,643 | 13 | 203 |
| maalfrid_arkitektur | 2,321 | 14 | 165 |
| maalfrid_velgekte | 2,287 | 10 | 228 |
| maalfrid_addlab | 2,107 | 11 | 191 |
| maalfrid_romerikefengsel | 2,017 | 17 | 118 |
| maalfrid_utdanning | 2,009 | 12 | 167 |
| maalfrid_grunderskolen | 1,994 | 7 | 284 |
| maalfrid_umb | 1,958 | 9 | 217 |
| maalfrid_oslofengsel | 1,756 | 8 | 219 |
| maalfrid_alleteller | 1,511 | 7 | 215 |
| maalfrid_lykillinn | 1,349 | 4 | 337 |
| maalfrid_kulturfag | 1,215 | 6 | 202 |
| maalfrid_hjorteviltregisteret | 1,020 | 3 | 340 |
| maalfrid_unimus | 940 | 4 | 235 |
| maalfrid_anleggsregisteret | 928 | 5 | 185 |
| maalfrid_webhuset | 883 | 3 | 294 |
| maalfrid_mangfoldsprisen | 597 | 3 | 199 |
| maalfrid_algae2future | 456 | 8 | 57 |
| maalfrid_mammapresenterer | 447 | 2 | 223 |
| maalfrid_karriereveiledning | 382 | 26 | 14 |
| maalfrid_nodsms | 351 | 4 | 87 |
| maalfrid_kildekompasset | 302 | 1 | 302 |
| maalfrid_praksisfou | 297 | 1 | 297 |
| maalfrid_retttilaalese | 246 | 3 | 82 |
| maalfrid_indreostfoldfengsel | 215 | 3 | 71 |
| maalfrid_xn--kroppsvingsforskning-gcc | 205 | 2 | 102 |
| maalfrid_pahoyden | 154 | 1 | 154 |
| maalfrid_norren | 42 | 1 | 42 |
### Languages
| Language | Words | Documents | Words/Document |
|-----------:|--------------:|------------:|-----------------:|
| no | 3,208,084,695 | 8,290,110 | 386 |
| da | 917,080,415 | 322,045 | 2,847 |
| en | 462,136,101 | 1,422,633 | 324 |
| nn | 174,514,916 | 467,956 | 372 |
| fr | 48,750,032 | 104,698 | 465 |
| de | 26,433,213 | 61,760 | 427 |
| sv | 15,535,094 | 55,596 | 279 |
| es | 8,379,358 | 31,395 | 266 |
| fi | 3,857,523 | 10,268 | 375 |
| pt | 2,476,848 | 14,558 | 170 |
| oc | 2,104,415 | 4,845 | 434 |
| nl | 1,872,692 | 7,153 | 261 |
| zh | 1,452,798 | 7,540 | 192 |
| uk | 1,420,173 | 4,290 | 331 |
| ca | 1,361,797 | 3,577 | 380 |
| la | 1,280,142 | 500 | 2,560 |
| it | 1,255,675 | 6,812 | 184 |
| ru | 1,201,770 | 5,717 | 210 |
| et | 1,030,612 | 3,892 | 264 |
| cs | 909,670 | 4,254 | 213 |
| eu | 827,380 | 3,091 | 267 |
| pl | 745,342 | 5,022 | 148 |
| fa | 487,145 | 1,984 | 245 |
| ja | 340,847 | 3,481 | 97 |
| is | 303,953 | 979 | 310 |
| id | 213,904 | 1,228 | 174 |
| ar | 207,081 | 1,145 | 180 |
| hu | 190,336 | 1,290 | 147 |
| vi | 134,034 | 616 | 217 |
| so | 128,476 | 589 | 218 |
| el | 116,643 | 604 | 193 |
| hr | 109,342 | 493 | 221 |
| lv | 106,145 | 63 | 1,684 |
| sl | 91,364 | 648 | 140 |
| tr | 88,945 | 1,006 | 88 |
| eo | 80,138 | 473 | 169 |
| ro | 78,492 | 440 | 178 |
| lt | 65,104 | 545 | 119 |
| sr | 64,233 | 764 | 84 |
| gl | 62,865 | 570 | 110 |
| ko | 54,321 | 893 | 60 |
| war | 53,809 | 228 | 236 |
| th | 52,614 | 350 | 150 |
| am | 45,893 | 321 | 142 |
| ceb | 35,257 | 264 | 133 |
| ml | 34,523 | 148 | 233 |
| sq | 31,866 | 152 | 209 |
| tl | 30,909 | 161 | 191 |
| kk | 26,605 | 68 | 391 |
| mn | 21,540 | 22 | 979 |
| sw | 18,626 | 64 | 291 |
| pnb | 18,203 | 80 | 227 |
| sk | 17,548 | 196 | 89 |
| gu | 16,973 | 13 | 1,305 |
| bg | 16,746 | 96 | 174 |
| sh | 15,627 | 127 | 123 |
| ur | 15,353 | 138 | 111 |
| mk | 12,193 | 62 | 196 |
| ckb | 9,350 | 44 | 212 |
| ku | 8,316 | 48 | 173 |
| ast | 7,828 | 58 | 134 |
| az | 7,585 | 47 | 161 |
| uz | 6,873 | 34 | 202 |
| ta | 4,177 | 59 | 70 |
| fy | 3,567 | 26 | 137 |
| ms | 3,535 | 100 | 35 |
| hy | 3,409 | 31 | 109 |
| pa | 3,283 | 16 | 205 |
| hi | 2,810 | 40 | 70 |
| bo | 2,551 | 1 | 2,551 |
| ht | 2,534 | 11 | 230 |
| be | 2,418 | 42 | 57 |
| min | 2,155 | 7 | 307 |
| cy | 1,984 | 40 | 49 |
| jv | 1,887 | 30 | 62 |
| su | 1,840 | 23 | 80 |
| als | 1,826 | 40 | 45 |
| bn | 1,791 | 20 | 89 |
| ps | 1,740 | 14 | 124 |
| af | 1,703 | 20 | 85 |
| bs | 1,516 | 23 | 65 |
| qu | 1,484 | 13 | 114 |
| nds | 1,370 | 78 | 17 |
| my | 1,107 | 15 | 73 |
| ga | 967 | 26 | 37 |
| mt | 937 | 12 | 78 |
| si | 858 | 21 | 40 |
| te | 853 | 17 | 50 |
| ilo | 733 | 15 | 48 |
| io | 693 | 11 | 63 |
| km | 690 | 12 | 57 |
| tt | 675 | 20 | 33 |
| jbo | 621 | 27 | 23 |
| gn | 595 | 7 | 85 |
| as | 584 | 2 | 292 |
| ug | 581 | 6 | 96 |
| kv | 562 | 3 | 187 |
| kn | 531 | 19 | 27 |
| br | 522 | 19 | 27 |
| pam | 476 | 1 | 476 |
| he | 396 | 14 | 28 |
| kw | 327 | 5 | 65 |
| ka | 311 | 16 | 19 |
| vep | 302 | 13 | 23 |
| wa | 266 | 38 | 7 |
| yo | 261 | 5 | 52 |
| ky | 232 | 11 | 21 |
| azb | 216 | 1 | 216 |
| ba | 203 | 5 | 40 |
| gom | 164 | 9 | 18 |
| ia | 131 | 12 | 10 |
| tg | 129 | 3 | 43 |
| mr | 122 | 6 | 20 |
| lmo | 87 | 23 | 3 |
| lb | 77 | 17 | 4 |
| pms | 76 | 10 | 7 |
| vec | 67 | 3 | 22 |
| rue | 67 | 2 | 33 |
| ne | 51 | 5 | 10 |
| hsb | 51 | 2 | 25 |
| cbk | 46 | 2 | 23 |
| or | 44 | 2 | 22 |
| ie | 38 | 5 | 7 |
| tk | 36 | 4 | 9 |
| eml | 31 | 4 | 7 |
| arz | 31 | 1 | 31 |
| sco | 30 | 1 | 30 |
| bar | 30 | 3 | 10 |
| gd | 29 | 2 | 14 |
| li | 22 | 3 | 7 |
| mg | 22 | 4 | 5 |
| lrc | 20 | 1 | 20 |
| diq | 20 | 2 | 10 |
| dsb | 19 | 1 | 19 |
| yue | 19 | 1 | 19 |
| os | 15 | 2 | 7 |
| wuu | 14 | 1 | 14 |
| sd | 14 | 1 | 14 |
| nah | 14 | 2 | 7 |
| cv | 12 | 1 | 12 |
| scn | 9 | 2 | 4 |
| bcl | 8 | 1 | 8 |
| bh | 8 | 1 | 8 |
| new | 4 | 1 | 4 |
| ce | 4 | 1 | 4 |
| mzn | 3 | 1 | 3 |
| frr | 3 | 1 | 3 |
| gv | 3 | 1 | 3 |
| vo | 3 | 2 | 1 |
| lo | 2 | 1 | 2 |
### Publication Period
| Decade | Words | Documents | Words/Document |
|---------:|--------------:|------------:|-----------------:|
| 2020 | 4,052,373,794 | 10,835,886 | 1,425 |
| 2010 | 17,009,855 | 940 | 141,801 |
| 2000 | 56,172,494 | 2,884 | 200,149 |
| 1990 | 114,019,082 | 5,874 | 197,169 |
| 1980 | 39,419,883 | 1,480 | 266,616 |
| 1970 | 21,512,880 | 841 | 251,649 |
| 1960 | 17,545,214 | 469 | 373,059 |
| 1950 | 17,141,714 | 341 | 480,561 |
| 1940 | 28,883,477 | 532 | 513,832 |
| 1930 | 35,093,392 | 693 | 504,374 |
| 1920 | 51,125,258 | 1,067 | 483,297 |
| 1910 | 61,224,579 | 1,207 | 498,450 |
| 1900 | 59,281,717 | 1,124 | 523,247 |
| 1890 | 85,597,278 | 1,746 | 486,711 |
| 1880 | 58,217,754 | 1,062 | 551,360 |
| 1870 | 25,602,577 | 614 | 404,544 |
| 1860 | 39,006,777 | 692 | 547,879 |
| 1850 | 52,875,326 | 838 | 628,249 |
| 1840 | 30,500,062 | 516 | 588,425 |
| 1830 | 18,072,551 | 363 | 487,067 |
| 1820 | 4,554,472 | 141 | 338,978 |
| 1810 | 971,784 | 56 | 127,989 |
## Considerations for Using the Data
This corpus contains copyrighted material and may not be used outside the National Library of Norway. The dataset must not be distributed.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
[Freddy Wetjen](mailto:Freddy.wetjen@nb.no) and [Per Egil Kummervold](mailto:Per.Kummervold@nb.no)
## License
Different licenses apply to different parts of the corpus. Every document in the corpus carries a **"doc_type"** tag. If you cannot accept one of the licenses, filter out the **"doc_type"** values with the conflicting license.
| Doc_type | License |
| :-------- | :------------- |
| government_nb, government_nn, parliament, publicreports, lovdata_cd_\*, maalfrid_\* | [NLOD 2.0](https://data.norge.no/nlod/en/2.0/)|
| newspapers_ocr, newspapers_pdf, books| [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)|
| newspapers_online_nb, newspapers_online_nn | [CC BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/)|
| opensubtitles, wikipedia | [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) |
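As an example of such filtering, the sketch below drops records whose doc_type falls under CC BY-NC 2.0, for a hypothetical use case that cannot accept non-commercial terms. The doc_type names are taken from the license table above:

```python
# Doc_types under CC BY-NC 2.0, per the license table above.
NC_DOC_TYPES = {"newspapers_online_nb", "newspapers_online_nn"}

def allows_commercial_use(record: dict) -> bool:
    """True if the record's doc_type does not carry a non-commercial license."""
    return record["doc_type"] not in NC_DOC_TYPES

records = [
    {"doc_type": "books", "text": "..."},
    {"doc_type": "newspapers_online_nb", "text": "..."},
]
kept = [r for r in records if allows_commercial_use(r)]
print([r["doc_type"] for r in kept])  # ['books']
```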
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
@inproceedings{kummervold-etal-2021-operationalizing,
    title = {Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model},
    author = {Kummervold, Per E and
      De la Rosa, Javier and
      Wetjen, Freddy and
      Brygfjeld, Svein Arne},
    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
    year = {2021},
    address = {Reykjavik, Iceland (Online)},
    publisher = {Link{\"o}ping University Electronic Press, Sweden},
    url = {https://aclanthology.org/2021.nodalida-main.3},
    pages = {20--29},
    abstract = {In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library.
    The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models
    in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other
    languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore,
    we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.},
}
```
|
KevinZ | null | @article{ettinger2020bert,
title={What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models},
author={Ettinger, Allyson},
journal={Transactions of the Association for Computational Linguistics},
volume={8},
pages={34--48},
year={2020},
publisher={MIT Press}
} | Psycholinguistic dataset from 'What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models'
by Allyson Ettinger | false | 1 | false | KevinZ/psycholinguistic_eval | 2022-10-25T10:03:37.000Z | null | false | 939a75ee8e11464d473de956df8a96fa5e5e64b7 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en-US",
"license:mit",
"multilinguality:monolingual",
"size_categories:n<1K",
"task_categories:multiple-choice",
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:zero-shot-cl... | https://huggingface.co/datasets/KevinZ/psycholinguistic_eval/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en-US
license:
- mit
multilinguality:
- monolingual
pretty_name: psycholinguistic_eval
size_categories:
- n<1K
source_datasets: []
task_categories:
- multiple-choice
- fill-mask
- question-answering
- zero-shot-classification
task_ids: []
---
This is a suite of psycholinguistic datasets by Allyson Ettinger. See her [official Github repository](https://github.com/aetting/lm-diagnostics) for specific details. |
Ericblancosf | null | null | null | false | 1 | false | Ericblancosf/subtechnique | 2022-04-01T05:02:50.000Z | null | false | 0fbd5a29ff7f0c14ba9b11b878b05e8bdcc0a4c0 | [] | [] | https://huggingface.co/datasets/Ericblancosf/subtechnique/resolve/main/README.md | Mitre technique subtechnqie |
tan9 | null | null | null | false | 2 | false | tan9/bioasq | 2022-04-01T09:40:24.000Z | null | false | 79ac5a87c5025ad4eb1713832edaf19838d8c10f | [] | [] | https://huggingface.co/datasets/tan9/bioasq/resolve/main/README.md | annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- machine-generated
languages: []
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: bioasq
size_categories:
- unknown
source_datasets:
- extended|pubmed_qa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa |
tan9 | null | null | null | false | 1 | false | tan9/pubmedQA | 2022-04-01T09:44:08.000Z | null | false | 3955cef83f919910b99d77369b44c09a67907ef2 | [] | [] | https://huggingface.co/datasets/tan9/pubmedQA/resolve/main/README.md | annotations_creators:
- other
language_creators:
- other
languages:
- en-US
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: pubmedqa
size_categories:
- unknown
source_datasets: []
task_categories:
- question-answering
task_ids:
- extractive-qa |
sfdkiaei | null | null | null | false | 1 | false | sfdkiaei/EAS | 2022-04-01T11:14:44.000Z | null | false | b1aac0607073656a92770b7dc5766a02abef01d6 | [] | [] | https://huggingface.co/datasets/sfdkiaei/EAS/resolve/main/README.md | # EAS Dataset
[](https://opendatacommons.org/licenses/odbl/)
Emotions Analytic System (EAS) on Instagram social network data
Nowadays, thanks to the spread of social media and the large amount of data on the Internet, the way we view and interpret data is evolving. Visualization is one of the most important fields in data science, and given the growing usage of social media, analyzing the data it contains is crucial. In this research, the Emotion Analytic System (EAS) for Instagram social network data was designed and developed. The system analyzes the emotions and words that users write and presents them with visualization techniques. Over 370,000 Instagram comments were collected with data crawlers that we developed; the data was then prepared and preprocessed, including normalization, keyword extraction, etc. The system is developed in Python.
This dataset contains over 370,000 preprocessed comments (most of them in Persian) from 40 Instagram channels. These comments were crawled from 12 April 2017 (1396/01/26 A.H.S) to 29 July 2017 (1396/05/07 A.H.S).
# Citation
If you use this dataset in your publications, please cite this paper:
```
@article {
author = {Kiaei, Seyed Faridoddin and Dehghan Rouzi, Mohammad and Farzi, Saeed},
title = {Designing and Implementing an Emotion Analytic System (EAS) on Instagram Social Network Data},
journal = {International Journal of Web Research},
volume = {2},
number = {2},
pages = {9-14},
year = {2019},
publisher = {University of Science and Culture},
issn = {2645-4335},
eissn = {2645-4343},
doi = {10.22133/ijwr.2020.225574.1052},
keywords = {Emotion Analysis,visualization,Instagram,Election},
url = {http://ijwr.usc.ac.ir/article_110287.html},
eprint = {http://ijwr.usc.ac.ir/article_110287_ad2b34be8792fd3e55ae13ea0f367b7a.pdf}
}
```
|
jimregan | null | null | The PSST Challenge focuses on a technically-challenging and clinically
important task—high-accuracy automatic phoneme recognition of disordered
speech, in a diagnostic context—which has applications in many different
areas relating to speech and language disorders. | false | 1 | false | jimregan/psst | 2022-04-01T10:56:42.000Z | null | false | befbfaa257a8c30ecac899233856947f65272b5e | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/jimregan/psst/resolve/main/README.md | ---
license: apache-2.0
---
|
huggan | null | null | null | false | 6 | false | huggan/few-shot-obama | 2022-04-12T14:05:43.000Z | null | false | 5e44a58b8670e3c79a1a78efbd08ba3f3cbddfac | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-obama/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/few-shot-grumpy-cat | 2022-04-12T14:05:58.000Z | null | false | ec44f1c4b919da79f19f17f39233d1622ac359fa | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-grumpy-cat/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 3 | false | huggan/few-shot-panda | 2022-04-12T14:06:07.000Z | null | false | 687192a767a75775f5c7eb8dae634d52f23a7b53 | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-panda/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/few-shot-cat | 2022-04-12T14:06:50.000Z | null | false | 1abe3d88240ec5d9d0072cba028e81def9a26a71 | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-cat/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/few-shot-dog | 2022-04-12T14:07:22.000Z | null | false | 53fbaa9de53882a687970fba23a960f75c452df6 | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-dog/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |