id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
PlanTL-GOB-ES/SQAC | 2022-11-18T12:00:35.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"arxiv:1606.05250",
"region:us"
] | PlanTL-GOB-ES | This dataset contains 6,247 contexts and 18,817 questions with their answers, 1 to 5 for each fragment.
The sources of the contexts are:
* Encyclopedic articles from [Wikipedia in Spanish](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News from [Wikinews in Spanish](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
* Text from the Spanish corpus [AnCora](http://clic.ub.edu/corpus/en), which is a mix of different newswire and literature sources, used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
This dataset can be used to build extractive-QA systems. | bibtex
@article{DBLP:journals/corr/abs-2107-07253,
author = {Asier Guti{\'{e}}rrez{-}Fandi{\~{n}}o and
Jordi Armengol{-}Estap{\'{e}} and
Marc P{\`{a}}mies and
Joan Llop{-}Palao and
Joaqu{\'{\i}}n Silveira{-}Ocampo and
Casimiro Pio Carrino and
Aitor Gonzalez{-}Agirre and
Carme Armentano{-}Oller and
Carlos Rodr{\'{\i}}guez Penagos and
Marta Villegas},
title = {Spanish Language Models},
journal = {CoRR},
volume = {abs/2107.07253},
year = {2021},
url = {https://arxiv.org/abs/2107.07253},
archivePrefix = {arXiv},
eprint = {2107.07253},
timestamp = {Wed, 21 Jul 2021 15:55:35 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-07253.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 7 | 171 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Spanish Question Answering Corpus (SQAC)
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# SQAC (Spanish Question-Answering Corpus)
## Dataset Description
SQAC is an extractive QA dataset for the Spanish language.
- **Paper:** [MarIA: Spanish Language Models](https://upcommons.upc.edu/bitstream/handle/2117/367156/6405-5863-1-PB%20%281%29.pdf?sequence=1)
- **Point of Contact:** carlos.rodriguez1@bsc.es
- **Leaderboard:** [EvalEs](https://plantl-gob-es.github.io/spanish-benchmark/)
### Dataset Summary
SQAC contains 6,247 contexts and 18,817 questions with their respective answers, 1 to 5 per context.
The sources of the contexts are:
* Encyclopedic articles from the [Spanish Wikipedia](https://es.wikipedia.org/), used under [CC-by-sa licence](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
* News articles from [Wikinews](https://es.wikinews.org/), used under [CC-by licence](https://creativecommons.org/licenses/by/2.5/).
* Newswire and literature text from the [AnCora corpus](http://clic.ub.edu/corpus/en), used under [CC-by licence](https://creativecommons.org/licenses/by/4.0/legalcode).
### Supported Tasks
Extractive-QA
### Languages
- Spanish (es)
### Directory Structure
- README.md
- SQAC.py
- dev.json
- test.json
- train.json
## Dataset Structure
### Data Instances
<pre>
{
'id': '6cf3dcd6-b5a3-4516-8f9e-c5c1c6b66628',
'title': 'Historia de Japón',
'context': 'La historia de Japón (日本の歴史 o 日本史, Nihon no rekishi / Nihonshi?) es la sucesión de hechos acontecidos dentro del archipiélago japonés. Algunos de estos hechos aparecen aislados e influenciados por la naturaleza geográfica de Japón como nación insular, en tanto que otra serie de hechos, obedece a influencias foráneas como en el caso del Imperio chino, el cual definió su idioma, su escritura y, también, su cultura política. Asimismo, otra de las influencias foráneas fue la de origen occidental, lo que convirtió al país en una nación industrial, ejerciendo con ello una esfera de influencia y una expansión territorial sobre el área del Pacífico. No obstante, dicho expansionismo se detuvo tras la Segunda Guerra Mundial y el país se posicionó en un esquema de nación industrial con vínculos a su tradición cultural.',
'question': '¿Qué influencia convirtió Japón en una nación industrial?',
'answers': {
'text': ['la de origen occidental'],
'answer_start': [473]
}
}
</pre>
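Answer offsets follow the SQuAD convention: `answer_start` is a character index into `context`, and the slice `context[answer_start : answer_start + len(text)]` must reproduce the answer string. The sketch below illustrates that consistency check on an abbreviated excerpt of the context above, so the recomputed offset differs from the `473` stored in the full record:

```python
# Abbreviated excerpt of the context shown above (the full record stores
# answer_start=473 relative to the complete context string).
context = ("Asimismo, otra de las influencias foráneas fue la de origen "
           "occidental, lo que convirtió al país en una nación industrial.")
answer_text = "la de origen occidental"

# Recompute the character offset for this excerpt; the SQuAD-style
# invariant is that the slice at answer_start reproduces the answer.
answer_start = context.find(answer_text)
recovered = context[answer_start:answer_start + len(answer_text)]
```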
### Data Fields
<pre>
{
id: str
title: str
context: str
question: str
answers: {
answer_start: [int]
text: [str]
}
}
</pre>
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 15,036 |
| `dev` | 1,864 |
| `test`        | 1,910  |
## Content analysis
### Number of articles, paragraphs and questions
* Number of articles: 3,834
* Number of contexts: 6,247
* Number of questions: 18,817
* Number of sentences: 48,026
* Questions/Context ratio: 3.01
* Sentences/Context ratio: 7.70
### Number of tokens
* Total tokens in context: 1,561,616
* Average tokens/context: 250
* Total tokens in questions: 203,235
* Average tokens/question: 10.80
* Total tokens in answers: 90,307
* Average tokens/answer: 4.80
### Lexical variation
46.38% of the words in the Question can be found in the Context.
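The overlap statistic above can be reproduced per question–context pair with a token-set intersection. A rough sketch (whitespace tokenization with simple punctuation stripping — an illustrative simplification, not necessarily the tokenizer behind the reported 46.38%):

```python
def lexical_overlap(question: str, context: str) -> float:
    """Fraction of distinct question tokens that also occur in the context."""
    def tokens(text):
        # Naive tokenizer: split on whitespace, strip common punctuation.
        return {w.strip("¿?.,()").lower() for w in text.split()}
    q = tokens(question)
    return len(q & tokens(context)) / len(q)

question = "¿Qué influencia convirtió Japón en una nación industrial?"
context = ("Asimismo, otra de las influencias foráneas fue la de origen "
           "occidental, lo que convirtió al país en una nación industrial")
ratio = lexical_overlap(question, context)  # 5 of 8 question tokens match
```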
### Question type
| Question | Count | % |
|----------|-------:|---:|
| qué | 6,381 | 33.91 % |
| quién/es | 2,952 | 15.69 % |
| cuál/es | 2,034 | 10.81 % |
| cómo | 1,949 | 10.36 % |
| dónde | 1,856 | 9.86 % |
| cuándo | 1,639 | 8.71 % |
| cuánto | 1,311 | 6.97 % |
| cuántos | 495 |2.63 % |
| adónde | 100 | 0.53 % |
| cuánta | 49 | 0.26 % |
| no question mark | 43 | 0.23 % |
| cuántas | 19 | 0.10 % |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed the existing curation guidelines of SQuAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250) as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
The source data are scraped articles from Wikinews, the Spanish Wikipedia and the AnCora corpus.
- [Spanish Wikipedia](https://es.wikipedia.org)
- [Spanish Wikinews](https://es.wikinews.org/)
- [AnCora corpus](http://clic.ub.edu/corpus/en)
#### Who are the source language producers?
Contributors to the aforementioned sites.
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 [(Rajpurkar, Pranav et al.)](http://arxiv.org/abs/1606.05250).
#### Who are the annotators?
Native speakers of Spanish.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of language models in Spanish.
### Discussion of Biases
No postprocessing steps were applied to mitigate potential social biases.
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to plantl-gob-es@bsc.es.
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```
@article{maria,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
### Contributions
[N/A]
|
scikit-learn/imdb | 2022-06-16T09:11:24.000Z | [
"license:other",
"region:us"
] | scikit-learn | null | null | null | 0 | 171 | ---
license: other
---
This is a sentiment analysis dataset based on IMDB movie reviews, originally released by Stanford University.
```
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets.
We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.
Raw text and already processed bag of words formats are provided. See the README file contained in the release for more details.
```
The original release is available [here](http://ai.stanford.edu/~amaas/data/sentiment/).
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
``` |
tner/wnut2017 | 2022-08-06T23:30:30.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"region:us"
] | tner | [WNUT 2017 NER dataset](https://aclanthology.org/W17-4418/) | @inproceedings{derczynski-etal-2017-results,
title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition",
author = "Derczynski, Leon and
Nichols, Eric and
van Erp, Marieke and
Limsopatham, Nut",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4418",
doi = "10.18653/v1/W17-4418",
pages = "140--147",
abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'} hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the ability of participating entries to detect and classify novel and emerging named entities in noisy text.",
} | null | 0 | 171 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1k<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: WNUT 2017
---
# Dataset Card for "tner/wnut2017"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/W17-4418/](https://aclanthology.org/W17-4418/)
- **Dataset:** WNUT 2017
- **Domain:** Twitter, Reddit, YouTube, and StackExchange
- **Number of Entity Types:** 6
### Dataset Summary
The WNUT 2017 NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `creative-work`, `corporation`, `group`, `location`, `person`, `product`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'],
'tags': [12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 3, 9, 9, 12, 3, 12, 12, 12, 12, 12, 12, 12, 12]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/wnut2017/raw/main/dataset/label.json).
```python
{
"B-corporation": 0,
"B-creative-work": 1,
"B-group": 2,
"B-location": 3,
"B-person": 4,
"B-product": 5,
"I-corporation": 6,
"I-creative-work": 7,
"I-group": 8,
"I-location": 9,
"I-person": 10,
"I-product": 11,
"O": 12
}
```
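With the `label2id` mapping above, the integer `tags` in each instance can be decoded back to IOB2 strings by inverting the dictionary — sketched here on a slice of the training example shown earlier:

```python
label2id = {
    "B-corporation": 0, "B-creative-work": 1, "B-group": 2, "B-location": 3,
    "B-person": 4, "B-product": 5, "I-corporation": 6, "I-creative-work": 7,
    "I-group": 8, "I-location": 9, "I-person": 10, "I-product": 11, "O": 12,
}
# Invert the mapping to decode integer tags into IOB2 label strings.
id2label = {i: label for label, i in label2id.items()}

tokens = ["Empire", "State", "Building", "=", "ESB"]
tags = [3, 9, 9, 12, 3]  # slice of the example instance above
labels = [id2label[t] for t in tags]
# → ['B-location', 'I-location', 'I-location', 'O', 'B-location']
```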
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|wnut2017 | 2395| 1009|1287|
### Citation Information
```
@inproceedings{derczynski-etal-2017-results,
title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition",
author = "Derczynski, Leon and
Nichols, Eric and
van Erp, Marieke and
Limsopatham, Nut",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4418",
doi = "10.18653/v1/W17-4418",
pages = "140--147",
abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'} hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the ability of participating entries to detect and classify novel and emerging named entities in noisy text.",
}
``` |
luigisaetta/atco2 | 2022-08-29T07:36:28.000Z | [
"region:us"
] | luigisaetta | null | null | null | 2 | 171 | This dataset contains ATC communication.
It can be used to fine tune an **ASR** model, specialised for Air Traffic Control Communications (ATC)
Its data have been taken from the [ATCO2 site](https://www.atco2.org/data) |
ChristophSchuhmann/improved_aesthetics_5plus | 2022-08-11T12:46:57.000Z | [
"license:apache-2.0",
"region:us"
] | ChristophSchuhmann | null | null | null | 13 | 171 | ---
license: apache-2.0
---
|
taka-yayoi/databricks-dolly-15k-ja | 2023-04-17T09:18:13.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | taka-yayoi | null | null | null | 1 | 171 | ---
license: cc-by-sa-3.0
---
This dataset builds on the dataset below; the column names were changed and the data converted to JSONL so that it can be used with Dolly's training scripts.
https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
Dolly
https://github.com/databrickslabs/dolly |
jiacheng-ye/nl2bash | 2023-04-17T12:55:38.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"code",
"region:us"
] | jiacheng-ye | The dataset is constructed from
https://github.com/TellinaTool/nl2bash | @inproceedings{LinWZE2018:NL2Bash,
author = {Xi Victoria Lin and Chenglong Wang and Luke Zettlemoyer and Michael D. Ernst},
title = {NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources
and Evaluation {LREC} 2018, Miyazaki (Japan), 7-12 May, 2018.},
year = {2018}
} | null | 0 | 171 | ---
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: NL2Bash
size_categories:
- 1K<n<10K
--- |
tomas-gajarsky/cifar100-lt | 2023-06-24T20:25:07.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:cifar100",
"language:en",
"license:apache-2.0",
"region:us"
] | tomas-gajarsky | The CIFAR-100-LT dataset is comprised of under 60,000 color images, each measuring 32x32 pixels,
distributed across 100 distinct classes.
The number of samples within each class decreases exponentially with factors of 10 and 100.
The dataset includes 10,000 test images, with 100 images per class,
and fewer than 50,000 training images.
These 100 classes are further organized into 20 overarching superclasses.
Each image is assigned two labels: a fine label denoting the specific class,
and a coarse label representing the associated superclass. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | null | 0 | 171 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- cifar100
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-100
pretty_name: Cifar100-LT
dataset_info:
features:
- name: img
dtype: image
- name: fine_label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': cra
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: coarse_label
dtype:
class_label:
names:
'0': aquatic_mammals
'1': fish
'2': flowers
'3': food_containers
'4': fruit_and_vegetables
'5': household_electrical_devices
'6': household_furniture
'7': insects
'8': large_carnivores
'9': large_man-made_outdoor_things
'10': large_natural_outdoor_scenes
'11': large_omnivores_and_herbivores
'12': medium_mammals
'13': non-insect_invertebrates
'14': people
'15': reptiles
'16': small_mammals
'17': trees
'18': vehicles_1
'19': vehicles_2
config_name: cifar100
splits:
- name: train
- name: test
num_bytes: 22605519
num_examples: 10000
download_size: 169001437
---
# Dataset Card for CIFAR-100-LT (Long Tail)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Paper:** [Paper imbalanced example](https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf)
- **Leaderboard:** [r-10](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-10) [r-100](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-100)
### Dataset Summary
The CIFAR-100-LT imbalanced dataset comprises fewer than 60,000 color images, each measuring 32x32 pixels,
distributed across 100 distinct classes.
The number of samples within each class decreases exponentially with factors of 10 and 100.
The dataset includes 10,000 test images, with 100 images per class,
and fewer than 50,000 training images.
These 100 classes are further organized into 20 overarching superclasses.
Each image is assigned two labels: a fine label denoting the specific class,
and a coarse label representing the associated superclass.
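The exponential decay described above follows the standard long-tailed protocol of Cui et al. (2019): with imbalance ratio r and C classes, class i keeps roughly n_max · r^(−i/(C−1)) training samples. A sketch of that profile (the exact per-class counts in this dataset may be rounded differently):

```python
def long_tail_counts(n_max: int = 500, num_classes: int = 100, ratio: int = 100):
    """Per-class training-sample counts under exponential decay (Cui et al., 2019)."""
    return [int(n_max * ratio ** (-i / (num_classes - 1)))
            for i in range(num_classes)]

counts = long_tail_counts()
# Head class keeps 500 samples, tail class 500/100 = 5,
# so the training set totals fewer than 50,000 images.
```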
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/long-tail-learning-on-cifar-100-lt-r-100).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>, 'fine_label': 19,
'coarse_label': 11
}
```
### Data Fields
- `img`: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column (`dataset[0]["img"]`) the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time, so it is important to query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`.
- `fine_label`: an `int` classification label with the following mapping:
`0`: apple
`1`: aquarium_fish
`2`: baby
`3`: bear
`4`: beaver
`5`: bed
`6`: bee
`7`: beetle
`8`: bicycle
`9`: bottle
`10`: bowl
`11`: boy
`12`: bridge
`13`: bus
`14`: butterfly
`15`: camel
`16`: can
`17`: castle
`18`: caterpillar
`19`: cattle
`20`: chair
`21`: chimpanzee
`22`: clock
`23`: cloud
`24`: cockroach
`25`: couch
`26`: cra
`27`: crocodile
`28`: cup
`29`: dinosaur
`30`: dolphin
`31`: elephant
`32`: flatfish
`33`: forest
`34`: fox
`35`: girl
`36`: hamster
`37`: house
`38`: kangaroo
`39`: keyboard
`40`: lamp
`41`: lawn_mower
`42`: leopard
`43`: lion
`44`: lizard
`45`: lobster
`46`: man
`47`: maple_tree
`48`: motorcycle
`49`: mountain
`50`: mouse
`51`: mushroom
`52`: oak_tree
`53`: orange
`54`: orchid
`55`: otter
`56`: palm_tree
`57`: pear
`58`: pickup_truck
`59`: pine_tree
`60`: plain
`61`: plate
`62`: poppy
`63`: porcupine
`64`: possum
`65`: rabbit
`66`: raccoon
`67`: ray
`68`: road
`69`: rocket
`70`: rose
`71`: sea
`72`: seal
`73`: shark
`74`: shrew
`75`: skunk
`76`: skyscraper
`77`: snail
`78`: snake
`79`: spider
`80`: squirrel
`81`: streetcar
`82`: sunflower
`83`: sweet_pepper
`84`: table
`85`: tank
`86`: telephone
`87`: television
`88`: tiger
`89`: tractor
`90`: train
`91`: trout
`92`: tulip
`93`: turtle
`94`: wardrobe
`95`: whale
`96`: willow_tree
`97`: wolf
`98`: woman
`99`: worm
- `coarse_label`: an `int` coarse classification label with following mapping:
`0`: aquatic_mammals
`1`: fish
`2`: flowers
`3`: food_containers
`4`: fruit_and_vegetables
`5`: household_electrical_devices
`6`: household_furniture
`7`: insects
`8`: large_carnivores
`9`: large_man-made_outdoor_things
`10`: large_natural_outdoor_scenes
`11`: large_omnivores_and_herbivores
`12`: medium_mammals
`13`: non-insect_invertebrates
`14`: people
`15`: reptiles
`16`: small_mammals
`17`: trees
`18`: vehicles_1
`19`: vehicles_2
### Data Splits
| name |train|test|
|----------|----:|---------:|
|cifar100|<50000| 10000|
### Licensing Information
Apache License 2.0
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and all contributors for adding the original balanced CIFAR-100 dataset. |
covid_qa_castorini | 2022-11-03T16:30:54.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2004.11339",
"region:us"
] | null | CovidQA is the beginnings of a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle's COVID-19 Open Research Dataset Challenge. | @article{tang2020rapidly,
title={Rapidly Bootstrapping a Question Answering Dataset for COVID-19},
author={Tang, Raphael and Nogueira, Rodrigo and Zhang, Edwin and Gupta, Nikhil and Cam, Phuong and Cho, Kyunghyun and Lin, Jimmy},
journal={arXiv preprint arXiv:2004.11339},
year={2020}
} | null | 0 | 170 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: covidqa
pretty_name: CovidQaCastorini
dataset_info:
- config_name: covid_qa_deepset
features:
- name: document_id
dtype: int32
- name: context
dtype: string
- name: question
dtype: string
- name: is_impossible
dtype: bool
- name: id
dtype: int32
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 65151262
num_examples: 2019
download_size: 4418117
dataset_size: 65151262
- config_name: covidqa
features:
- name: category_name
dtype: string
- name: question_query
dtype: string
- name: keyword_query
dtype: string
- name: answers
sequence:
- name: id
dtype: string
- name: title
dtype: string
- name: exact_answer
dtype: string
splits:
- name: train
num_bytes: 33757
num_examples: 27
download_size: 51438
dataset_size: 33757
- config_name: covid_qa_castorini
features:
- name: category_name
dtype: string
- name: question_query
dtype: string
- name: keyword_query
dtype: string
- name: answers
sequence:
- name: id
dtype: string
- name: title
dtype: string
- name: exact_answer
dtype: string
splits:
- name: train
num_bytes: 33757
num_examples: 27
download_size: 51438
dataset_size: 33757
---
# Dataset Card for covid_qa_castorini
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://covidqa.ai
- **Repository:** https://github.com/castorini/pygaggle
- **Paper:** https://arxiv.org/abs/2004.11339
- **Point of Contact:** [Castorini research group @UWaterloo](https://github.com/castorini/)
### Dataset Summary
CovidQA is a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered
from Kaggle’s COVID-19 Open Research Dataset Challenge.
The dataset comprises 156 question-article pairs with 27 questions (topics) and 85 unique articles.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
**What do the instances that comprise the dataset represent?**
Each represents a question, a context (document passage from the CORD19 dataset) and an answer.
**How many instances are there in total?**
There are 156 question-article pairs in total, covering 27 topics over 85 unique articles.
**What data does each instance consist of?**
Each instance is a query (natural language question and keyword-based), a set of answers, and a document id with its title associated with each answer.
[More Information Needed]
### Data Fields
The data was annotated in the SQuAD style, where each row contains:
* **question_query**: Natural language question query
* **keyword_query**: Keyword-based query
* **category_name**: Category in which the queries are part of
* **answers**: List of answers
* **id**: The document ID the answer is found on
* **title**: Title of the document of the answer
* **exact_answer**: Text (string) of the exact answer
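Because each topic carries a list of answers, one record expands into several question–article pairs. A hedged sketch of that flattening, using a hypothetical record shaped like the fields above (the values are illustrative, not taken from the dataset):

```python
# Hypothetical record following the field layout described above.
record = {
    "category_name": "Transmission",
    "question_query": "What is the incubation period of the virus?",
    "keyword_query": "coronavirus incubation period",
    "answers": [
        {"id": "doc-1", "title": "Article A", "exact_answer": "5.2 days"},
        {"id": "doc-2", "title": "Article B", "exact_answer": "up to 14 days"},
    ],
}

# Flatten into one (question, document id, answer) tuple per answer,
# which is how one topic yields multiple question-article pairs.
pairs = [
    (record["question_query"], a["id"], a["exact_answer"])
    for a in record["answers"]
]
```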
### Data Splits
**data/kaggle-lit-review-0.2.json**: 156 question-article pairs with 27 questions (topics) and 85 unique articles from
CORD-19.
[More Information Needed]
## Dataset Creation
The dataset aims to guide research until more substantial evaluation resources become available. Being a small dataset,
it can be helpful for evaluating the zero-shot or transfer capabilities of existing models on topics specifically related to COVID-19.
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
### Annotations
Five of the co-authors participated in this annotation effort, applying the aforementioned approach, with one lead
annotator responsible for approving topics and answering technical questions from the other annotators. Two annotators are
undergraduate students majoring in computer science, one is a science alumna, another is a computer science professor,
and the lead annotator is a graduate student in computer science—all affiliated with the University of Waterloo.
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The dataset was intended as a stopgap measure for guiding research until more substantial evaluation resources become available.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
While this dataset, comprising 124 question–article pairs as of the present version 0.1 release, does not have sufficient
examples for supervised machine learning, it can be helpful for evaluating the zero-shot or transfer capabilities
of existing models on topics specifically related to COVID-19.
## Additional Information
The authors listed on the homepage maintain and support the dataset.
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is licensed under the [Apache License 2.0](https://github.com/castorini/pygaggle/blob/master/LICENSE).
### Citation Information
```
@article{tang2020rapidly,
title={Rapidly Bootstrapping a Question Answering Dataset for COVID-19},
author={Tang, Raphael and Nogueira, Rodrigo and Zhang, Edwin and Gupta, Nikhil and Cam, Phuong and Cho, Kyunghyun and Lin, Jimmy},
journal={arXiv preprint arXiv:2004.11339},
year={2020}
}
```
### Contributions
Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset. |
maritaca-ai/ag_news_pt | 2023-02-16T00:58:33.000Z | [
"region:us"
] | maritaca-ai | AG is a collection of more than 1 million news articles. News articles have been
gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of
activity. ComeToMyHead is an academic news search engine which has been running
since July 2004. The dataset is provided by the academic community for research
purposes in data mining (clustering, classification, etc), information retrieval
(ranking, search, etc), xml, data compression, data streaming, and any other
non-commercial activity. For more information, please refer to the link
http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The AG's news topic classification dataset is constructed by Xiang Zhang
(xiang.zhang@nyu.edu) from the dataset above. It is used as a text
classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann
LeCun. Character-level Convolutional Networks for Text Classification. Advances
in Neural Information Processing Systems 28 (NIPS 2015). | @inproceedings{Zhang2015CharacterlevelCN,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun},
booktitle={NIPS},
year={2015}
} | null | 1 | 170 | Entry not found |
Francesco/trail-camera | 2023-03-30T09:11:17.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 170 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': game
'1': Deer
'2': Hog
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: trail-camera
tags:
- rf100
---
# Dataset Card for trail-camera
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/trail-camera
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
trail-camera
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
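Since the `bbox` field uses the COCO `[x_min, y_min, width, height]` convention, converting a box to corner coordinates is a one-liner. A minimal sketch (the helper name is mine, not part of the dataset):

```python
def coco_to_xyxy(bbox):
    """Convert a COCO-format [x_min, y_min, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# One of the boxes from the sample instance above
print(coco_to_xyxy([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```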
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/trail-camera
### Citation Information
```
@misc{ trail-camera,
title = { trail camera Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/trail-camera } },
url = { https://universe.roboflow.com/object-detection/trail-camera },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
ai4bharat/IN22-Conv | 2023-09-12T11:11:17.000Z | [
"task_categories:translation",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:1K<n<10K",
"language:as",
"language:bn",
"language:brx",
"language:doi",
"language:en",
"language:gom",
"language:gu",
"language:hi",
"language:kn",
"language:ks",
"language:mai",
"language:ml",
"language:mr",
"language:mni",
"language:ne",
"language:or",
"language:pa",
"language:sa",
"language:sat",
"language:sd",
"language:ta",
"language:te",
"language:ur",
"license:cc-by-4.0",
"arxiv:2305.16307",
"region:us"
] | ai4bharat | IN-22 is a newly created comprehensive benchmark for evaluating machine translation performance in multi-domain, n-way parallel contexts across 22 Indic languages.
IN22-Conv is the conversation domain subset of IN22. It is designed to assess translation quality in typical day-to-day conversational-style applications.
Currently, we use it for sentence-level evaluation of MT systems but can be repurposed for document translation evaluation as well. | @article{ai4bharat2023indictrans2,
title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan},
year = {2023},
journal = {arXiv preprint arXiv: 2305.16307}
} | null | 2 | 170 | ---
language:
- as
- bn
- brx
- doi
- en
- gom
- gu
- hi
- kn
- ks
- mai
- ml
- mr
- mni
- ne
- or
- pa
- sa
- sat
- sd
- ta
- te
- ur
language_details: >-
asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr,
hin_Deva, kan_Knda, kas_Arab, mai_Deva, mal_Mlym, mar_Deva, mni_Mtei,
npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck, snd_Deva, tam_Taml,
tel_Telu, urd_Arab
license: cc-by-4.0
language_creators:
- expert-generated
multilinguality:
- multilingual
- translation
pretty_name: in22-conv
size_categories:
- 1K<n<10K
task_categories:
- translation
---
# IN22-Conv
IN-22 is a newly created comprehensive benchmark for evaluating machine translation performance in multi-domain, n-way parallel contexts across 22 Indic languages. IN22-Conv is the conversation domain subset of IN22. It is designed to assess translation quality in typical day-to-day conversational-style applications. The evaluation subset consists of 1024 sentences translated across 22 Indic languages enabling evaluation of MT systems across 506 directions.
Currently, we use it for sentence-level evaluation of MT systems but can be repurposed for document translation evaluation as well.
Here is the domain distribution of our IN22-Conv evaluation subset.
<table style="width:25%">
<tr>
<td>domain</td>
<td>count</td>
</tr>
<tr>
<td>hobbies</td>
<td>120</td>
</tr>
<tr>
<td>daily_dialogue</td>
<td>117</td>
</tr>
<tr>
<td>government</td>
<td>116</td>
</tr>
<tr>
<td>geography</td>
<td>114</td>
</tr>
<tr>
<td>sports</td>
<td>100</td>
</tr>
<tr>
<td>entertainment</td>
<td>97</td>
</tr>
<tr>
<td>history</td>
<td>97</td>
</tr>
<tr>
<td>legal</td>
<td>96</td>
</tr>
<tr>
<td>arts</td>
<td>95</td>
</tr>
<tr>
<td>college_life</td>
<td>94</td>
</tr>
<tr>
<td>tourism</td>
<td>91</td>
</tr>
<tr>
<td>school_life</td>
<td>87</td>
</tr>
<tr>
<td>insurance</td>
<td>82</td>
</tr>
<tr>
<td>culture</td>
<td>73</td>
</tr>
<tr>
<td>healthcare</td>
<td>67</td>
</tr>
<tr>
<td>banking</td>
<td>57</td>
</tr>
<tr>
<td>total</td>
<td>1503</td>
</tr>
</table>
Please refer to the `Appendix E: Dataset Card` of the [preprint](https://arxiv.org/abs/2305.16307) on detailed description of dataset curation, annotation and quality control process.
### Dataset Structure
#### Dataset Fields
- `id`: Row number for the data entry, starting at 1.
- `doc_id`: Unique identifier of the conversation.
- `sent_id`: Unique identifier of the sentence order in each conversation.
- `topic`: The specific topic of the conversation within the domain.
- `domain`: The domain of the conversation.
- `prompt`: The prompt provided to annotators to simulate the conversation.
- `scenario`: The scenario or context in which the conversation takes place.
- `speaker`: The speaker identifier in the conversation.
- `turn`: The turn within the conversation.
#### Data Instances
A sample from the `gen` split for the English language (`eng_Latn` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
"id": 1,
"doc_id": 0,
"sent_id": 1,
"topic": "Festivities",
"domain": "culture",
"prompt": "14th April a holiday",
"scenario": "Historical importance",
"speaker": 1,
"turn": 1,
"sentence": "Mom, let's go for a movie tomorrow."
}
```
When using a hyphenated language-pair configuration or the `all` configuration, data will be presented as follows:
```python
{
"id": 1,
"doc_id": 0,
"sent_id": 1,
"topic": "Festivities",
"domain": "culture",
"prompt": "14th April a holiday",
"scenario": "Historical importance",
"speaker": 1,
"turn": 1,
"sentence_eng_Latn": "Mom, let's go for a movie tomorrow.",
"sentence_hin_Deva": "माँ, चलो कल एक फिल्म देखने चलते हैं।"
}
```
#### Sample Conversation
<table>
<tr>
<td>Speaker</td>
<td>Turn</td>
</tr>
<tr>
<td>Speaker 1</td>
<td>Mom, let's go for a movie tomorrow. I don't have to go to school. It is a holiday.</td>
</tr>
<tr>
<td>Speaker 2</td>
<td>Oh, tomorrow is the 14th of April right? Your dad will also have the day off from work. We can make a movie plan!</td>
</tr>
<tr>
<td>Speaker 1</td>
<td>That's a good news! Why is it a holiday though? Are all schools, colleges and offices closed tomorrow?</td>
</tr>
<tr>
<td>Speaker 2</td>
<td>It is Ambedkar Jayanti tomorrow! This day is celebrated annually to mark the birth of Dr. B. R Ambedkar. Have you heard of him?</td>
</tr>
<tr>
<td>Speaker 1</td>
<td>I think I have seen him in my History and Civics book. Is he related to our Constitution?</td>
</tr>
<tr>
<td>Speaker 2</td>
<td>Absolutely! He is known as the father of the Indian Constitution. He was a civil rights activist who played a major role in formulating the Constitution. He played a crucial part in shaping the vibrant democratic structure that India prides itself upon.</td>
</tr>
<tr>
<td></td>
<td>...</td>
</tr>
</table>
### Usage Instructions
```python
from datasets import load_dataset
# download and load all the pairs
dataset = load_dataset("ai4bharat/IN22-Conv", "all")
# download and load specific pairs
dataset = load_dataset("ai4bharat/IN22-Conv", "eng_Latn-hin_Deva")
```
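Because each row carries `doc_id`, `sent_id`, `speaker`, and `turn`, the flat sentence rows can be regrouped into full conversations for document-level evaluation. A minimal sketch (the helper name is my own, shown on hand-written rows rather than a loaded split):

```python
from collections import defaultdict

def group_conversations(rows):
    """Group flat sentence rows into conversations keyed by doc_id, ordered by sent_id."""
    convs = defaultdict(list)
    for row in rows:
        convs[row["doc_id"]].append(row)
    for turns in convs.values():
        turns.sort(key=lambda r: r["sent_id"])
    return dict(convs)

rows = [
    {"doc_id": 0, "sent_id": 2, "speaker": 2, "sentence": "Oh, tomorrow is the 14th of April right?"},
    {"doc_id": 0, "sent_id": 1, "speaker": 1, "sentence": "Mom, let's go for a movie tomorrow."},
]
convs = group_conversations(rows)
print(convs[0][0]["sentence"])  # Mom, let's go for a movie tomorrow.
```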
### Languages Covered
<table style="width: 40%">
<tr>
<td>Assamese (asm_Beng)</td>
<td>Kashmiri (Arabic) (kas_Arab)</td>
<td>Punjabi (pan_Guru)</td>
</tr>
<tr>
<td>Bengali (ben_Beng)</td>
<td>Kashmiri (Devanagari) (kas_Deva)</td>
<td>Sanskrit (san_Deva)</td>
</tr>
<tr>
<td>Bodo (brx_Deva)</td>
<td>Maithili (mai_Deva)</td>
<td>Santali (sat_Olck)</td>
</tr>
<tr>
<td>Dogri (doi_Deva)</td>
<td>Malayalam (mal_Mlym)</td>
<td>Sindhi (Arabic) (snd_Arab)</td>
</tr>
<tr>
<td>English (eng_Latn)</td>
<td>Marathi (mar_Deva)</td>
<td>Sindhi (Devanagari) (snd_Deva)</td>
</tr>
<tr>
<td>Konkani (gom_Deva)</td>
<td>Manipuri (Bengali) (mni_Beng)</td>
<td>Tamil (tam_Taml)</td>
</tr>
<tr>
<td>Gujarati (guj_Gujr)</td>
<td>Manipuri (Meitei) (mni_Mtei)</td>
<td>Telugu (tel_Telu)</td>
</tr>
<tr>
<td>Hindi (hin_Deva)</td>
<td>Nepali (npi_Deva)</td>
<td>Urdu (urd_Arab)</td>
</tr>
<tr>
<td>Kannada (kan_Knda)</td>
<td>Odia (ory_Orya)</td>
</tr>
</table>
### Citation
If you use our work, please cite:
```
@article{ai4bharat2023indictrans2,
title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan},
year = {2023},
journal = {arXiv preprint arXiv: 2305.16307}
}
```
|
qa_zre | 2023-04-05T13:37:03.000Z | [
"task_categories:question-answering",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:unknown",
"zero-shot-relation-extraction",
"region:us"
] | null | A dataset reducing relation extraction to simple reading comprehension questions | @inproceedings{levy-etal-2017-zero,
title = "Zero-Shot Relation Extraction via Reading Comprehension",
author = "Levy, Omer and
Seo, Minjoon and
Choi, Eunsol and
Zettlemoyer, Luke",
booktitle = "Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017)",
month = aug,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/K17-1034",
doi = "10.18653/v1/K17-1034",
pages = "333--342",
} | null | 1 | 169 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: QaZre
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: null
tags:
- zero-shot-relation-extraction
dataset_info:
features:
- name: relation
dtype: string
- name: question
dtype: string
- name: subject
dtype: string
- name: context
dtype: string
- name: answers
sequence: string
splits:
- name: test
num_bytes: 29410194
num_examples: 120000
- name: validation
num_bytes: 1481430
num_examples: 6000
- name: train
num_bytes: 2054954011
num_examples: 8400000
download_size: 516061636
dataset_size: 2085845635
---
# Dataset Card for QaZre
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://nlp.cs.washington.edu/zeroshot](http://nlp.cs.washington.edu/zeroshot)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 516.06 MB
- **Size of the generated dataset:** 2.09 GB
- **Total amount of disk used:** 2.60 GB
### Dataset Summary
A dataset reducing relation extraction to simple reading comprehension questions
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 516.06 MB
- **Size of the generated dataset:** 2.09 GB
- **Total amount of disk used:** 2.60 GB
An example of 'validation' looks as follows.
```
{
"answers": [],
"context": "answer",
"question": "What is XXX in this question?",
"relation": "relation_name",
"subject": "Some entity Here is a bit of context which will explain the question in some way"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `relation`: a `string` feature.
- `question`: a `string` feature.
- `subject`: a `string` feature.
- `context`: a `string` feature.
- `answers`: a `list` of `string` features.
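An empty `answers` list — as in the validation sample above — can be treated as a negative example, i.e. a context that does not express the queried relation. A minimal check, assuming that convention:

```python
def is_negative(example):
    """Treat an example with no answer spans as a negative (relation not expressed in the context)."""
    return len(example["answers"]) == 0

print(is_negative({"answers": []}))             # True
print(is_negative({"answers": ["Vancouver"]}))  # False
```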
### Data Splits
| name | train | validation | test |
|---------|--------:|-----------:|-------:|
| default | 8400000 | 6000 | 120000 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Unknown.
### Citation Information
```
@inproceedings{levy-etal-2017-zero,
title = "Zero-Shot Relation Extraction via Reading Comprehension",
author = "Levy, Omer and
Seo, Minjoon and
Choi, Eunsol and
Zettlemoyer, Luke",
booktitle = "Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017)",
month = aug,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/K17-1034",
doi = "10.18653/v1/K17-1034",
pages = "333--342",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset. |
imodels/credit-card | 2022-08-14T15:37:54.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"interpretability",
"fairness",
"medicine",
"region:us"
] | imodels | null | null | null | 3 | 169 | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: credit-card
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- interpretability
- fairness
- medicine
task_categories:
- tabular-classification
task_ids: []
---
Port of the credit-card dataset from UCI (link [here](https://www.kaggle.com/datasets/uciml/default-of-credit-card-clients-dataset)). See details there and use carefully.
Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb).
The target is the binary outcome `default.payment.next.month`.
### Sample usage
Load the data:
```
from datasets import load_dataset
import pandas as pd

dataset = load_dataset("imodels/credit-card")
df = pd.DataFrame(dataset['train'])
X = df.drop(columns=['default.payment.next.month'])
y = df['default.payment.next.month'].values
```
Fit a model:
```
import imodels
import numpy as np
m = imodels.FIGSClassifier(max_rules=5)
m.fit(X, y)
print(m)
```
Evaluate:
```
df_test = pd.DataFrame(dataset['test'])
X_test = df_test.drop(columns=['default.payment.next.month'])
y_test = df_test['default.payment.next.month'].values
print('accuracy', np.mean(m.predict(X_test) == y_test))
``` |
ashraq/ott-qa-20k | 2022-10-21T09:06:25.000Z | [
"region:us"
] | ashraq | null | null | null | 3 | 169 | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: header
sequence: string
- name: data
sequence:
sequence: string
- name: section_title
dtype: string
- name: section_text
dtype: string
- name: uid
dtype: string
- name: intro
dtype: string
splits:
- name: train
num_bytes: 41038376
num_examples: 20000
download_size: 23329221
dataset_size: 41038376
---
# Dataset Card for "ott-qa-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The data was obtained from [here](https://github.com/wenhuchen/OTT-QA) |
antolin/python-150_interduplication | 2023-09-18T08:35:19.000Z | [
"region:us"
] | antolin | null | null | null | 1 | 169 | ---
dataset_info:
features:
- name: id_within_dataset
dtype: int64
- name: snippet
dtype: string
- name: tokens
sequence: string
- name: nl
dtype: string
- name: split_within_dataset
dtype: string
- name: is_duplicated
dtype: bool
splits:
- name: train
num_bytes: 41652808.061011426
num_examples: 40871
- name: test
num_bytes: 13890723.835276498
num_examples: 13630
- name: valid
num_bytes: 13861169.103712078
num_examples: 13601
download_size: 30553251
dataset_size: 69404701.0
---
# Dataset Card for "python-150_interduplication"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
species_800 | 2023-06-16T11:33:29.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | We have developed an efficient algorithm and implementation of a dictionary-based approach to named entity recognition,
which we here use to identify names of species and other taxa in text. The tool, SPECIES, is more than an order of
magnitude faster than, and as accurate as, existing tools. The precision and recall were assessed both on an existing gold-standard
corpus and on a new corpus of 800 abstracts, which were manually annotated after the development of the tool. The corpus
comprises abstracts from journals selected to represent many taxonomic groups, which gives insights into which types of
organism names are hard to detect and which are easy. Finally, we have tagged organism names in the entire Medline database
and developed a web resource, ORGANISMS, that makes the results accessible to the broad community of biologists. | @article{pafilis2013species,
title={The SPECIES and ORGANISMS resources for fast and accurate identification of taxonomic names in text},
author={Pafilis, Evangelos and Frankild, Sune P and Fanini, Lucia and Faulwetter, Sarah and Pavloudi, Christina and Vasileiadou, Aikaterini and Arvanitidis, Christos and Jensen, Lars Juhl},
journal={PloS one},
volume={8},
number={6},
pages={e65390},
year={2013},
publisher={Public Library of Science}
} | null | 2 | 168 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: species800
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B
'2': I
config_name: species_800
splits:
- name: train
num_bytes: 2579096
num_examples: 5734
- name: validation
num_bytes: 385756
num_examples: 831
- name: test
num_bytes: 737760
num_examples: 1631
download_size: 18204624
dataset_size: 3702612
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SPECIES](https://species.jensenlab.org/)
- **Repository:**
- **Paper:** https://doi.org/10.1371/journal.pone.0065390
- **Leaderboard:**
- **Point of Contact:** [Lars Juhl Jensen](mailto:lars.juhl.jensen@cpr.ku.dk)
### Dataset Summary
S800 Corpus: a novel abstract-based manually annotated corpus. S800 comprises 800 PubMed abstracts in which organism mentions were identified and mapped to the corresponding NCBI Taxonomy identifiers.
To increase the corpus taxonomic mention diversity the S800 abstracts were collected by selecting 100 abstracts from the following 8 categories: bacteriology, botany, entomology, medicine, mycology, protistology, virology and zoology. S800 has been annotated with a focus at the species level; however, higher taxa mentions (such as genera, families and orders) have also been considered.
The Species-800 dataset was pre-processed and split based on the dataset of Pyysalo (https://github.com/spyysalo/s800).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`).
## Dataset Structure
### Data Instances
```
{'id': '0',
'tokens': ['Methanoregula',
'formicica',
'sp',
'.',
'nov',
'.',
',',
'a',
'methane',
'-',
'producing',
'archaeon',
'isolated',
'from',
'methanogenic',
'sludge',
'.'],
'ner_tags': [1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no species mentioned, `1` signals the first token of a species and `2` the subsequent tokens of the species.
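The tag scheme above (`1` marks the first token of a species mention, `2` a continuation, `0` outside) can be decoded back into mention strings. A minimal sketch using the sample instance (the helper name is mine):

```python
def extract_mentions(tokens, ner_tags):
    """Collect species mentions: tag 1 starts a mention, tag 2 continues it, tag 0 is outside."""
    mentions, current = [], []
    for tok, tag in zip(tokens, ner_tags):
        if tag == 1:
            if current:
                mentions.append(" ".join(current))
            current = [tok]
        elif tag == 2 and current:
            current.append(tok)
        else:
            if current:
                mentions.append(" ".join(current))
            current = []
    if current:
        mentions.append(" ".join(current))
    return mentions

tokens = ["Methanoregula", "formicica", "sp", ".", "nov", ".", ",", "a", "methane"]
tags = [1, 2, 0, 0, 0, 0, 0, 0, 0]
print(extract_mentions(tokens, tags))  # ['Methanoregula formicica']
```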
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The species-level S800 corpus is subject to Medline restrictions.
### Citation Information
Original data:
```
@article{pafilis2013species,
title={The SPECIES and ORGANISMS resources for fast and accurate identification of taxonomic names in text},
author={Pafilis, Evangelos and Frankild, Sune P and Fanini, Lucia and Faulwetter, Sarah and Pavloudi, Christina and Vasileiadou, Aikaterini and Arvanitidis, Christos and Jensen, Lars Juhl},
journal={PloS one},
volume={8},
number={6},
pages={e65390},
year={2013},
publisher={Public Library of Science}
}
```
Source data of this dataset:
```
@article{10.1093/bioinformatics/btz682,
author = {Lee, Jinhyuk and Yoon, Wonjin and Kim, Sungdong and Kim, Donghyeon and Kim, Sunkyu and So, Chan Ho and Kang, Jaewoo},
title = "{BioBERT: a pre-trained biomedical language representation model for biomedical text mining}",
journal = {Bioinformatics},
volume = {36},
number = {4},
pages = {1234-1240},
year = {2019},
month = {09},
issn = {1367-4803},
doi = {10.1093/bioinformatics/btz682},
url = {https://doi.org/10.1093/bioinformatics/btz682},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/36/4/1234/48983216/bioinformatics\_36\_4\_1234.pdf},
}
```
and
```
https://github.com/spyysalo/s800
```
### Contributions
Thanks to [@edugp](https://github.com/edugp) for adding this dataset. |
animelover/danbooru2022 | 2023-07-13T05:49:37.000Z | [
"task_categories:text-to-image",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"doi:10.57967/hf/0425",
"region:us"
] | animelover | null | null | null | 92 | 168 | ---
license: cc0-1.0
task_categories:
- text-to-image
language:
- en
pretty_name: Danbooru 2022
size_categories:
- 1M<n<10M
---
Images collected from the [danbooru website](https://danbooru.donmai.us/).
Post id range: 6019085 - 1019085
About 4M+ images.
All images whose shortest edge is greater than 768 pixels are scaled so that their shortest edge equals 768.
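That scaling rule can be expressed as a pure size computation (a sketch of the rule as stated above; the actual preprocessing code is not published):

```python
def target_size(width, height, shortest=768):
    """Return (width, height) after scaling so the shortest edge equals `shortest`,
    leaving images whose shortest edge is already <= `shortest` untouched."""
    short_edge = min(width, height)
    if short_edge <= shortest:
        return width, height
    scale = shortest / short_edge
    return round(width * scale), round(height * scale)

print(target_size(1024, 1536))  # (768, 1152)
print(target_size(640, 640))    # (640, 640)
```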
Some images in this range were not downloaded because they:
- need a gold account
- were removed
- exceed 25,000,000 pixels
- have one of the tags ['furry', "realistic", "3d", "1940s_(style)","1950s_(style)","1960s_(style)","1970s_(style)","1980s_(style)","1990s_(style)","retro_artstyle","screentones","pixel_art","magazine_scan","scan"]
|
CM/codexglue_code2text_python | 2023-04-22T01:52:50.000Z | [
"region:us"
] | CM | null | null | null | 2 | 168 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 813663148
num_examples: 251820
- name: validation
num_bytes: 46888564
num_examples: 13914
- name: test
num_bytes: 50659688
num_examples: 14918
download_size: 325303743
dataset_size: 911211400
---
# Dataset Card for "codexglue_code2text_python"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SotirisLegkas/clickbait | 2023-06-23T11:30:01.000Z | [
"region:us"
] | SotirisLegkas | null | null | null | 0 | 168 | Entry not found |
sentiment140 | 2023-04-05T13:40:06.000Z | [
"language:en",
"region:us"
] | null | Sentiment140 consists of Twitter messages with emoticons, which are used as noisy labels for
sentiment classification. For more detailed information please refer to the paper. | @article{go2009twitter,
title={Twitter sentiment classification using distant supervision},
author={Go, Alec and Bhayani, Richa and Huang, Lei},
journal={CS224N project report, Stanford},
volume={1},
number={12},
pages={2009},
year={2009}
} | null | 8 | 167 | ---
language:
- en
paperswithcode_id: sentiment140
pretty_name: Sentiment140
train-eval-index:
- config: sentiment140
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
sentiment: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
dataset_info:
features:
- name: text
dtype: string
- name: date
dtype: string
- name: user
dtype: string
- name: sentiment
dtype: int32
- name: query
dtype: string
config_name: sentiment140
splits:
- name: test
num_bytes: 73365
num_examples: 498
- name: train
num_bytes: 225742946
num_examples: 1600000
download_size: 81363704
dataset_size: 225816311
---
# Dataset Card for "sentiment140"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://help.sentiment140.com/home](http://help.sentiment140.com/home)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 81.36 MB
- **Size of the generated dataset:** 225.82 MB
- **Total amount of disk used:** 307.18 MB
### Dataset Summary
Sentiment140 consists of Twitter messages with emoticons, which are used as noisy labels for
sentiment classification. For more detailed information please refer to the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### sentiment140
- **Size of downloaded dataset files:** 81.36 MB
- **Size of the generated dataset:** 225.82 MB
- **Total amount of disk used:** 307.18 MB
An example of 'train' looks as follows.
```
{
"date": "23-04-2010",
"query": "NO_QUERY",
"sentiment": 3,
"text": "train message",
"user": "train user"
}
```
### Data Fields
The data fields are the same among all splits.
#### sentiment140
- `text`: a `string` feature.
- `date`: a `string` feature.
- `user`: a `string` feature.
- `sentiment`: a `int32` feature.
- `query`: a `string` feature.
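The `sentiment` field is an integer code. By the Sentiment140 convention (from the original project documentation, not stated on this card), 0 = negative, 2 = neutral, and 4 = positive; a minimal sketch of decoding it:

```python
# Conventional Sentiment140 polarity codes (0/2/4); verify against the
# original dataset documentation before relying on this mapping.
SENTIMENT_LABELS = {0: "negative", 2: "neutral", 4: "positive"}


def decode_sentiment(example: dict) -> dict:
    """Attach a human-readable label to a sentiment140 example."""
    example["label"] = SENTIMENT_LABELS.get(example["sentiment"], "unknown")
    return example


row = {"text": "train message", "sentiment": 4}
print(decode_sentiment(row)["label"])  # positive
```

A function of this shape can be passed directly to `dataset.map(...)` after loading the dataset with the `datasets` library.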
### Data Splits
| name | train |test|
|------------|------:|---:|
|sentiment140|1600000| 498|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{go2009twitter,
title={Twitter sentiment classification using distant supervision},
author={Go, Alec and Bhayani, Richa and Huang, Lei},
journal={CS224N project report, Stanford},
volume={1},
number={12},
pages={2009},
year={2009}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
ScandEval/angry-tweets-mini | 2023-07-05T09:52:07.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:da",
"license:cc-by-4.0",
"region:us"
] | ScandEval | null | null | null | 0 | 167 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- da
size_categories:
- 1K<n<10K
--- |
kensho/spgispeech | 2022-10-21T14:46:30.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:2104.02014",
"region:us"
] | kensho | The SPGISpeech corpus is derived from company earnings calls manually transcribed by S&P Global, Inc. according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The basic unit of SPGISpeech is a pair consisting of a 5 to 15 second long 16 bit, 16kHz mono wav audio file and its transcription. | @ARTICLE{2021arXiv210402014O,
author = {{O'Neill}, Patrick K. and {Lavrukhin}, Vitaly and {Majumdar}, Somshubra and {Noroozi}, Vahid and {Zhang}, Yuekai and {Kuchaiev}, Oleksii and {Balam}, Jagadeesh and {Dovzhenko}, Yuliya and {Freyberg}, Keenan and {Shulman}, Michael D. and {Ginsburg}, Boris and {Watanabe}, Shinji and {Kucsko}, Georg},
title = "{SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing},
year = 2021,
month = apr,
eid = {arXiv:2104.02014},
pages = {arXiv:2104.02014},
archivePrefix = {arXiv},
eprint = {2104.02014},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210402014O},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
} | null | 19 | 167 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: SpgiSpeech
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
extra_gated_prompt: |-
Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.
If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.
Section 1 – THE CONTENT
1.1 The Content is provided for academic research purposes and internal use only and must not be used to: assemble or create a database; construct or facilitate the construction of products which compete with the Content; identify or attempt to identify or contact any individual; or link to another dataset.
The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.
1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.
The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.
1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.
1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.
1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.
1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.
Section 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY
2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.
2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.
2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.
Section 3 - PRIVACY
3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).
3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).
3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.
3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.
3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.
3.6 Encryption. Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.
3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@kensho.com or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.
Section 4 - MISCELLANEOUS
4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.
4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.
4.3 Governing Law; Forum. These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.
4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.
4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.
extra_gated_fields:
Full name: text
Email: text
Institution: text
I accept the Terms of Usage: checkbox
---
# Dataset Card for SPGISpeech
<img src="https://s3.amazonaws.com/moonup/production/uploads/1661776840270-62e049fe81d9ca6484eff137.png" alt="SPGISpeech Logo" width="200"/>
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Terms of Usage](#terms-of-usage)
## Dataset Description
- **Homepage:** https://datasets.kensho.com/datasets/spgispeech
- **Repository:**
- **Paper:** https://arxiv.org/abs/2104.02014
- **Leaderboard:**
- **Point of Contact:** [data@kensho.com](mailto:data@kensho.com )
### Dataset Summary
SPGISpeech (rhymes with “squeegee-speech”) is a large-scale transcription dataset, freely available for academic research.
SPGISpeech is a corpus of 5,000 hours of professionally-transcribed financial audio.
SPGISpeech contains a broad cross-section of L1 and L2 English accents,
strongly varying audio quality, and both spontaneous and narrated speech. The transcripts have each been cross-checked
by multiple professional editors for high accuracy and are fully formatted, including capitalization, punctuation, and
denormalization of non-standard words.
SPGISpeech consists of 5,000 hours of recorded company earnings calls and their respective transcriptions.
The original calls were split into slices ranging from 5 to 15 seconds in length to allow easy training for
speech recognition systems. Calls represent a broad cross-section of international business English;
SPGISpeech contains approximately 50,000 speakers, one of the largest numbers of any speech corpus,
and offers a variety of L1 and L2 English accents. The format of each WAV file is single channel, 16kHz, 16 bit audio.
### Example Usage
The training split has three configurations of various sizes: S, M, and L. See the [Data Splits](#data-splits)
section for more information. To download the S configuration:
```python
from datasets import load_dataset
spgi = load_dataset("kensho/spgispeech", "S", use_auth_token=True)
# see structure
print(spgi)
# load audio sample on the fly
audio_input = spgi["train"][0]["audio"] # first decoded audio sample
transcription = spgi["train"][0]["text"] # first transcription
```
It is possible to download only the development or test data:
```python
spgi_dev = load_dataset("kensho/spgispeech", "dev", use_auth_token=True)
spgi_test = load_dataset("kensho/spgispeech", "test", use_auth_token=True)
```
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
The model is presented with an audio file and asked to transcribe the audio file to written text.
The most common evaluation metric is the word error rate (WER).
### Languages
SPGISpeech contains audio and transcription data in business English and offers a variety of L1 and L2 accents.
## Dataset Structure
### Data Instances
```python
{
'wav_filename': '32bcf9c9dc707fb61a04290e296f31eb/99.wav',
'audio': {
'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/c7082e2bd5b.../dev_part_2/32bcf9c9dc707fb61a04290e296f31eb/99.wav',
'array': array([-0.00039673, -0.00057983, -0.00057983, ..., -0.0007019 ,
-0.00027466, 0.00021362], dtype=float32),
'sampling_rate': 16000
},
'wav_filesize': 292844,
'transcript': 'This is proving to be true, and through focused execution we are on track to exceed our targeted savings in 2017. As a reminder,'
}
```
### Data Fields
* wav_filename (string) - audio filename (includes parent directory).
* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate.
In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio
inside its archive (as files are not downloaded and extracted locally).
* wav_filesize (int) - size of the file in bytes.
* transcript (string) - transcription of the file.
### Data Splits
The dataset has three splits: train, evaluation (dev) and test. The train split has three configurations of various sizes:
S, M, L. Larger subsets are supersets of smaller subsets, e.g., the L subset contains all the data from the M subset.
#### Transcribed Subsets Size
| Subset | Size |
|:------:|:------:|
| S | 22 GB |
| M | 107 GB |
| L | 530 GB |
| dev | 11 GB |
| test | 11 GB |
## Dataset Creation
### Curation Rationale
To augment the open-source speech-to-text datasets available for R&D.
### Source Data
The dataset contains S&P Global company earnings calls.
#### Initial Data Collection and Normalization
Public earnings calls spanning the time period from 2007-2020 were converted to 16kHz, 16-bit audio.
#### Who are the source language producers?
English speakers with a diverse selection of accents, including non-native ones (L2), producing both
spontaneous and narrated speech.
### Annotations
#### Annotation process
Data is orthographically transcribed according to a professional style guide detailing conventions for capitalization, punctuation,
denormalization of non-standard words and transcription of disfluencies in spontaneous speech.
The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted.
Full earnings calls run 30-60 minutes and are typically
transcribed as whole units, without internal timestamps. In order to produce short audio slices suitable for STT
training, the files were segmented with [Gentle](https://lowerquality.com/gentle/), a double-pass forced aligner,
with the beginning and end of each slice of audio imputed by voice activity detection with
[py-webrtc](https://github.com/wiseman/py-webrtcvad).
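The actual pipeline used Gentle plus py-webrtcvad. Purely as an illustration of frame-level voice activity detection, here is a simplified, self-contained energy-threshold stand-in over 16-bit PCM audio (the 30 ms frame size matches webrtcvad conventions, but the threshold is an arbitrary placeholder, not a value used for SPGISpeech):

```python
import struct

FRAME_MS = 30
SAMPLE_RATE = 16_000
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000  # 480 samples per 30 ms frame


def frame_energy(pcm16: bytes) -> float:
    """Mean squared amplitude of one frame of 16-bit little-endian PCM."""
    samples = struct.unpack(f"<{len(pcm16) // 2}h", pcm16)
    return sum(s * s for s in samples) / max(len(samples), 1)


def active_frames(audio: bytes, threshold: float = 1e6) -> list[bool]:
    """Flag each 30 ms frame as active (True) or silent (False)."""
    step = SAMPLES_PER_FRAME * 2  # 2 bytes per 16-bit sample
    return [
        frame_energy(audio[i:i + step]) > threshold
        for i in range(0, len(audio) - step + 1, step)
    ]


silence = b"\x00\x00" * SAMPLES_PER_FRAME
loud = struct.pack(f"<{SAMPLES_PER_FRAME}h", *([20_000] * SAMPLES_PER_FRAME))
print(active_frames(silence + loud))  # [False, True]
```

Runs of active frames, bounded by the forced-alignment word timings, are what yield the 5-15 second slices described above.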
#### Who are the annotators?
Earning calls are manually transcribed by S&P Global, Inc.
### Personal and Sensitive Information
Though earnings calls are public, we nevertheless identified full names with the spaCy `en_core_web_lg` model.
We withheld samples containing names that appeared fewer than ten times (7% of total). Full
names appearing ten times or more in the data were considered to be public figures and were retained.
This necessarily incomplete approach to named entity recognition was complemented with randomized manual spot
checks which uncovered no false negatives missed by the automated approach.
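The approach above amounts to a frequency filter over detected names. A simplified sketch, assuming an NER pass has already attached a `names` list to each sample (that field is hypothetical, for illustration only; the threshold of ten comes from this card):

```python
from collections import Counter

MIN_NAME_COUNT = 10  # names seen fewer than this many times are withheld


def filter_samples(samples: list[dict]) -> list[dict]:
    """Keep only samples whose detected names all appear >= MIN_NAME_COUNT times.

    Each sample carries a precomputed `names` list, e.g. from a spaCy NER
    pass over its transcript (hypothetical field, for illustration only).
    """
    counts = Counter(name for s in samples for name in s["names"])
    return [
        s for s in samples
        if all(counts[n] >= MIN_NAME_COUNT for n in s["names"])
    ]


corpus = [{"names": ["Jane Roe"]}] * 10 + [{"names": ["John Doe"]}]
print(len(filter_samples(corpus)))  # 10
```

Samples with no detected names pass trivially, which matches the intent: only rarely-mentioned (hence likely private) individuals trigger withholding.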
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The largest issue inherent in the dataset is that the speaker distribution of SPGISpeech reflects the speaker distribution of earnings calls.
For example, close to 90% of the speakers on earnings calls are male.
### Other Known Limitations
Because of the formal language used in earnings calls, the dataset needs augmentation for training systems that transcribe informal speech.
## Additional Information
### Dataset Curators
Kensho Technologies
### Licensing Information
### Citation Information
Please cite this paper:
```bibtex
@ARTICLE{2021arXiv210402014O,
author = {{O'Neill}, Patrick K. and {Lavrukhin}, Vitaly and {Majumdar},
Somshubra and {Noroozi}, Vahid and {Zhang}, Yuekai and {Kuchaiev}, Oleksii and {Balam},
Jagadeesh and {Dovzhenko}, Yuliya and {Freyberg}, Keenan and {Shulman}, Michael D. and {Ginsburg},
Boris and {Watanabe}, Shinji and {Kucsko}, Georg},
title = "{SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing},
year = 2021,
month = apr,
eid = {arXiv:2104.02014},
pages = {arXiv:2104.02014},
archivePrefix = {arXiv},
eprint = {2104.02014},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210402014O},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
```
### Contributions
Thanks to [@sanchit-gandhi](https://github.com/sanchit-gandhi), [@patrickvonplaten](https://github.com/patrickvonplaten),
and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
## Terms of Usage
Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.
If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.
Section 1 – THE CONTENT
1.1 The Content is provided for academic research purposes and internal use only and must not be used to:
- assemble or create a database;
- construct or facilitate the construction of products which compete with the Content;
- identify or attempt to identify or contact any individual; or link to another dataset.
The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.
1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.
The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.
1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.
1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.
1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.
1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.
Section 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY
2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.
2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.
2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.
Section 3 - PRIVACY
3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).
3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).
3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.
3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.
3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.
3.6 Encryption. Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.
3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@kensho.com or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.
Section 4 - MISCELLANEOUS
4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.
4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.
4.3 Governing Law; Forum. These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.
4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.
4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.
|
MoritzLaurer/multilingual-NLI-26lang-2mil7 | 2022-08-22T21:40:14.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:machinetranslation",
"size_categories:1M<n<5",
"source_datasets:multi_nli",
"source_datasets:anli",
"source_datasets:fever",
"source_datasets:lingnli",
"source_datasets:alisawuffles/WANLI",
"language:multilingual",
"language:zh",
"language:ja",
"language:ar",
"language:ko",
"language:de",
"language:fr",
"language:es",
"language:pt",
"language:hi",
"language:id",
"language:it",
"language:tr",
"language:ru",
"language:bn",
"language:ur",
"language:mr",
"language:ta",
"language:vi",
"language:fa",
"language:pl",
"language:uk",
"language:nl",
"language:sv",
"language:he",
"language:sw",
"language:ps",
"arxiv:2104.07179",
"region:us"
] | MoritzLaurer | null | null | null | 28 | 167 | ---
annotations_creators:
- crowdsourced
language_creators:
- machinetranslation
size_categories:
- 1M<n<5
source_datasets:
- multi_nli
- anli
- fever
- lingnli
- alisawuffles/WANLI
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
language:
- multilingual
- zh
- ja
- ar
- ko
- de
- fr
- es
- pt
- hi
- id
- it
- tr
- ru
- bn
- ur
- mr
- ta
- vi
- fa
- pl
- uk
- nl
- sv
- he
- sw
- ps
---
# Datasheet for the dataset: multilingual-NLI-26lang-2mil7
## Dataset Summary
This dataset contains 2 730 000 NLI text pairs in 26 languages spoken by more than 4 billion people. The dataset can be used to train models for multilingual NLI (Natural Language Inference) or zero-shot classification. The dataset is based on the English datasets [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [ANLI](https://huggingface.co/datasets/anli), [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) and [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) and was created using the latest open-source machine translation models.
The dataset is designed to complement the established multilingual [XNLI](https://huggingface.co/datasets/xnli) dataset. XNLI contains older machine translations of the MultiNLI dataset from 2018 for 14 languages, as well as human translations of 2490 texts for validation and 5010 texts for testing per language. multilingual-NLI-26lang-2mil7 is sourced from 5 different NLI datasets and contains 105 000 machine translated texts for each of 26 languages, leading to 2 730 000 NLI text pairs.
The release of the dataset is accompanied by the fine-tuned [mDeBERTa-v3-base-xnli-multilingual-nli-2mil7](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) model, which can be used for NLI or zero-shot classification in 100 languages.
## Dataset Creation
The languages in the dataset are: ['ar', 'bn', 'de', 'es', 'fa', 'fr', 'he', 'hi', 'id', 'it', 'ja', 'ko', 'mr', 'nl', 'pl', 'ps', 'pt', 'ru', 'sv', 'sw', 'ta', 'tr', 'uk', 'ur', 'vi', 'zh'] (see [ISO language codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)) plus the original English texts. The languages were chosen based on two criteria: (1) they are included in the list of the [20 most spoken languages](https://en.wikipedia.org/wiki/List_of_languages_by_total_number_of_speakers) (excluding Telugu and Nigerian Pidgin, for which no machine translation model was available); or (2) they are spoken in politically and economically important countries such as the [G20](https://en.wikipedia.org/wiki/G20) or Iran and Israel.
For each of the 26 languages, a different random sample of 25 000 hypothesis-premise pairs was taken from each of the following four datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli) (392 702 texts in total), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md) (196 805 texts), [ANLI](https://huggingface.co/datasets/anli) (162 865 texts), [WANLI](https://huggingface.co/datasets/alisawuffles/WANLI) (102 885 texts). Moreover, a sample of 5000 texts was taken from [LingNLI](https://arxiv.org/pdf/2104.07179.pdf) (29 985 texts) given its smaller total size. This leads to a different random sample of 105 000 source texts per target language with a diverse distribution of data from 5 different NLI datasets.
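A minimal sketch of this per-language sampling step, with plain lists standing in for the loaded datasets; the dictionary keys, function name, and seeding scheme are illustrative, not the author's actual code:

```python
import random

def sample_source_texts(datasets, seed):
    """Draw 25,000 pairs each from MultiNLI, Fever-NLI, ANLI and WANLI,
    plus 5,000 from LingNLI, yielding 105,000 source texts for one
    target language."""
    rng = random.Random(seed)  # a per-language seed gives a different sample
    quotas = {"mnli": 25_000, "fever": 25_000, "anli": 25_000,
              "wanli": 25_000, "ling": 5_000}
    sample = []
    for name, quota in quotas.items():
        sample.extend(rng.sample(datasets[name], quota))
    return sample
```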
Each sample was then machine translated using the latest open-source machine translation models available for the respective language:
- [opus-mt-tc-big models](https://huggingface.co/models?sort=downloads&search=opus-mt-tc-big) were available for English to ['ar', 'es', 'fr', 'it', 'pt', 'tr']
- [opus-mt-models](https://huggingface.co/models?sort=downloads&search=opus-mt) were available for English to ['de', 'he', 'hi', 'id', 'mr', 'nl', 'ru', 'sv', 'sw', 'uk', 'ur', 'vi', 'zh']
- [m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) was used for the remaining languages ['bn', 'fa', 'ja', 'ko', 'pl', 'ps', 'ta']
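The per-language model choice above can be summarized as a simple lookup. The Hub checkpoint names below follow the usual Helsinki-NLP naming convention, but the exact identifier for each language pair is an assumption for illustration:

```python
# Language sets copied from the lists above.
OPUS_TC_BIG = {"ar", "es", "fr", "it", "pt", "tr"}
OPUS_MT = {"de", "he", "hi", "id", "mr", "nl", "ru", "sv", "sw",
           "uk", "ur", "vi", "zh"}

def translation_model(lang):
    """Return an (assumed) Hub checkpoint name for English -> `lang`."""
    if lang in OPUS_TC_BIG:
        return f"Helsinki-NLP/opus-mt-tc-big-en-{lang}"
    if lang in OPUS_MT:
        return f"Helsinki-NLP/opus-mt-en-{lang}"
    return "facebook/m2m100_1.2B"  # bn, fa, ja, ko, pl, ps, ta
```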
## Dataset Structure
### Data Splits
The dataset contains 130 splits (26 * 5), one for each language-dataset pair following the format '{language-iso}_{dataset}'. For example, split 'zh_mnli' contains the Chinese translation of 25 000 texts from the MultiNLI dataset etc.
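The naming scheme can be reproduced mechanically. The language codes below are copied from the card; the dataset keys other than `mnli` ('fever', 'anli', 'ling', 'wanli') are guesses extrapolated from the 'zh_mnli' example:

```python
LANGS = ['ar', 'bn', 'de', 'es', 'fa', 'fr', 'he', 'hi', 'id', 'it', 'ja',
         'ko', 'mr', 'nl', 'pl', 'ps', 'pt', 'ru', 'sv', 'sw', 'ta', 'tr',
         'uk', 'ur', 'vi', 'zh']
DATASETS = ['mnli', 'fever', 'anli', 'ling', 'wanli']  # keys partly assumed

# One split per (language, source dataset) pair: 26 * 5 = 130 splits.
SPLITS = [f"{lang}_{ds}" for lang in LANGS for ds in DATASETS]
```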
### Data Fields
- `premise_original`: The original premise from the English source dataset
- `hypothesis_original`: The original hypothesis from the English source dataset
- `label`: The classification label, with possible values `entailment` (0), `neutral` (1), `contradiction` (2).
- `premise`: The machine translated premise in the target language
- `hypothesis`: The machine translated hypothesis in the target language
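For convenience when training or evaluating, the integer labels map to names as in this small illustrative helper (the variable names are not part of the dataset):

```python
# Label scheme as documented above.
ID2LABEL = {0: "entailment", 1: "neutral", 2: "contradiction"}
LABEL2ID = {name: i for i, name in ID2LABEL.items()}

def label_name(example):
    """Return the string label for one dataset example dict."""
    return ID2LABEL[example["label"]]
```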
### Example of a data instance:
```
{
"premise_original": "I would not be surprised if the top priority for the Navy was to build a new carrier.",
"hypothesis_original": "The top priority for the Navy is to build a new carrier.",
"label": 1,
"premise": "Ich würde mich nicht wundern, wenn die oberste Priorität für die Navy wäre, einen neuen Träger zu bauen.",
"hypothesis": "Die oberste Priorität für die Navy ist es, einen neuen Träger zu bauen."
}
```
## Limitations and bias
Machine translation is not as good as human translation. Machine translation can introduce inaccuracies that can be problematic for complex tasks like NLI. In an ideal world, original NLI data would be available for many languages. Given the lack of NLI data, using the latest open-source machine translation seems like a good solution to improve multilingual NLI. You can use the Hugging Face data viewer to inspect the data and verify the translation quality for your language of interest. Note that grammatical errors are less problematic for zero-shot use-cases as grammar is less relevant for these applications.
## Other
The machine translation for the full dataset took roughly 100 hours on an A100 GPU, especially due to the size of the [m2m100_1.2B](https://huggingface.co/facebook/m2m100_1.2B) model.
## Ideas for cooperation or questions?
For updates on new models and datasets, follow me on [Twitter](https://twitter.com/MoritzLaurer).
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).
### Citation Information
If the dataset is useful for you, please cite the following article:
```
@article{laurer_less_2022,
title = {Less {Annotating}, {More} {Classifying} – {Addressing} the {Data} {Scarcity} {Issue} of {Supervised} {Machine} {Learning} with {Deep} {Transfer} {Learning} and {BERT} - {NLI}},
url = {https://osf.io/74b8k},
language = {en-us},
urldate = {2022-07-28},
journal = {Preprint},
author = {Laurer, Moritz and Atteveldt, Wouter van and Casas, Andreu Salleras and Welbers, Kasper},
month = jun,
year = {2022},
note = {Publisher: Open Science Framework},
}
```
|
ywchoi/pubmed_abstract_1 | 2022-09-13T00:56:17.000Z | [
"region:us"
] | ywchoi | null | null | null | 0 | 167 | Entry not found |
bigbio/anat_em | 2022-12-22T15:43:16.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | bigbio | The extended Anatomical Entity Mention corpus (AnatEM) consists of 1212 documents (approx. 250,000 words) manually annotated to identify over 13,000 mentions of anatomical entities. Each annotation is assigned one of 12 granularity-based types such as Cellular component, Tissue and Organ, defined with reference to the Common Anatomy Reference Ontology. | @article{pyysalo2014anatomical,
title={Anatomical entity mention recognition at literature scale},
author={Pyysalo, Sampo and Ananiadou, Sophia},
journal={Bioinformatics},
volume={30},
number={6},
pages={868--875},
year={2014},
publisher={Oxford University Press}
} | null | 0 | 167 |
---
language:
- en
bigbio_language:
- English
license: cc-by-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_SA_3p0
pretty_name: AnatEM
homepage: http://nactem.ac.uk/anatomytagger/#AnatEM
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for AnatEM
## Dataset Description
- **Homepage:** http://nactem.ac.uk/anatomytagger/#AnatEM
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The extended Anatomical Entity Mention corpus (AnatEM) consists of 1212 documents (approx. 250,000 words) manually annotated to identify over 13,000 mentions of anatomical entities. Each annotation is assigned one of 12 granularity-based types such as Cellular component, Tissue and Organ, defined with reference to the Common Anatomy Reference Ontology.
## Citation Information
```
@article{pyysalo2014anatomical,
title={Anatomical entity mention recognition at literature scale},
author={Pyysalo, Sampo and Ananiadou, Sophia},
journal={Bioinformatics},
volume={30},
number={6},
pages={868--875},
year={2014},
publisher={Oxford University Press}
}
```
|
kuanhuggingface/promptTTS_encodec_v2_small | 2023-06-12T05:45:16.000Z | [
"region:us"
] | kuanhuggingface | null | null | null | 0 | 167 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 2975164369
num_examples: 47270
- name: validation
num_bytes: 97855975
num_examples: 1349
- name: test
num_bytes: 80754157
num_examples: 1350
download_size: 437609990
dataset_size: 3153774501
---
# Dataset Card for "promptTTS_encodec_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Amani27/massive_translation_dataset | 2023-07-25T14:54:44.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:de",
"language:es",
"language:hi",
"language:fr",
"language:it",
"language:ar",
"language:nl",
"language:ja",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | Amani27 | null | null | null | 3 | 167 | ---
configs:
- config_name: default
data_files:
- split: train
path: "train.csv"
- split: validation
path: "validation.csv"
- split: test
path: "test.csv"
license: cc-by-4.0
task_categories:
- translation
language:
- en
- de
- es
- hi
- fr
- it
- ar
- nl
- ja
- pt
size_categories:
- 10K<n<100K
---
# Dataset Card for Massive Dataset for Translation
### Dataset Summary
This dataset is derived from the AmazonScience/MASSIVE dataset for translation tasks.
### Supported Tasks and Leaderboards
Translation
### Languages
1. English (en_US)
2. German (de_DE)
3. Hindi (hi_IN)
4. Spanish (es_ES)
5. French (fr_FR)
6. Italian (it_IT)
7. Arabic (ar_SA)
8. Dutch (nl_NL)
9. Japanese (ja_JP)
10. Portuguese (pt_PT)
|
euclaise/mqa | 2023-09-25T01:52:04.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"region:us"
] | euclaise | null | null | null | 0 | 167 | ---
dataset_info:
features:
- name: msg
dtype: string
- name: resp_correct
dtype: string
- name: resp_incorrect
sequence: string
splits:
- name: train
num_bytes: 21626021.146013975
num_examples: 23408
download_size: 18857093
dataset_size: 21626021.146013975
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
pretty_name: MultiQA
size_categories:
- 10K<n<100K
---
# MQA
Aggregation of datasets as per [here](https://huggingface.co/collections/euclaise/mqa-650f41afae507a2c7ca18b55) |
thaisum | 2022-11-18T21:51:46.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:th",
"license:mit",
"region:us"
] | null | ThaiSum is a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath,
ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs
written by journalists. | @mastersthesis{chumpolsathien_2020,
title={Using Knowledge Distillation from Keyword Extraction to Improve the Informativeness of Neural Cross-lingual Summarization},
author={Chumpolsathien, Nakhun},
year={2020},
school={Beijing Institute of Technology} | null | 7 | 166 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- th
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: ThaiSum
dataset_info:
features:
- name: title
dtype: string
- name: body
dtype: string
- name: summary
dtype: string
- name: type
dtype: string
- name: tags
dtype: string
- name: url
dtype: string
config_name: thaisum
splits:
- name: train
num_bytes: 2945472406
num_examples: 358868
- name: validation
num_bytes: 118437310
num_examples: 11000
- name: test
num_bytes: 119496704
num_examples: 11000
download_size: 647582078
dataset_size: 3183406420
---
# Dataset Card for ThaiSum
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/nakhunchumpolsathien/ThaiSum
- **Repository:** https://github.com/nakhunchumpolsathien/ThaiSum
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://github.com/nakhunchumpolsathien
### Dataset Summary
ThaiSum is a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs written by journalists.
### Supported Tasks and Leaderboards
summarization, language modeling
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'body': 'กีเก ซานเชซ ฟลอเรส\xa0 กุนซือเลือดกระทิงของทีมวัตฟอร์ด\xa0 เมินประเด็นจุดโทษปัญหาในเกมพรีเมียร์ลีก อังกฤษ นัดที่แตนอาละวาดเปิดบ้านพ่าย คริสตัล พาเลซ 0-1ชี้ทีมของเขาเล่นไม่ดีพอเอง,สำนักข่าวต่างประเทศรายงานวันที่ 27 ก.ย. ว่า กีเก ซานเชซ ฟลอเรส\xa0 ผู้จัดการทีมชาวสเปน ของ แตนอาละวาด วัตฟอร์ด\xa0 ยอมรับทีมของเขาเล่นได้ไม่ดีพอเอง ในเกมพรีเมียร์ลีก อังกฤษ นัดเปิดบ้านพ่าย อินทรีผงาด คริสตัล พาเลซ 0-1 เมื่อคืนวันอาทิตย์ที่ผ่านมา,เกมนี้จุดเปลี่ยนมาอยู่ที่การได้จุดโทษในช่วงครึ่งหลังของ คริสตัล พาเลซ ซึ่งไม่ค่อยชัดเจนเท่าไหร่ว่า อัลลัน นียอม นั้นไปทำฟาล์วใส่ วิลฟรีด ซาฮา ในเขตโทษหรือไม่ แต่ผู้ตัดสินก็ชี้เป็นจุดโทษ ซึ่ง โยอัน กาบาย สังหารไม่พลาด และเป็นประตูชัยช่วยให้ คริสตัล พาเลซ เอาชนะ วัตฟอร์ด ไป 1-0 และเป็นการพ่ายแพ้ในบ้านนัดแรกของวัตฟอร์ดในฤดูกาลนี้อีกด้วย,ฟลอเรส กล่าวว่า มันเป็นเรื่องยากในการหยุดเกมรุกของคริสตัล พาเลซ ซึ่งมันอึดอัดจริงๆสำหรับเรา เราเล่นกันได้ไม่ดีนักในตอนที่ได้ครองบอล เราต้องเล่นทางริมเส้นให้มากกว่านี้ เราไม่สามารถหยุดเกมสวนกลับของพวกเขาได้ และแนวรับของเราก็ยืนไม่เป็นระเบียบสักเท่าไหร่ในช่วงครึ่งแรก ส่วนเรื่องจุดโทษการตัดสินใจขั้นสุดท้ายมันอยู่ที่ผู้ตัดสิน ซึ่งมันเป็นการตัดสินใจที่สำคัญ ผมเองก็ไม่รู้ว่าเขาตัดสินถูกหรือเปล่า บางทีมันอาจเป็นจุดที่ตัดสินเกมนี้เลย แต่เราไม่ได้แพ้เกมนี้เพราะจุดโทษ เราแพ้ในวันนี้เพราะเราเล่นไม่ดีและคริสตัล พาเลซ เล่นดีกว่าเรา เราไม่ได้มีฟอร์มการเล่นที่ดีในเกมนี้เลย', 'summary': 'กีเก ซานเชซ ฟลอเรส กุนซือเลือดกระทิงของทีมวัตฟอร์ด เมินประเด็นจุดโทษปัญหาในเกมพรีเมียร์ลีก อังกฤษ นัดที่แตนอาละวาดเปิดบ้านพ่าย คริสตัล พาเลซ 0-1ชี้ทีมของเขาเล่นไม่ดีพอเอง', 'tags': 'พรีเมียร์ลีก,วัตฟอร์ด,คริสตัล พาเลซ,กีเก ซานเชซ ฟลอเรส,ข่าวกีฬา,ข่าว,ไทยรัฐออนไลน์', 'title': 'ฟลอเรส รับ วัตฟอร์ดห่วยเองเกมพ่ายพาเลซคาบ้าน', 'type': '', 'url': 'https://www.thairath.co.th/content/528322'}
```
### Data Fields
- `title`: title of article
- `body`: body of article
- `summary`: summary of article
- `type`: type of article, if any
- `tags`: tags of article, separated by `,`
- `url`: URL of article
### Data Splits
train/valid/test: 358868 / 11000 / 11000
## Dataset Creation
### Curation Rationale
Sequence-to-sequence (Seq2Seq) models have shown great achievement in text summarization. However, Seq2Seq models often require large-scale training data to achieve effective results. Although many impressive advancements have been made in the text summarization field, most summarization studies focus on resource-rich languages, and progress on Thai text summarization still lags far behind. The dearth of large-scale datasets keeps Thai text summarization in its infancy. To the best of our knowledge, no large-scale dataset for Thai text summarization was previously available. Thus, we present ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard.
### Source Data
#### Initial Data Collection and Normalization
We used the Python library Scrapy to crawl articles from several news websites, namely Thairath, Prachathai, ThaiPBS, and The Standard. We first collected news URLs provided in their sitemaps. During web crawling, we used the HTML markup and metadata available in the HTML pages to identify the article text, summary, headline, tags, and label. The collected articles were published online from 2014 to August 2020. <br> <br>
We further performed a data cleansing process to minimize noisy data. We filtered out articles whose article text or summary was missing. Articles containing an article text with fewer than 150 words or a summary with fewer than 15 words were removed. We also discarded articles that contained at least one of the following tags: ‘ดวง’ (horoscope), ‘นิยาย’ (novel), ‘อินสตราแกรมดารา’ (celebrity Instagram), ‘คลิปสุดฮา’ (funny video) and ‘สรุปข่าว’ (highlight news). Some summaries were completely irrelevant to their original article texts. To eliminate those irrelevant summaries, we calculated an abstractedness score between each summary and its article text. The abstractedness score is written formally as: <br>
<center><a href="https://www.codecogs.com/eqnedit.php?latex=\begin{equation}&space;\frac{|S-A|}{r}&space;\times&space;100&space;\end{equation}" target="_blank"><img src="https://latex.codecogs.com/gif.latex?\begin{equation}&space;\frac{|S-A|}{r}&space;\times&space;100&space;\end{equation}" title="\begin{equation} \frac{|S-A|}{r} \times 100 \end{equation}" /></a></center><br>
<br>Where 𝑆 denotes the set of summary tokens, 𝐴 denotes the set of article tokens, and 𝑟 denotes the total number of summary tokens. We omitted articles with an abstractedness score at 1-grams higher than 60%.
<br><br>
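The filtering rules and abstractedness score above can be sketched as follows. This is an illustrative re-implementation, not the authors' code: it uses naive whitespace tokenization for brevity, whereas the dataset was built with PyThaiNLP's newmm tokenizer (Thai text has no whitespace between words), so actual scores on Thai text will differ.

```python
BANNED_TAGS = {"ดวง", "นิยาย", "อินสตราแกรมดารา", "คลิปสุดฮา", "สรุปข่าว"}


def abstractedness(article: str, summary: str) -> float:
    """Percentage of summary 1-gram tokens that never appear in the article."""
    article_tokens = set(article.split())  # the authors tokenized with PyThaiNLP newmm
    summary_tokens = summary.split()
    novel = set(summary_tokens) - article_tokens
    return len(novel) / len(summary_tokens) * 100


def keep_article(body: str, summary: str, tags: set) -> bool:
    """Apply the cleansing rules described above to a single article."""
    if not body or not summary:          # missing text or summary
        return False
    if len(body.split()) < 150 or len(summary.split()) < 15:  # length thresholds
        return False
    if tags & BANNED_TAGS:               # horoscope, novel, etc.
        return False
    return abstractedness(body, summary) <= 60.0
```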
It is important to point out that we used [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp), version 2.2.4, with the newmm tokenizing engine, to process Thai texts in this study. Tokenizing running Thai text into words or sentences is challenging because the Thai language has no explicit word or sentence delimiters. Therefore, using a different tokenization engine may result in different word/sentence segmentations.
After the data cleansing process, the ThaiSum dataset contains over 358,000 articles. The size of this dataset is comparable to that of the well-known English document summarization dataset CNN/Daily Mail. Moreover, we analyse the characteristics of this dataset by measuring the abstractedness level, compression rate, and content diversity. For more details, see [thaisum_exploration.ipynb](https://github.com/nakhunchumpolsathien/ThaiSum/blob/master/thaisum_exploration.ipynb).
#### Dataset Statistics
The ThaiSum dataset consists of 358,868 articles. The average lengths of article texts and summaries are approximately 530 and 37 words, respectively. As mentioned earlier, we also collected the headlines, tags, and labels provided in each article. Tags are similar to keywords of the article. An article normally contains several tags but only a few labels. Tags can be names of places or persons that the article is about, while labels indicate the news category (politics, entertainment, etc.). Ultimately, ThaiSum contains 538,059 unique tags and 59 unique labels. Note that not every article contains tags or labels.
|Dataset Size| 358,868 | articles |
|:---|---:|---:|
|Avg. Article Length| 529.5 | words|
|Avg. Summary Length | 37.3 | words|
|Avg. Headline Length | 12.6 | words|
|Unique Vocabulary Size | 407,355 | words|
|Occurring > 10 times | 81,761 | words|
|Unique News Tag Size | 538,059 | tags|
|Unique News Label Size | 59 | labels|
#### Who are the source language producers?
Journalists of respective articles
### Annotations
#### Annotation process
`summary`, `type` and `tags` are created by journalists who wrote the articles and/or their publishers.
#### Who are the annotators?
`summary`, `type` and `tags` are created by journalists who wrote the articles and/or their publishers.
### Personal and Sensitive Information
All data are public news articles. No personal and sensitive information is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
- News summarization in Thai
- Language modeling for Thai news
### Discussion of Biases
- [ThaiPBS](https://www.thaipbs.or.th/home) [receives funding from Thai government](https://www.bangkokbiznews.com/blog/detail/648740).
- [Thairath](https://www.thairath.co.th/) is known as [the most popular newspaper in Thailand](https://mgronline.com/onlinesection/detail/9620000058532); no clear political leaning.
- [The Standard](https://thestandard.co/) is a left-leaning online magazine.
- [Prachathai](https://prachatai.com/) is a left-leaning, human-right-focused news site.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[@nakhunchumpolsathien](https://github.com/nakhunchumpolsathien/)
[@caramelWaffle](https://github.com/caramelWaffle)
### Licensing Information
MIT License
### Citation Information
```
@mastersthesis{chumpolsathien_2020,
title={Using Knowledge Distillation from Keyword Extraction to Improve the Informativeness of Neural Cross-lingual Summarization},
author={Chumpolsathien, Nakhun},
year={2020},
school={Beijing Institute of Technology}
}
```
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. |
stanford-crfm/DSIR-filtered-pile-50M | 2023-09-16T14:50:10.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"language modeling",
"masked language modeling",
"pretraining",
"pile",
"DSIR",
"arxiv:2302.03169",
"region:us"
] | stanford-crfm | null | null | null | 4 | 166 | ---
license: mit
language:
- en
size_categories:
- 10M<n<100M
task_categories:
- text-generation
- fill-mask
tags:
- language modeling
- masked language modeling
- pretraining
- pile
- DSIR
---
# Dataset Card for DSIR-filtered-pile-50M
## Dataset Description
- **Repository:** https://github.com/p-lambda/dsir
- **Paper:** https://arxiv.org/abs/2302.03169
- **Point of Contact:** Sang Michael Xie <xie@cs.stanford.edu>
### Dataset Summary
This dataset is a subset of The Pile, selected via the DSIR data selection method. The target distribution for DSIR is the Wikipedia and BookCorpus2 subsets of The Pile.
### Languages
English (EN)
## Dataset Structure
A train set is provided (51.2M examples) in jsonl format.
### Data Instances
```
{"contents": "Hundreds of soul music enthusiasts from the United Kingdom plan to make their way to Detroit this month for a series of concerts.\n\nDetroit A-Go-Go, a festival organized by DJ Phil Dick, will take place Oct. 19-22 with 26 scheduled acts.\n\nThe festival is focused on what Dick calls the northern soul movement.\n\n\"We just love Detroit soul and Motown music,\" Dick said. \"It's been popular in England for decades. Every weekend, thousands of people go out and listen to this music in England.\"\n\nArtists booked for the festival include: The Elgins, Pat Lewis, Melvin Davis, The Velvelettes, The Contours, Kim Weston, Ronnie McNeir, The Capitols, Yvonne Vernee, JJ Barnes, Gino Washington, Spyder Turner, The Adorables, Lorraine Chandler, Eddie Parker, Dusty Wilson, The Precisions, The Professionals, The Tomangoes, The Fabulous Peps andNow that\u2019s a punishment: club vice president sent to train with the reserves!\n\nFor almost an entire year, Gabriel Bostina has been playing a double role for Universitatea Cluj. Unfortunately for him, the position acquired in the club\u2019s board didn\u2019t earn him any favors from the technical staff, who recently punished the central midfielder. Twice. First of all, Bostina lost the armband during one of the training camps from Antalya for some unknown disciplinary problems and now the player & vice president has suffered further embarrassment being sent to train with the reservers \u201cfor an unlimited period\u201d.\n\nCurrently injured, he failed to show up for the weekend training sessions that were going to be supervised by the club\u2019s medical staff, so the former Otelul, Steaua and Dinamo man is now", "metadata": {"pile_set_name": ["OpenWebText2", "Pile-CC"]}, "id": 423}
```
### Data Fields
```
"contents": the text
"metadata": contains information about the source(s) of text that the text comes from. Multiple sources means that the example is concatenated from two sources.
"id": Ignore - a non-unique identifier
```
## Dataset Creation
We first select 102.4M examples, then concatenate every two examples to create 51.2M examples.
This ensures that the examples are long enough for a max token length of 512 without much padding.
We train the importance weight estimator for DSIR from The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile.
We first select 98.4M examples from the non-Wikipedia, non-book data, then randomly select 2M examples from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3.
After this, we concatenate every two examples.
### Source Data
The Pile
#### Initial Data Collection and Normalization
We select data from The Pile, which comes in 30 random chunks. We reserve chunk 0 for validation purposes and only consider the last 29 chunks.
We first divide the documents in The Pile into chunks of 128 words, according to whitespace tokenization.
These chunks define the examples that we do data selection on, totaling 1.7B examples.
Before DSIR, we first apply a manual quality filter (see the paper for details) and only consider the examples that pass the filter.
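The whitespace chunking step can be sketched as below. This is an illustrative sketch, not the actual preprocessing code from the DSIR repository; in particular, how the final partial chunk is handled here is an assumption.

```python
def chunk_document(text: str, chunk_size: int = 128) -> list[str]:
    """Split a document into consecutive chunks of `chunk_size`
    whitespace-delimited words (the last chunk may be shorter)."""
    words = text.split()
    return [
        " ".join(words[i:i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]
```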
## Considerations for Using the Data
The dataset is biased towards choosing data from non-Wikipedia and non-Books sources. A balanced approach would be to mix in more data from Wikipedia and books.
### Dataset Curators
Sang Michael Xie, Shibani Santurkar
### Citation Information
Paper: <https://arxiv.org/abs/2302.03169>
```
@article{xie2023data,
author = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang},
journal = {arXiv preprint arXiv:2302.03169},
title = {Data Selection for Language Models via Importance Resampling},
year = {2023},
}
``` |
ruanchaves/faquad-nli | 2023-04-13T18:26:38.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|wikipedia",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | ruanchaves | null | 1 | 166 | ---
pretty_name: FaQuAD-NLI
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
# paperswithcode_id: faquad
train-eval-index:
- config: plain_text
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
---
# Dataset Card for FaQuAD-NLI
## Dataset Description
- **Homepage:** https://github.com/liafacom/faquad
- **Repository:** https://github.com/liafacom/faquad
- **Paper:** https://ieeexplore.ieee.org/document/8923668/
<!-- - **Leaderboard:** -->
- **Point of Contact:** Eraldo R. Fernandes <eraldoluis@gmail.com>
### Dataset Summary
FaQuAD is a Portuguese reading comprehension dataset that follows the format of the Stanford Question Answering Dataset (SQuAD). It is a pioneering Portuguese reading comprehension dataset in the challenging SQuAD format. The dataset aims to address the problem of the abundant questions sent by academics whose answers are found in available institutional documents in the Brazilian higher education system. It consists of 900 questions about 249 reading passages taken from 18 official documents of a computer science college at a Brazilian federal university and 21 Wikipedia articles related to the Brazilian higher education system.
FaQuAD-NLI is a modified version of the [FaQuAD dataset](https://huggingface.co/datasets/eraldoluis/faquad) that repurposes the question answering task as a textual entailment task between a question and its possible answers.
### Supported Tasks and Leaderboards
- `question_answering`: The dataset can be used to train a model for question-answering tasks in the domain of Brazilian higher education institutions.
- `textual_entailment`: FaQuAD-NLI can be used to train a model for textual entailment tasks, where answers in Q&A pairs are classified as either suitable or unsuitable.
### Languages
This dataset is in Brazilian Portuguese.
## Dataset Structure
### Data Fields
- `document_index`: an integer representing the index of the document.
- `document_title`: a string containing the title of the document.
- `paragraph_index`: an integer representing the index of the paragraph within the document.
- `question`: a string containing the question related to the paragraph.
- `answer`: a string containing the answer related to the question.
- `label`: an integer (0 or 1) representing if the answer is suitable (1) or unsuitable (0) for the question.
### Data Splits
The dataset is split into three subsets: train, validation, and test.
The splits were made carefully to avoid question and answer pairs belonging to the same document appearing in more than one split.
| | Train | Validation | Test |
|------------|-------|------------|------|
| Instances | 3128 | 731 | 650 |
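A leakage-free split like the one described above can be reproduced in spirit by grouping on `document_index` before splitting. The following is a minimal sketch of such a group-aware split (a hypothetical helper, not the curators' actual procedure; split ratios and seed are assumptions):

```python
import random


def split_by_document(rows, train=0.7, valid=0.15, seed=0):
    """Assign whole documents to splits so that no document's
    question/answer pairs appear in more than one split."""
    doc_ids = sorted({r["document_index"] for r in rows})
    rng = random.Random(seed)
    rng.shuffle(doc_ids)
    n_train = int(len(doc_ids) * train)
    n_valid = int(len(doc_ids) * valid)
    train_ids = set(doc_ids[:n_train])
    valid_ids = set(doc_ids[n_train:n_train + n_valid])
    splits = {"train": [], "validation": [], "test": []}
    for r in rows:
        if r["document_index"] in train_ids:
            splits["train"].append(r)
        elif r["document_index"] in valid_ids:
            splits["validation"].append(r)
        else:
            splits["test"].append(r)
    return splits
```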
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | ||
distil-whisper/librispeech_asr | 2023-09-25T10:30:13.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | distil-whisper | LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned. | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
} | null | 0 | 166 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: LibriSpeech ASR
---
# Distil Whisper: LibriSpeech ASR
This is a variant of the [LibriSpeech ASR](https://huggingface.co/datasets/librispeech_asr) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/librispeech_asr).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/librispeech_asr", "all")
# take the first sample of the validation set
sample = dataset["validation.clean"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/librispeech_asr", "all", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation.clean"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
paniniDot/sci_lay | 2023-09-05T16:39:49.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"medical",
"region:us"
] | paniniDot | SCILAY comprises 43,790 instances, each representing a scientific article in the biomedical domain.
Each instance in the dataset includes the following components:
- plain_text: This section contains a plain language summary of the scientific article. It is written in simple and accessible language and is intended to be understandable by a wide audience.
- technical_text: This section contains the abstract of the scientific article. It provides a detailed and technical description of the research conducted in the article.
- full_text: This section contains the complete text of the scientific article.
In addition to the textual content, each instance is associated with the following metadata:
- Keywords: Keywords that capture the main topics and themes addressed in the article.
- Journal: The journal in which the article is published, providing context about the source of the research.
- DOI (Digital Object Identifier): A unique identifier for the article, facilitating easy referencing.
The main objective of the SCILAY dataset is to support the development and evaluation of text summarization models that can effectively simplify complex scientific language while retaining the essential information. | null | 0 | 166 | ---
license: cc-by-4.0
task_categories:
- summarization
tags:
- medical
pretty_name: Sci Lay - Biomedic Articles Lay Summarization Dataset
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
dataset_info:
- config_name: all
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 35026
num_bytes: 1579515071
- name: validation
num_examples: 4380
num_bytes: 197196187
- name: test
num_examples: 4384
num_bytes: 198833964
- config_name: NC
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 5549
num_bytes: 286453072
- name: validation
num_examples: 694
num_bytes: 35652636
- name: test
num_examples: 694
num_bytes: 35869803
- config_name: A
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 3909
num_bytes: 128936951
- name: validation
num_examples: 489
num_bytes: 1303884
- name: test
num_examples: 489
num_bytes: 1303884
- config_name: PLGEN
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 3087
num_bytes: 9651536
- name: validation
num_examples: 386
num_bytes: 1195717
- name: test
num_examples: 386
num_bytes: 1204735
- config_name: PLPAT
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 2920
num_bytes: 9311936
- name: validation
num_examples: 365
num_bytes: 1161792
- name: test
num_examples: 365
num_bytes: 1148729
- config_name: PLCB
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 2589
num_bytes: 149165851
- name: validation
num_examples: 324
num_bytes: 1009541
- name: test
num_examples: 324
num_bytes: 1013732
- config_name: PLNTD
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 2289
num_bytes: 7958581
- name: validation
num_examples: 286
num_bytes: 990392
- name: test
num_examples: 287
num_bytes: 996549
- config_name: B
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 1617
num_bytes: 57956055
- name: validation
num_examples: 202
num_bytes: 547314
- name: test
num_examples: 203
num_bytes: 537459
- config_name: I
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 1181
num_bytes: 37682107
- name: validation
num_examples: 148
num_bytes: 393826
- name: test
num_examples: 148
num_bytes: 390039
- config_name: PLB
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 896
num_bytes: 54106804
- name: validation
num_examples: 112
num_bytes: 350955
- name: test
num_examples: 113
num_bytes: 352922
- config_name: CB
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 867
num_bytes: 43533134
- name: validation
num_examples: 108
num_bytes: 5664682
- name: test
num_examples: 109
num_bytes: 172812
- config_name: SD
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 725
num_bytes: 23671697
- name: validation
num_examples: 91
num_bytes: 3033467
- name: test
num_examples: 91
num_bytes: 2972947
- config_name: MBIO
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 607
num_bytes: 1602641
- name: validation
num_examples: 76
num_bytes: 203737
- name: test
num_examples: 76
num_bytes: 200707
- config_name: C
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 6782
num_bytes: 242721690
- name: validation
num_examples: 848
num_bytes: 30735056
- name: test
num_examples: 848
num_bytes: 31018214
- config_name: OTHER
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 2008
num_bytes: 89866504
- name: validation
num_examples: 251
num_bytes: 11316433
- name: test
num_examples: 251
num_bytes: 11564599
config_names:
- all
- NC
- A
- PLGEN
- PLPAT
- PLCB
- PLNTD
- B
- I
- PLB
- CB
- SD
- MBIO
- C
- OTHER
---
# Dataset Card for Sci Lay
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sci Lay](https://github.com/paniniDot/summarization-model)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Mattia Panni](mailto:mattia.panni@studio.unibo.it)
### Dataset Summary
SCILAY comprises 43,790 instances, each representing a scientific article in the biomedical domain.
Each instance in the dataset includes the following components:
- plain_text: This section contains a plain language summary of the scientific article. It is written in simple and accessible language and is intended to be understandable by a wide audience.
- technical_text: This section contains the abstract of the scientific article. It provides a detailed and technical description of the research conducted in the article.
- full_text: This section contains the complete text of the scientific article.
In addition to the textual content, each instance is associated with the following metadata:
- Keywords: Keywords that capture the main topics and themes addressed in the article.
- Journal: The journal in which the article is published, providing context about the source of the research.
- DOI (Digital Object Identifier): A unique identifier for the article, facilitating easy referencing.
The main objective of the SCILAY dataset is to support the development and evaluation of text summarization models that can effectively simplify complex scientific language while retaining the essential information.
Each article is published by a scientific journal. There are fifteen such journal classifications:
- NC: Nature Communications
- A: Animals : an Open Access Journal from MDPI
- PLGEN: PLoS Genetics
- PLPAT: PLoS Pathogens
- PLCB: PLoS Computational Biology
- PLNTD: PLoS Neglected Tropical Diseases
- B: Biology
- I: Insects
- PLB: PLoS Biology
- CB: Communications Biology
- SD: Scientific Data
- MBIO: mBio
- C: Cancers
- OTHER: additional journals that, taken individually, would not have contributed sufficient instances
The current defaults are version 1.0.0 (cased raw strings) and the 'all' journals configuration:
```python
from datasets import load_dataset
ds = load_dataset("paniniDot/sci_lay") # default is 'all' journals
ds = load_dataset("paniniDot/sci_lay", "all") # the same as above
ds = load_dataset("paniniDot/sci_lay", "NC") # only 'NC' journal (Nature Communications)
ds = load_dataset("paniniDot/sci_lay", journals=["NC", "A"])
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
Each instance contains the fields `doi`, `pmcid`, `plain_text`, `technical_text`, `full_text`, `journal`, `topics`, and `keywords`, each of which was extracted by scraping articles in XML and HTML format.
```
{
'doi': '10.3390/ani12040445',
'pmcid': 'PMC8868321',
'plain_text': 'PPP3CA is one of the candidate genes for goat reproduction, but no studies have been carried out yet. Therefore, the purpose of this study was to determine the associations between copy number variations in the goat PPP3CA gene and litter size and semen quality in goats, including Shaanbei white cashmere goats (SBWC) (n = 353) and Guizhou Heima (GZHM) goats (n = 64). Based on the association analysis, the results showed that only CNV1 (copy number variation 1) and CNV2 (copy number variation 2) were distinctly related to the first-birth litter size in female goats (p = 7.6802 × 10−11; p = 5.0895 × 10−9), and they were also significantly associated with the semen quality of SBWC goats (p < 0.05). These findings prove that the PPP3CA gene plays an important role in reproduction traits in goats.',
'technical_text': 'Copy number variations (CNVs) have many forms of variation structure, and they play an important role in the research of variety diversity, biological evolution and disease correlation. Since CNVs have a greater impact on gene regulation and expression, more studies are being finalized on CNVs in important livestock and poultry species. The protein phosphatase 3 catalytic subunit alpha (PPP3CA) is a key candidate gene involved in the goat fecundity trait, and has important effects on precocious puberty, estrogen signal transduction pathways and oocyte meiosis. Additionally, PPP3CA also has a dephosphorylation effect in the process of spermatogonial stem cell meiosis and spermatogenesis. So far, there is no research on the relationship between the copy number variations of the PPP3CA gene and reproduction traits. Therefore, the purpose of this study was to determine the association between copy number variations in the goat PPP3CA gene and litter size and semen quality in Shaanbei white cashmere goats (SBWC) (n = 353) and Guizhou Heima goats (n = 64). Based on the association analysis, the results showed that only CNV1 and CNV2 within the PPP3CA gene were distinctly related to the first-birth litter size in female goats (p = 7.6802 × 10−11; p = 5.0895 × 10−9, respectively) and they were also significantly associated with the semen quality of SBWC goats (p < 0.05). In addition, individuals with Loss genotypes demonstrated better phenotypic performance compared to those with other types. Therefore, CNV1 and CNV2 of the PPP3CA gene are potentially useful for breeding, as they are linked to important goat reproduction traits.',
'full_text': '...',
'journal': 'Animals : an Open Access Journal from MDPI',
'topics': [ 'Article' ],
'keywords': [ 'goat', 'PPP3CA', 'copy number variation (CNV)', 'litter size', 'semen quality' ]
}
```
### Data Fields
- `doi`: The Digital Object Identifier, a unique alphanumeric string assigned to a digital document such as a research paper, article, or dataset. Not all instances have it.
- `pmcid`: A unique identifier in the [PubMed Central library](https://www.ncbi.nlm.nih.gov/pmc/) database. Not all instances have it.
- `plain_text`: The summary of the article in plain English.
- `technical_text`: The abstract of the article.
- `full_text`: The complete article.
- `journal`: The journal that published the article.
- `topics`: A list of the types under which the article is classified (e.g., Research Article, Review, etc.). Not all instances have it.
- `keywords`: A list of the keywords of the article. Not all instances have it.
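These fields are all that is needed to assemble training pairs for lay summarization. A minimal sketch follows; the abridged sample record and the fallback from `full_text` to `technical_text` are illustrative assumptions, not part of the dataset:

```python
# Build (source, target) pairs for a lay summarization model from
# SCILAY-style records. Falling back to `technical_text` when
# `full_text` is empty is an illustrative choice, not a dataset rule.

def to_summarization_pair(instance):
    """Map a SCILAY record to a (source, target) seq2seq pair."""
    source = instance["full_text"] or instance["technical_text"]
    target = instance["plain_text"]
    return source, target

# Abridged sample record (see the Data Instances section).
sample = {
    "doi": "10.3390/ani12040445",
    "plain_text": "PPP3CA is one of the candidate genes for goat reproduction...",
    "technical_text": "Copy number variations (CNVs) have many forms of variation structure...",
    "full_text": "",  # full body omitted in this sketch
    "journal": "Animals : an Open Access Journal from MDPI",
}

src, tgt = to_summarization_pair(sample)
print(tgt.startswith("PPP3CA"))  # True
```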
### Data Splits
| | train | validation | test |
|-------|-------|------------|------|
| all | 35026 | 4380 | 4384 |
| NC | 5549 | 694 | 694 |
| A | 3909 | 489 | 489 |
| PLGEN | 3087 | 386 | 386 |
| PLPAT | 2920 | 365 | 365 |
| PLCB | 2589 | 324 | 324 |
| PLNTD | 2289 | 286 | 287 |
| B | 1617 | 202 | 203 |
| I | 1181 | 148 | 148 |
| PLB | 896 | 112 | 113 |
| CB | 867 | 108 | 109 |
| SD | 725 | 91 | 91 |
| MBIO | 607 | 76 | 76 |
| C | 6782 | 848 | 848 |
| OTHER | 2008 | 251 | 251 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
| |
amitness/maltese-news-nli-sports | 2023-09-10T18:27:51.000Z | [
"region:us"
] | amitness | null | null | null | 0 | 166 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
splits:
- name: train
num_bytes: 564856
num_examples: 409
- name: validation
num_bytes: 114307
num_examples: 88
- name: test
num_bytes: 114877
num_examples: 88
download_size: 516805
dataset_size: 794040
---
# Dataset Card for "maltese-news-nli-sports"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jakartaresearch/semeval-absa | 2022-08-14T05:38:21.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"aspect-based-sentiment-analysis",
"semeval",
"semeval2015",
"region:us"
] | jakartaresearch | This dataset is built as a playground for aspect-based sentiment analysis. | null | null | 1 | 165 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'SemEval 2015: Aspect-based Sentiment Analysis'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- aspect-based-sentiment-analysis
- semeval
- semeval2015
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for SemEval Task 12: Aspect-based Sentiment Analysis
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is originally from [SemEval-2015 Task 12](https://alt.qcri.org/semeval2015/task12/).
From the page:
> SE-ABSA15 will focus on the same domains as SE-ABSA14 (restaurants and laptops). However, unlike SE-ABSA14, the input datasets of SE-ABSA15 will contain entire reviews, not isolated (potentially out of context) sentences. SE-ABSA15 consolidates the four subtasks of SE-ABSA14 within a unified framework. In addition, SE-ABSA15 will include an out-of-domain ABSA subtask, involving test data from a domain unknown to the participants, other than the domains that will be considered during training. In particular, SE-ABSA15 consists of the following two subtasks.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. |
clarin-knext/trec-covid-pl | 2023-06-07T08:12:18.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 165 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
jxie/country211 | 2023-08-13T19:11:22.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 165 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AD
'1': AE
'2': AF
'3': AG
'4': AI
'5': AL
'6': AM
'7': AO
'8': AQ
'9': AR
'10': AT
'11': AU
'12': AW
'13': AX
'14': AZ
'15': BA
'16': BB
'17': BD
'18': BE
'19': BF
'20': BG
'21': BH
'22': BJ
'23': BM
'24': BN
'25': BO
'26': BQ
'27': BR
'28': BS
'29': BT
'30': BW
'31': BY
'32': BZ
'33': CA
'34': CD
'35': CF
'36': CH
'37': CI
'38': CK
'39': CL
'40': CM
'41': CN
'42': CO
'43': CR
'44': CU
'45': CV
'46': CW
'47': CY
'48': CZ
'49': DE
'50': DK
'51': DM
'52': DO
'53': DZ
'54': EC
'55': EE
'56': EG
'57': ES
'58': ET
'59': FI
'60': FJ
'61': FK
'62': FO
'63': FR
'64': GA
'65': GB
'66': GD
'67': GE
'68': GF
'69': GG
'70': GH
'71': GI
'72': GL
'73': GM
'74': GP
'75': GR
'76': GS
'77': GT
'78': GU
'79': GY
'80': HK
'81': HN
'82': HR
'83': HT
'84': HU
'85': ID
'86': IE
'87': IL
'88': IM
'89': IN
'90': IQ
'91': IR
'92': IS
'93': IT
'94': JE
'95': JM
'96': JO
'97': JP
'98': KE
'99': KG
'100': KH
'101': KN
'102': KP
'103': KR
'104': KW
'105': KY
'106': KZ
'107': LA
'108': LB
'109': LC
'110': LI
'111': LK
'112': LR
'113': LT
'114': LU
'115': LV
'116': LY
'117': MA
'118': MC
'119': MD
'120': ME
'121': MF
'122': MG
'123': MK
'124': ML
'125': MM
'126': MN
'127': MO
'128': MQ
'129': MR
'130': MT
'131': MU
'132': MV
'133': MW
'134': MX
'135': MY
'136': MZ
'137': NA
'138': NC
'139': NG
'140': NI
'141': NL
'142': 'NO'
'143': NP
'144': NZ
'145': OM
'146': PA
'147': PE
'148': PF
'149': PG
'150': PH
'151': PK
'152': PL
'153': PR
'154': PS
'155': PT
'156': PW
'157': PY
'158': QA
'159': RE
'160': RO
'161': RS
'162': RU
'163': RW
'164': SA
'165': SB
'166': SC
'167': SD
'168': SE
'169': SG
'170': SH
'171': SI
'172': SJ
'173': SK
'174': SL
'175': SM
'176': SN
'177': SO
'178': SS
'179': SV
'180': SX
'181': SY
'182': SZ
'183': TG
'184': TH
'185': TJ
'186': TL
'187': TM
'188': TN
'189': TO
'190': TR
'191': TT
'192': TW
'193': TZ
'194': UA
'195': UG
'196': US
'197': UY
'198': UZ
'199': VA
'200': VE
'201': VG
'202': VI
'203': VN
'204': VU
'205': WS
'206': XK
'207': YE
'208': ZA
'209': ZM
'210': ZW
splits:
- name: train
num_bytes: 5411225958.1
num_examples: 31650
- name: validation
num_bytes: 1816894779.75
num_examples: 10550
- name: test
num_bytes: 3632130288.7
num_examples: 21100
download_size: 11359939585
dataset_size: 10860251026.55
---
# Dataset Card for "country211"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DDSC/dagw_reddit_filtered_v1.0.0 | 2022-11-06T15:30:56.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:DDSC/partial-danish-gigaword-no-twitter",
"source_datasets:DDSC/reddit-da",
"language:da",
"license:cc-by-4.0",
"arxiv:2005.03521",
"arxiv:2112.11446",
"region:us"
] | DDSC | null | null | null | 1 | 164 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- DDSC/partial-danish-gigaword-no-twitter
- DDSC/reddit-da
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Danish Gigaword Corpus, Reddit (filtered)
language_bcp47:
- da
- da-bornholm
- da-synnejyl
---
# Danish Gigaword Corpus, Reddit (filtered)
*Version*: 1.0.0
*License*: See the respective dataset
This dataset is a variant of the Danish Gigaword [3], which excludes the sections containing
tweets and modified news contained in danavis20.
Twitter was excluded as it was a sample of a dataset which was available to the authors only.
DanAvis20 (or danavis) was excluded due to the preprocessing described in [3] (version 1 on
[arxiv](https://arxiv.org/pdf/2005.03521v1.pdf)), which includes shuffling of sentences,
pseudonymization of proper nouns, and the replacement of infrequent content words with
statistical cognates, which could lead to sentences such as *"Der er skilsmissesager i
forsikringsselskabet"*.
Additionally, this dataset includes the [reddit-da](https://huggingface.co/datasets/DDSC/reddit-da) dataset, which contains
1,908,887 documents. Low-quality text has been removed using a series
of heuristic filters. Following filtering,
DAGW$_{DFM}$ is deduplicated to remove exact and near-duplicates. For more on data
cleaning, see the Processing section below.
This dataset included 1,310,789,818 tokens before filtering, and 833,664,528 (64%) after.
# Dataset information
This is a composite dataset consisting of Danish Gigaword and
[reddit-da](https://huggingface.co/datasets/DDSC/reddit-da). As such, it does not contain its own documentation. For more information, we recommend checking the documentation of the
respective datasets.
### Motivation:
**For what purpose was the dataset created? Who created the dataset? Who funded the
creation of the dataset?**
This dataset was created with the purpose of pre-training Danish language models. It was created by a team of
researchers at the Center for Humanities Computing Aarhus (CHCAA) using a codebase jointly
developed with partners from industry and academia, e.g. KMD, Ekstra Bladet, deepdivr,
and Bristol University. For more on collaborators on this project see
the [GitHub repository](https://github.com/centre-for-humanities-computing/danish-foundation-models).
## Processing
### Quality Filter:
DAGW$_{DFM}$ applies a filter akin to [2]. It keeps documents that:
- Contain at least 2 Danish stopwords. For the stopword list, we use the one from
spaCy v3.1.4.
- Have a mean word length between 3 and 10.
- Have a token length between 50 and 100,000.
- Contain fewer than 5,000,000 characters.
- Among all words, at least 60% have at least one alphabetic character.
- Have a symbol-to-word ratio lower than 10% for hashtags and ellipsis.
- Have fewer than 90% of lines starting with a bullet point.
- Have fewer than 30% of lines ending with an ellipsis.
- Have a low degree of repetitious text:
- Fewer than 30% duplicate lines.
- Fewer than 30% duplicate paragraphs.
- Fewer than 30% of characters are contained within duplicate lines.
- The top 2-4 grams constitute less than 20%, 18%, and 16% of characters, respectively.
- Where, for each document, 5-10 grams which occur more than once, constitute less than 15%, 14%, 13%, 12%, 11%, and 10% of
the characters, respectively.
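As a rough sketch, three of the heuristics above can be implemented as follows; the thresholds come from the list, but the tokenization and counting details are assumptions:

```python
# Sketch of three of the quality heuristics: mean word length between
# 3 and 10, at least 60% of words containing an alphabetic character,
# and fewer than 30% duplicate lines. Tokenization by whitespace is
# an assumption of this sketch.

def passes_quality_filter(text):
    words = text.split()
    if not words:
        return False
    # Mean word length between 3 and 10.
    mean_len = sum(len(w) for w in words) / len(words)
    if not (3 <= mean_len <= 10):
        return False
    # At least 60% of words contain an alphabetic character.
    alpha = sum(1 for w in words if any(c.isalpha() for c in w))
    if alpha / len(words) < 0.6:
        return False
    # Fewer than 30% duplicate lines.
    lines = [l for l in text.splitlines() if l.strip()]
    if lines:
        dup = len(lines) - len(set(lines))
        if dup / len(lines) >= 0.3:
            return False
    return True

print(passes_quality_filter("Dette er en helt almindelig dansk sætning."))  # True
print(passes_quality_filter("1 2 3 4 5 6 7 8 9 0"))  # False (mean word length below 3)
```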
### Deduplication
The deduplication removed all documents with a 13-gram similarity higher than 80%
following the MinHash algorithm [1] using 128 permutations. The MinHash algorithm is a
probabilistic data structure for approximating the Jaccard similarity between two sets.
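A pure-Python sketch of this estimate is shown below; the actual pipeline presumably uses an optimized MinHash library, and the choice of word-level 13-grams and MD5-based hash functions here is an assumption:

```python
# MinHash sketch of the deduplication step: estimate the Jaccard
# similarity between two documents' 13-gram shingle sets using 128
# hash functions. Illustrative only; not the production pipeline.
import hashlib

NUM_PERM = 128

def shingles(text, n=13):
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(max(len(toks) - n + 1, 1))}

def minhash_signature(shingle_set):
    sig = []
    for seed in range(NUM_PERM):
        sig.append(min(
            int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    # Fraction of matching signature slots approximates Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_PERM

doc = "the danish gigaword corpus " * 10
sig = minhash_signature(shingles(doc))
print(estimated_jaccard(sig, sig))  # 1.0 for identical documents
```

In the deduplication described above, a document would be dropped when its estimated similarity to an already-kept document exceeds 0.8.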
# References:
- [1] Broder, Andrei Z. "On the resemblance and containment of documents."
Proceedings. Compression and Complexity of SEQUENCES 1997
(Cat. No. 97TB100171). IEEE, 1997.
- [2] Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F.,
Aslanides, J., Henderson, S., Ring, R., Young, S., Rutherford, E., Hennigan,
T., Menick, J., Cassirer, A., Powell, R., Driessche, G. van den, Hendricks,
L. A., Rauh, M., Huang, P.-S., … Irving, G. (2021).
Scaling Language Models: Methods, Analysis & Insights from Training Gopher.
https://arxiv.org/abs/2112.11446v2
- [3] Strømberg-Derczynski, L., Ciosici, M., Baglini, R., Christiansen, M. H.,
Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A.,
Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Madsen, J., Petersen, M. L.,
Rystrøm, J. H., & Varab, D. (2021). The Danish Gigaword corpus. Proceedings of the
23rd Nordic Conference on Computational Linguistics (NoDaLiDa), 413–421.
https://aclanthology.org/2021.nodalida-main.46
### Citation
If you wish to cite this work, please see the GitHub page for an up-to-date citation:
https://github.com/centre-for-humanities-computing/danish-foundation-models
|
bitext/Bitext-customer-support-llm-chatbot-training-dataset | 2023-09-19T23:48:25.000Z | [
"task_categories:question-answering",
"task_categories:table-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cdla-sharing-1.0",
"question-answering",
"llm",
"chatbot",
"costumer-support",
"conversional-ai",
"generative-ai",
"natural-language-understanding",
"fine-tuning",
"Retail",
"region:us"
] | bitext | null | null | null | 4 | 163 | ---
license: cdla-sharing-1.0
task_categories:
- question-answering
- table-question-answering
language:
- en
tags:
- question-answering
- llm
- chatbot
- costumer-support
- conversional-ai
- generative-ai
- natural-language-understanding
- fine-tuning
- Retail
pretty_name: >-
Bitext - Customer Service Tagged Training Dataset for LLM-based Virtual
Assistants
size_categories:
- 10K<n<100K
---
# Bitext - Customer Service Tagged Training Dataset for LLM-based Virtual Assistants
## Overview
This dataset can be used to train Large Language Models such as GPT, Llama 2, and Falcon, both for fine-tuning and domain adaptation.
The dataset has the following specs:
- Use Case: Intent Detection
- Vertical: Customer Service
- 27 intents assigned to 10 categories
- 26872 question/answer pairs, around 1000 per intent
- 30 entity/slot types
- 12 different types of language generation tags
The categories and intents have been selected from Bitext's collection of 20 vertical-specific datasets, covering the intents that are common across all 20 verticals. The verticals are:
- Automotive, Retail Banking, Education, Events & Ticketing, Field Services, Healthcare, Hospitality, Insurance, Legal Services, Manufacturing, Media Streaming, Mortgages & Loans, Moving & Storage, Real Estate/Construction, Restaurant & Bar Chains, Retail/E-commerce, Telecommunications, Travel, Utilities, Wealth Management
For a full list of verticals and its intents see [https://www.bitext.com/chatbot-verticals/](https://www.bitext.com/chatbot-verticals/).
The question/answer pairs have been generated using a hybrid methodology that uses natural texts as source text, NLP technology to extract seeds from these texts, and NLG technology to expand the seed texts. All steps in the process are curated by computational linguists.
## Dataset Token Count
The dataset contains an extensive amount of text data across its 'instruction' and 'response' columns. After processing and tokenizing the dataset, we've identified a total of 3.57 million tokens. This rich set of tokens is essential for training advanced LLMs for AI Conversational, AI Generative, and Question and Answering (Q&A) models.
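As a hedged illustration, a whitespace-based count over the two text columns looks like the following; the reported 3.57M figure was presumably computed with a model tokenizer, so this sketch will not reproduce it exactly:

```python
# Rough whitespace token count over the 'instruction' and 'response'
# columns. The sample row is made up; a model tokenizer would give a
# different (usually higher) count than whitespace splitting.

def count_tokens(rows):
    return sum(
        len(r["instruction"].split()) + len(r["response"].split())
        for r in rows
    )

sample = [{
    "instruction": "I want to cancel my order",
    "response": "Sure, I can help you cancel your order.",
}]
print(count_tokens(sample))  # 14
```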
## Fields of the Dataset
Each entry in the dataset contains the following fields:
- flags: tags (explained below in the Language Generation Tags section)
- instruction: a user request from the Customer Service domain
- category: the high-level semantic category for the intent
- intent: the intent corresponding to the user instruction
- response: an example expected response from the virtual assistant
## Categories and Intents
The categories and intents covered by the dataset are:
- ACCOUNT: create_account, delete_account, edit_account, switch_account
- CANCELLATION_FEE: check_cancellation_fee
- DELIVERY: delivery_options
- FEEDBACK: complaint, review
- INVOICE: check_invoice, get_invoice
- NEWSLETTER: newsletter_subscription
- ORDER: cancel_order, change_order, place_order
- PAYMENT: check_payment_methods, payment_issue
- REFUND: check_refund_policy, track_refund
- SHIPPING_ADDRESS: change_shipping_address, set_up_shipping_address
## Entities
The entities covered by the dataset are:
- {{Order Number}}, typically present in:
- Intents: cancel_order, change_order, change_shipping_address, check_invoice, check_refund_policy, complaint, delivery_options, delivery_period, get_invoice, get_refund, place_order, track_order, track_refund
- {{Invoice Number}}, typically present in:
- Intents: check_invoice, get_invoice
- {{Online Order Interaction}}, typically present in:
- Intents: cancel_order, change_order, check_refund_policy, delivery_period, get_refund, review, track_order, track_refund
- {{Online Payment Interaction}}, typically present in:
- Intents: cancel_order, check_payment_methods
- {{Online Navigation Step}}, typically present in:
- Intents: complaint, delivery_options
- {{Online Customer Support Channel}}, typically present in:
- Intents: check_refund_policy, complaint, contact_human_agent, delete_account, delivery_options, edit_account, get_refund, payment_issue, registration_problems, switch_account
- {{Profile}}, typically present in:
- Intent: switch_account
- {{Profile Type}}, typically present in:
- Intent: switch_account
- {{Settings}}, typically present in:
- Intents: cancel_order, change_order, change_shipping_address, check_cancellation_fee, check_invoice, check_payment_methods, contact_human_agent, delete_account, delivery_options, edit_account, get_invoice, newsletter_subscription, payment_issue, place_order, recover_password, registration_problems, set_up_shipping_address, switch_account, track_order, track_refund
- {{Online Company Portal Info}}, typically present in:
- Intents: cancel_order, edit_account
- {{Date}}, typically present in:
- Intents: check_invoice, check_refund_policy, get_refund, track_order, track_refund
- {{Date Range}}, typically present in:
- Intents: check_cancellation_fee, check_invoice, get_invoice
- {{Shipping Cut-off Time}}, typically present in:
- Intent: delivery_options
- {{Delivery City}}, typically present in:
- Intent: delivery_options
- {{Delivery Country}}, typically present in:
- Intents: check_payment_methods, check_refund_policy, delivery_options, review, switch_account
- {{Salutation}}, typically present in:
- Intents: cancel_order, check_payment_methods, check_refund_policy, create_account, delete_account, delivery_options, get_refund, recover_password, review, set_up_shipping_address, switch_account, track_refund
- {{Client First Name}}, typically present in:
- Intents: check_invoice, get_invoice
- {{Client Last Name}}, typically present in:
- Intents: check_invoice, create_account, get_invoice
- {{Customer Support Phone Number}}, typically present in:
- Intents: change_shipping_address, contact_customer_service, contact_human_agent, payment_issue
- {{Customer Support Email}}, typically present in:
- Intents: cancel_order, change_shipping_address, check_invoice, check_refund_policy, complaint, contact_customer_service, contact_human_agent, get_invoice, get_refund, newsletter_subscription, payment_issue, recover_password, registration_problems, review, set_up_shipping_address, switch_account
- {{Live Chat Support}}, typically present in:
- Intents: check_refund_policy, complaint, contact_human_agent, delete_account, delivery_options, edit_account, get_refund, payment_issue, recover_password, registration_problems, review, set_up_shipping_address, switch_account, track_order
- {{Website URL}}, typically present in:
- Intents: check_payment_methods, check_refund_policy, complaint, contact_customer_service, contact_human_agent, create_account, delete_account, delivery_options, get_refund, newsletter_subscription, payment_issue, place_order, recover_password, registration_problems, review, switch_account
- {{Upgrade Account}}, typically present in:
- Intents: create_account, edit_account, switch_account
- {{Account Type}}, typically present in:
- Intents: cancel_order, change_order, change_shipping_address, check_cancellation_fee, check_invoice, check_payment_methods, check_refund_policy, complaint, contact_customer_service, contact_human_agent, create_account, delete_account, delivery_options, delivery_period, edit_account, get_invoice, get_refund, newsletter_subscription, payment_issue, place_order, recover_password, registration_problems, review, set_up_shipping_address, switch_account, track_order, track_refund
- {{Account Category}}, typically present in:
- Intents: cancel_order, change_order, change_shipping_address, check_cancellation_fee, check_invoice, check_payment_methods, check_refund_policy, complaint, contact_customer_service, contact_human_agent, create_account, delete_account, delivery_options, delivery_period, edit_account, get_invoice, get_refund, newsletter_subscription, payment_issue, place_order, recover_password, registration_problems, review, set_up_shipping_address, switch_account, track_order, track_refund
- {{Account Change}}, typically present in:
- Intent: switch_account
- {{Program}}, typically present in:
- Intent: place_order
- {{Refund Amount}}, typically present in:
- Intent: track_refund
- {{Money Amount}}, typically present in:
- Intents: check_refund_policy, complaint, get_refund, track_refund
- {{Store Location}}, typically present in:
- Intents: complaint, delivery_options, place_order
## Language Generation Tags
The dataset contains tags that reflect how language varies across different linguistic phenomena, such as colloquial or offensive language. For example, if an utterance for the intent “cancel_order” contains the “COLLOQUIAL” tag, the utterance expresses an informal language variation such as: “can u cancel my order”.
These tags indicate the type of language variation that the entry expresses. When associated with each entry, they allow Conversational Designers to customize training datasets to different user profiles with different uses of language. Through these tags, many different datasets can be created to make the resulting assistant more accurate and robust. A bot that sells sneakers should be mainly targeted at a younger population that uses more colloquial language, while a classical retail-banking bot should be able to handle more formal or polite language. The dataset also reflects commonly occurring linguistic phenomena of real-life virtual assistants, such as spelling mistakes, run-on words, and punctuation errors.
The dataset contains tagging for all relevant linguistic phenomena that can be used to customize the dataset for different user profiles.
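For example, a colloquial-only subset (e.g. for a sneaker-store bot) could be carved out by filtering on the tag letters; the rows below are made up, and the assumption that `flags` holds a string of tag letters such as "BQZ" should be checked against the actual data:

```python
# Filter a customized training subset by language-variation tags.
# Sample rows are invented; the `flags` format ("BQZ" = basic
# structure + colloquial + typos) is an assumption of this sketch.

rows = [
    {"instruction": "can u cancel my order", "intent": "cancel_order", "flags": "BQZ"},
    {"instruction": "could you please cancel my order?", "intent": "cancel_order", "flags": "BIP"},
    {"instruction": "cancel order", "intent": "cancel_order", "flags": "K"},
]

# Keep only entries carrying the colloquial tag 'Q'.
colloquial = [r for r in rows if "Q" in r["flags"]]
print(len(colloquial))  # 1
```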
### Tags for Lexical variation
M - Morphological variation: inflectional and derivational
“is my SIM card active”, “is my SIM card activated”
L - Semantic variations: synonyms, use of hyphens, compounding…
“what’s my billing date", “what’s my anniversary date”
### Tags for Syntactic structure variation
B - Basic syntactic structure:
“activate my SIM card”, “I need to activate my SIM card”
I - Interrogative structure
“can you activate my SIM card?”, “how do I activate my SIM card?”
C - Coordinated syntactic structure
“I have a new SIM card, what do I need to do to activate it?”
N - Negation
“I do not want this item, where to cancel my order?”
### Tags for language register variations
P - Politeness variation
“could you help me activate my SIM card, please?”
Q - Colloquial variation
“can u activ8 my SIM?”
W - Offensive language
“I want to talk to a f*&%*g agent”
### Tags for stylistic variations
K - Keyword mode
"activate SIM", "new SIM"
E - Use of abbreviations:
“I'm / I am interested in getting a new SIM”
Z - Errors and Typos: spelling issues, wrong punctuation…
“how can i activaet my card”
### Other tags not in use in this Dataset
D - Indirect speech
“ask my agent to activate my SIM card”
G - Regional variations
US English vs UK English: "truck" vs "lorry"
France French vs Canadian French: "tchatter" vs "clavarder"
R - Respect structures - Language-dependent variations
English: "may" vs "can…"
French: "tu" vs "vous..."
Spanish: "tú" vs "usted..."
Y - Code switching
“activer ma SIM card”
---
(c) Bitext Innovations, 2023 |
skadewdl3/recipe-nlg-llama2 | 2023-10-04T07:40:19.000Z | [
"region:us"
] | skadewdl3 | null | null | null | 0 | 163 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: title
dtype: string
- name: ingredients
dtype: string
- name: directions
dtype: string
- name: link
dtype: string
- name: source
dtype: string
- name: NER
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 3317395276.3463464
num_examples: 2008027
- name: test
num_bytes: 368600943.6536536
num_examples: 223115
download_size: 168971675
dataset_size: 3685996220.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "recipe-nlg-llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vblagoje/lfqa_support_docs | 2021-12-30T10:28:31.000Z | [
"region:us"
] | vblagoje | null | null | null | 6 | 162 | Support documents for building https://huggingface.co/vblagoje/bart_lfqa model
|
c-s-ale/alpaca-gpt4-data-zh | 2023-05-03T17:56:55.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-4.0",
"gpt",
"alpaca",
"fine-tune",
"instruct-tune",
"instruction",
"arxiv:2304.03277",
"region:us"
] | c-s-ale | null | null | null | 20 | 162 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 32150579
num_examples: 48818
download_size: 35100559
dataset_size: 32150579
license: cc-by-4.0
language:
- zh
pretty_name: Instruction Tuning with GPT-4
size_categories:
- 10K<n<100K
task_categories:
- text-generation
tags:
- gpt
- alpaca
- fine-tune
- instruct-tune
- instruction
---
# Dataset Description
- **Project Page:** https://instruction-tuning-with-gpt-4.github.io
- **Repo:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
# Dataset Card for "alpaca-gpt4-data-zh"
All of the work is done by [this team](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM).
# Usage and License Notices
The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
# English Dataset
[Found here](https://huggingface.co/datasets/c-s-ale/alpaca-gpt4-data)
# Citation
```
@article{peng2023gpt4llm,
title={Instruction Tuning with GPT-4},
author={Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
``` |
sirius0707/imagenet_10 | 2023-07-23T02:29:00.000Z | [
"task_categories:image-classification",
"language:en",
"region:us"
] | sirius0707 | null | null | null | 0 | 162 | ---
task_categories:
- image-classification
language:
- en
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': goldfish
'1': scuba diver
'2': seashore
'3': green lizard
'4': ski
'5': flamingo
'6': red wine
'7': volcano
'8': jack-o'-lantern
'9': cowboy boot
---
|
tmu_gfm_dataset | 2022-11-03T16:30:48.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"grammatical-error-correction",
"region:us"
] | null | A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in Yoshimura et al. (2020). | @inproceedings{yoshimura-etal-2020-reference,
title = "{SOME}: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction",
author = "Yoshimura, Ryoma and
Kaneko, Masahiro and
Kajiwara, Tomoyuki and
Komachi, Mamoru",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.573",
pages = "6516--6522",
abstract = "We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC). Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluations of the system outputs because no dataset of the system output exists with manual evaluation. This study manually evaluates outputs of GEC systems to optimize the metrics. Experimental results show that the proposed metric improves correlation with the manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available.",
} | null | 2 | 161 | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: null
pretty_name: TMU-GFM-Dataset
tags:
- grammatical-error-correction
dataset_info:
features:
- name: source
dtype: string
- name: output
dtype: string
- name: grammer
sequence: int32
- name: fluency
sequence: int32
- name: meaning
sequence: int32
- name: system
dtype: string
- name: ave_g
dtype: float32
- name: ave_f
dtype: float32
- name: ave_m
dtype: float32
splits:
- name: train
num_bytes: 1446144
num_examples: 4221
download_size: 1270197
dataset_size: 1446144
---
# Dataset Card for TMU-GFM-Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [N/A]
- **Repository:** https://github.com/tmu-nlp/TMU-GFM-Dataset
- **Paper:** [SOME: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction](https://www.aclweb.org/anthology/2020.coling-main.573.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** Check the paper.
### Dataset Summary
Authors collected manual evaluations for the grammaticality, fluency, and meaning preservation of the system outputs of 1,381 sentences from CoNLL 2013.
To collect the manual evaluations for various system outputs, each source sentence was corrected by the following five typical systems: statistical machine translation (SMT) (Grundkiewicz and Junczys-Dowmunt, 2018), recurrent neural network (RNN) (Luong et al., 2015), convolutional neural network (CNN) (Chollampatt and Ng, 2018), self-attention network (SAN) (Vaswani et al., 2017), and SAN with copy mechanism (SAN+Copy) (Zhao et al., 2019).
Manual evaluations for grammaticality, fluency, and meaning preservation were assigned to a total of 4,223 sentences.
### Supported Tasks and Leaderboards
Grammatical Error Correction
### Languages
English
## Dataset Structure
### Data Instances
An example from the TMU-GFM-Dataset looks as follows:
```
{'ave_f': 3.4000000953674316,
'ave_g': 3.4000000953674316,
'ave_m': 3.5999999046325684,
'fluency': [3, 4, 3, 4, 3],
'grammer': [3, 4, 3, 4, 3],
'meaning': [3, 4, 4, 4, 3],
'output': 'After all, there will be an endless battle between the technology and human mentality.',
'source': 'Afterall there will be an endless battle between the technology and human mentality.',
'system': 'lstm,cnn'}
```
### Data Fields
There are 9 columns in the TMU-GFM-Dataset.
- source: source sentence.
- output: system output sentence.
- grammer: grammaticality annotations by 5 annotators (the field name preserves the dataset's original spelling).
- fluency: fluency annotations by 5 annotators.
- meaning: meaning-preservation annotations by 5 annotators.
- system: which system the output sentence is from.
- ave_g: average grammaticality score.
- ave_f: average fluency score.
- ave_m: average meaning-preservation score.
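The average columns can be recomputed directly from the per-annotator lists. Below is a minimal sketch (not taken from the dataset's own loading code) that uses the literal field names, including the original `grammer` spelling:

```python
def average_scores(example):
    """Recompute the ave_* columns from the 5-annotator score lists."""
    return {
        "ave_g": sum(example["grammer"]) / len(example["grammer"]),
        "ave_f": sum(example["fluency"]) / len(example["fluency"]),
        "ave_m": sum(example["meaning"]) / len(example["meaning"]),
    }

# Using the data instance shown in this card:
example = {
    "grammer": [3, 4, 3, 4, 3],
    "fluency": [3, 4, 3, 4, 3],
    "meaning": [3, 4, 4, 4, 3],
}
print(average_scores(example))  # {'ave_g': 3.4, 'ave_f': 3.4, 'ave_m': 3.6}
```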
### Data Splits
Authors divided the dataset into train/dev/test with 3,376/422/423 sentences and used it for fine-tuning BERT in their paper.
## Dataset Creation
### Curation Rationale
The authors proposed a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC).
They said that previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluation of the system output because there is no dataset of system output with manual evaluation.
To achieve a better correlation with manual evaluation, they created a dataset to optimize each sub-metric to the manual evaluation of GEC systems. Their annotators evaluated the output of five typical GEC systems.
### Source Data
#### Initial Data Collection and Normalization
Authors collected manual evaluations for the grammaticality, fluency, and meaning preservation of the system outputs of 1,381 sentences from CoNLL 2013.
To collect the manual evaluations for various system outputs, each source sentence was corrected by the following five typical systems: statistical machine translation (SMT) (Grundkiewicz and Junczys-Dowmunt, 2018), recurrent neural network (RNN) (Luong et al., 2015), convolutional neural network (CNN) (Chollampatt and Ng, 2018), self-attention network (SAN) (Vaswani et al., 2017), and SAN with copy mechanism (SAN+Copy) (Zhao et al., 2019).
#### Who are the source language producers?
machine-generated
### Annotations
#### Annotation process
By excluding duplicate corrected sentences, manual evaluations for grammaticality, fluency, and meaning preservation were assigned to a total of 4,223 sentences, as follows:
- Grammaticality: Annotators evaluated the grammatical correctness of the system output. The authors followed the five-point scale evaluation criteria (4: Perfect, 3: Comprehensible, 2: Somewhat comprehensible, 1: Incomprehensible, and 0: Other) proposed by Heilman et al. (2014).
- Fluency: Annotators evaluated how natural the sentence sounds for native speakers. The authors followed the criteria (4: Extremely natural, 3: Somewhat natural, 2: Somewhat unnatural, and 1: Extremely unnatural) proposed by Lau et al. (2015).
- Meaning preservation: Annotators evaluated the extent to which the meaning of source sentences is preserved in system output. The authors followed the criteria (4: Identical, 3: Minor differences, 2: Moderate differences, 1: Substantially different, and 0: Other) proposed by Xu et al. (2016).
Finally, the authors created a dataset with manual evaluations for a total of 4,221 sentences, excluding sentences in which three or more annotators answered “0: Other.”
#### Who are the annotators?
Five native English annotators recruited via Amazon Mechanical Turk.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@inproceedings{yoshimura-etal-2020-reference,
title = "{SOME}: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction",
author = "Yoshimura, Ryoma and
Kaneko, Masahiro and
Kajiwara, Tomoyuki and
Komachi, Mamoru",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.573",
pages = "6516--6522",
abstract = "We propose a reference-less metric trained on manual evaluations of system outputs for grammatical error correction (GEC). Previous studies have shown that reference-less metrics are promising; however, existing metrics are not optimized for manual evaluations of the system outputs because no dataset of the system output exists with manual evaluation. This study manually evaluates outputs of GEC systems to optimize the metrics. Experimental results show that the proposed metric improves correlation with the manual evaluation in both system- and sentence-level meta-evaluation. Our dataset and metric will be made publicly available.",
}
### Contributions
Thanks to [@forest1988](https://github.com/forest1988) for adding this dataset. |
ccdv/arxiv-classification | 2022-10-22T09:23:50.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"size_categories:10K<n<100K",
"language:en",
"long context",
"region:us"
] | ccdv | Arxiv Classification Dataset: a classification of Arxiv Papers (11 classes).
It contains 11 slightly unbalanced classes, 33k Arxiv Papers divided into 3 splits: train (23k), val (5k) and test (5k).
Copied from "Long Document Classification From Local Word Glimpses via Recurrent Attention Learning" by Jun He, Liqun Wang, Liu Liu, Jiao Feng and Hao Wu
See: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8675939
See: https://github.com/LiqunW/Long-document-dataset | null | null | 9 | 161 | ---
language: en
task_categories:
- text-classification
tags:
- long context
task_ids:
- multi-class-classification
- topic-classification
size_categories: 10K<n<100K
---
**Arxiv Classification: a classification of Arxiv Papers (11 classes).**
This dataset is intended for long context classification (all documents have > 4k tokens). \
Copied from "Long Document Classification From Local Word Glimpses via Recurrent Attention Learning"
```
@ARTICLE{8675939,
author={He, Jun and Wang, Liqun and Liu, Liu and Feng, Jiao and Wu, Hao},
journal={IEEE Access},
title={Long Document Classification From Local Word Glimpses via Recurrent Attention Learning},
year={2019},
volume={7},
number={},
pages={40707-40718},
doi={10.1109/ACCESS.2019.2907992}
}
```
* See: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8675939
* See: https://github.com/LiqunW/Long-document-dataset
It contains 11 slightly unbalanced classes, 33k Arxiv Papers divided into 3 splits: train (28k), val (2.5k) and test (2.5k).
2 configs:
* default
* no_ref, removes references to the class inside the document (eg: [cs.LG] -> [])
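The `no_ref` transformation can be sketched as follows. The regular expression is an assumption inferred from the `[cs.LG] -> []` example given above; the actual loader's implementation may differ:

```python
import re

# Matches arXiv category tags such as [cs.LG], [stat.ML], [math.OC]
# (assumed pattern, based on the example in this card)
ARXIV_TAG = re.compile(r"\[(?:[a-z\-]+)\.[A-Z]{2}\]")

def remove_class_references(text: str) -> str:
    """Replace in-document class references with empty brackets."""
    return ARXIV_TAG.sub("[]", text)

print(remove_class_references("We study this problem [cs.LG] in depth."))
# We study this problem [] in depth.
```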
Compatible with [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) script:
```
export MODEL_NAME=roberta-base
export MAX_SEQ_LENGTH=512
python run_glue.py \
--model_name_or_path $MODEL_NAME \
--dataset_name ccdv/arxiv-classification \
--do_train \
--do_eval \
--max_seq_length $MAX_SEQ_LENGTH \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 4 \
--learning_rate 2e-5 \
--num_train_epochs 1 \
--max_eval_samples 500 \
--output_dir tmp/arxiv
``` |
rahular/itihasa | 2022-10-24T18:06:01.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:sa",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"region:us"
] | rahular | A Sanskrit-English machine translation dataset. | @inproceedings{aralikatte-etal-2021-itihasa,
title = "Itihasa: A large-scale corpus for {S}anskrit to {E}nglish translation",
author = "Aralikatte, Rahul and
de Lhoneux, Miryam and
Kunchukuttan, Anoop and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wat-1.22",
pages = "191--197",
abstract = "This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.",
} | null | 3 | 161 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- sa
- en
license:
- apache-2.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: Itihasa
metrics:
- bleu
- sacrebleu
- rouge
- ter
- chrF
tags:
- conditional-text-generation
---
# Itihāsa
Itihāsa is a Sanskrit-English translation corpus containing 93,000 Sanskrit shlokas and their English translations extracted from M. N. Dutt's seminal works on The Rāmāyana and The Mahābhārata. The paper which introduced this dataset can be found [here](https://aclanthology.org/2021.wat-1.22/).
This repository contains the randomized train, development, and test sets. The original extracted data can be found [here](https://github.com/rahular/itihasa/tree/gh-pages/res) in JSON format. If you just want to browse the data, you can go [here](http://rahular.com/itihasa/).
## Usage
```
>> from datasets import load_dataset
>> dataset = load_dataset("rahular/itihasa")
>> dataset
DatasetDict({
train: Dataset({
features: ['translation'],
num_rows: 75162
})
validation: Dataset({
features: ['translation'],
num_rows: 6149
})
test: Dataset({
features: ['translation'],
num_rows: 11722
})
})
>> dataset['train'][0]
{'translation': {'en': 'The ascetic Vālmīki asked Nārada, the best of sages and foremost of those conversant with words, ever engaged in austerities and Vedic studies.',
'sn': 'ॐ तपः स्वाध्यायनिरतं तपस्वी वाग्विदां वरम्। नारदं परिपप्रच्छ वाल्मीकिर्मुनिपुङ्गवम्॥'}}
```
## Citation
If you found this dataset to be useful, please consider citing the paper as follows:
```
@inproceedings{aralikatte-etal-2021-itihasa,
title = "Itihasa: A large-scale corpus for {S}anskrit to {E}nglish translation",
author = "Aralikatte, Rahul and
de Lhoneux, Miryam and
Kunchukuttan, Anoop and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wat-1.22",
pages = "191--197",
abstract = "This work introduces Itihasa, a large-scale translation dataset containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata. We first describe the motivation behind the curation of such a dataset and follow up with empirical analysis to bring out its nuances. We then benchmark the performance of standard translation models on this corpus and show that even state-of-the-art transformer architectures perform poorly, emphasizing the complexity of the dataset.",
}
``` |
jonaskoenig/Questions-vs-Statements-Classification | 2022-07-11T15:36:35.000Z | [
"region:us"
] | jonaskoenig | null | null | null | 2 | 161 | [Needs More Information]
# Dataset Card for Questions-vs-Statements-Classification
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [Kaggle](https://www.kaggle.com/datasets/shahrukhkhan/questions-vs-statementsclassificationdataset)
- **Point of Contact:** [Shahrukh Khan](https://www.kaggle.com/shahrukhkhan)
### Dataset Summary
A dataset containing statements and questions with their corresponding labels.
### Supported Tasks and Leaderboards
multi-class-classification
### Languages
en
## Dataset Structure
### Data Splits
The dataset provides train, test, and validation splits.
## Dataset Creation
### Curation Rationale
The goal of this project is to classify sentences, based on type:
- Statement (Declarative Sentence)
- Question (Interrogative Sentence)
### Source Data
[Kaggle](https://www.kaggle.com/datasets/shahrukhkhan/questions-vs-statementsclassificationdataset)
#### Initial Data Collection and Normalization
The dataset is created by parsing out the SQuAD dataset and combining it with the SPAADIA dataset.
### Other Known Limitations
Questions in this dataset are only one sentence long, while statements are a single sentence or more. They are classified correctly but do not include the sentences preceding questions.
## Additional Information
### Dataset Curators
[SHAHRUKH KHAN](https://www.kaggle.com/shahrukhkhan)
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
|
joelniklaus/legal_case_document_summarization | 2023-02-02T23:52:54.000Z | [
"region:us"
] | joelniklaus | null | null | null | 7 | 161 | # Dataset Card for LegalCaseDocumentSummarization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/Law-AI/summarization)
- **Repository:** [Zenodo](https://zenodo.org/record/7152317#.Y69PkeKZODW)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
|
RicardoRei/wmt-mqm-human-evaluation | 2023-02-16T18:29:11.000Z | [
"size_categories:100K<n<1M",
"language:en",
"language:de",
"language:ru",
"language:zh",
"license:apache-2.0",
"mt-evaluation",
"WMT",
"region:us"
] | RicardoRei | null | null | null | 0 | 161 | ---
license: apache-2.0
language:
- en
- de
- ru
- zh
tags:
- mt-evaluation
- WMT
size_categories:
- 100K<n<1M
---
# Dataset Summary
This dataset contains all MQM human annotations from previous [WMT Metrics shared tasks](https://wmt-metrics-task.github.io/) and the MQM annotations from [Experts, Errors, and Context](https://aclanthology.org/2021.tacl-1.87/).
The data is organised into 9 columns:
- lp: language pair
- src: input text
- mt: translation
- ref: reference translation
- score: MQM score
- system: MT Engine that produced the translation
- annotators: number of annotators
- domain: domain of the input text (e.g. news)
- year: collection year
You can also find the original data [here](https://github.com/google/wmt-mqm-human-evaluation). We recommend using the original repo if you are interested in annotation spans and not just the final score.
## Python usage:
```python
from datasets import load_dataset
dataset = load_dataset("RicardoRei/wmt-mqm-human-evaluation", split="train")
```
There is no standard train/test split for this dataset but you can easily split it according to year, language pair or domain. E.g. :
```python
# split by year
data = dataset.filter(lambda example: example["year"] == 2022)
# split by LP
data = dataset.filter(lambda example: example["lp"] == "en-de")
# split by domain
data = dataset.filter(lambda example: example["domain"] == "ted")
```
## Citation Information
If you use this data please cite the following works:
- [Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation](https://aclanthology.org/2021.tacl-1.87/)
- [Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain](https://aclanthology.org/2021.wmt-1.73/)
- [Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust](https://aclanthology.org/2022.wmt-1.2/) |
RIW/small-coco-wm_50 | 2023-03-11T23:13:04.000Z | [
"region:us"
] | RIW | null | null | null | 0 | 161 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: 'null'
- name: width
dtype: int64
- name: height
dtype: int64
- name: original_width
dtype: int64
- name: original_height
dtype: int64
- name: exif
dtype: string
- name: sha256
dtype: string
splits:
- name: train
num_bytes: 1884418582.296
num_examples: 18982
- name: validation
num_bytes: 1827717279.35
num_examples: 18935
download_size: 1641694126
dataset_size: 3712135861.646
---
# Dataset Card for "small-coco-wm_50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ruanchaves/porsimplessent | 2023-04-12T15:57:26.000Z | [
"size_categories:1K<n<10K",
"region:us"
] | ruanchaves | null | 1 | 161 | ---
size_categories:
- 1K<n<10K
---
# Dataset Card for PorSimplesSent
## Dataset Description
- **Repository:** [sidleal/porsimplessent](https://github.com/sidleal/porsimplessent)
- **Paper:** [A Nontrivial Sentence Corpus for the Task of Sentence Readability Assessment in Portuguese](https://aclanthology.org/C18-1034/)
- **Point of Contact:** [Sidney Evaldo Leal](sidleal@gmail.com)
### Dataset Summary
PorSimplesSent is a Portuguese corpus of aligned sentence pairs and triplets created for the purpose of investigating sentence readability
assessment in Portuguese. The dataset consists of 4,968 pairs and 1,141 triplets of sentences, combining the three levels of the PorSimples
corpus: Original, Natural, and Strong. The dataset can be used for tasks such as sentence-pair classification, sentence retrieval, and readability assessment.
### Supported Tasks and Leaderboards
The dataset supports the following tasks:
- `sentence-pair-classification`: The dataset can be used to train a model for sentence-pair classification, which consists in determining whether one sentence is simpler than the other or if both sentences are equally simple. Success on this task is typically measured by achieving a high accuracy, f1, precision, and recall.
### Languages
The dataset consists of sentence pairs in Portuguese.
## Dataset Structure
### Data Instances
```json
{
'sentence1': '-- Parece que o assassinato de civis iraquianos transformou-se em um fenômeno cotidiano e banal -- disse o presidente da Associação Iraquiana dos Direitos Humanos, Muayed al-Anbaki.',
'sentence2': '-- Parece que o assassinato de civis iraquianos transformou-se em um fenômeno comum e banal -- disse o presidente da Associação Iraquiana dos Direitos Humanos, Muayed al-Anbaki.',
'label': 2,
'production_id': 3,
'level': 'ORI->NAT',
'changed': 'S',
'split': 'N',
'sentence_text_from': '-- Parece que o assassinato de civis iraquianos transformou-se em um fenômeno cotidiano e banal -- disse o presidente da Associação Iraquiana dos Direitos Humanos, Muayed al-Anbaki.',
'sentence_text_to': '-- Parece que o assassinato de civis iraquianos transformou-se em um fenômeno comum e banal -- disse o presidente da Associação Iraquiana dos Direitos Humanos, Muayed al-Anbaki.'
}
```
### Data Fields
The dataset has the following fields:
* `sentence1`: the first sentence in the sentence pair (string).
* `sentence2`: the second sentence in the sentence pair (string).
* `label`: an integer indicating the relationship between the two sentences in the pair. The possible values are 0, 1, and 2, where 0 means that sentence1 is simpler than sentence2, 1 means that both sentences have the same level of complexity, and 2 means that sentence2 is simpler than sentence1 (int).
* `production_id`: an integer identifier for each sentence pair (int).
* `level`: a string indicating the level of simplification between the two sentences. The possible values are:
* 'ORI->NAT' (original to natural)
* 'NAT->STR' (natural to strong)
* 'ORI->STR' (original to strong) (string).
* `changed`: a string indicating whether the sentence was changed during the simplification process. The possible values are:
* 'S' (changed)
* 'N' (not changed) (string).
* `split`: a string indicating whether the sentence suffered a split in this simplification level. The possible values are:
* 'S' (split)
* 'N' (not split) (string).
* `sentence_text_from`: the raw text of the source sentence (string).
* `sentence_text_to`: the raw text of the target sentence (string).
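As a small illustration, the `label` semantics described above can be encoded as a lookup (the function name and wording are ours, not from the dataset):

```python
def describe_label(label: int) -> str:
    """Map the integer label to its documented meaning."""
    meanings = {
        0: "sentence1 is simpler than sentence2",
        1: "both sentences are equally simple",
        2: "sentence2 is simpler than sentence1",
    }
    return meanings[label]

# The data instance shown in this card has label 2:
print(describe_label(2))  # sentence2 is simpler than sentence1
```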
### Data Splits
The dataset is split into three subsets: train, validation, and test. The sizes of each split are as follows:
| | Train | Validation | Test |
|--------------------|--------|------------|-------|
| Number of examples | 4,976 | 1,446 | 1,697 |
The authors did not provide standard splits. We created the splits ourselves while ensuring that sentence pairs from the same document did not appear in multiple splits.
## Additional Information
### Dataset Curators
The PorSimplesSent dataset was created by Sidney Evaldo Leal, with guidance from his advisors Dra. Sandra Maria Aluísio and Dra. Magali Sanches Duran, during his master's degree at ICMC-USP. The Interinstitutional Center for Computational Linguistics - NILC (Núcleo Interinstitucional de Linguística Computacional) also contributed to the creation of the dataset.
### Licensing Information
The PorSimplesSent dataset is released under the CC BY 4.0 license. The license terms can be found at https://creativecommons.org/licenses/by/4.0/.
### Citation Information
If you use this dataset in your work, please cite the following publication:
```bibtex
@inproceedings{leal2018pss,
author = {Sidney Evaldo Leal and Magali Sanches Duran and Sandra Maria Aluísio},
title = {A Nontrivial Sentence Corpus for the Task of Sentence Readability Assessment in Portuguese},
booktitle = {Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018)},
year = {2018},
pages = {401-413},
month = {August},
date = {20-26},
address = {Santa Fe, New Mexico, USA},
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | ||
IlyaGusev/oasst1_ru_main_branch | 2023-09-15T20:58:01.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ru",
"license:apache-2.0",
"region:us"
] | IlyaGusev | null | null | null | 3 | 161 | ---
language:
- ru
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- conversational
- text-generation
dataset_info:
features:
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2040115
num_examples: 614
download_size: 2105736
dataset_size: 2040115
---
* Based on [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
* Only Russian message trees, only main branches.
* Script: [get_oasst_ru.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/get_oasst_ru.py)
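Each record's `messages` feature is a sequence of `{role, content}` structs, which the `datasets` library returns row-wise as a dict of parallel lists. A minimal sketch for flattening one conversation into plain text:

```python
def render_dialogue(messages):
    """Flatten one conversation into plain text. With this card's feature
    schema, `messages` arrives as a dict of parallel lists:
    {"role": [...], "content": [...]}."""
    return "\n".join(
        f"{role}: {content}"
        for role, content in zip(messages["role"], messages["content"])
    )

# Usage (requires `pip install datasets` and network access):
# from datasets import load_dataset
# ds = load_dataset("IlyaGusev/oasst1_ru_main_branch", split="train")
# print(render_dialogue(ds[0]["messages"]))
```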
|
ArtifactAI/arxiv-cs-ml-instruct-tune-50k | 2023-06-21T13:45:31.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"doi:10.57967/hf/0800",
"region:us"
] | ArtifactAI | null | null | null | 3 | 161 | ---
annotations_creators:
- no-annotation
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: arxiv-cs-ml-instruct-tune-50k
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: arxiv-cs-ml-instruct-tune-50k
---
# Dataset Card for "arxiv-cs-ml-instruct-tune-50k"
### Dataset Summary
The "ArtifactAI/arxiv-cs-ml-instruct-tune-50k" dataset consists of question-answer pairs derived from ArXiv abstracts from the following categories:
"cs.CL", "cs.AI", "cs.CV", "cs.HC", "cs.IR", "cs.RO", "cs.NE", "stat.ML". Questions are generated using the [t5-base model](https://huggingface.co/t5-base), while the answers are generated using the [GPT-3.5-turbo model](https://openai.com/chatgpt).
### Languages
English
## Dataset Structure
### Data Instances
#### train
- **Size of downloaded dataset files:** 38.4 MB
An example of 'train' looks as follows.
```
{
"question": "What is multitask learning?",
"answer": "Multitask learning is a technique in machine learning that allows a single model to simultaneously learn multiple related tasks.
Instead of training separate models for each task, multitask learning leverages shared patterns across tasks to improve overall performance.
This can lead to faster training, better accuracy, and more efficient use of computational resources.
For example, a model might be trained to simultaneously predict the sentiment of customer reviews and categorize them by topic.
Both tasks require analyzing natural language, so the shared knowledge can help improve performance on both tasks."
}
```
### Data Fields
The data fields present in the dataset are as follows:
- question: a string feature representing the question.
- answer: a string feature representing the answer.
#### plain_text
- `text`: a `string` feature.
### Data Splits
train: 50,063 question-answer pairs
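A pair like the one shown above can be flattened into a single training string for instruction tuning. The template wording below is our own choice, not something prescribed by the dataset:

```python
PROMPT_TEMPLATE = (
    "### Instruction:\n{question}\n\n"
    "### Response:\n{answer}"
)

def to_training_text(example):
    """Merge one question-answer record into a single prompt string."""
    return {"text": PROMPT_TEMPLATE.format(**example)}

# Usage (requires `pip install datasets` and network access):
# from datasets import load_dataset
# ds = load_dataset("ArtifactAI/arxiv-cs-ml-instruct-tune-50k", split="train")
# ds = ds.map(to_training_text)
```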
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
Question-answer pairs derived from [ArXiv](https://arxiv.org/) abstracts.
#### Initial Data Collection and Normalization
The "ArtifactAI/arxiv-cs-ml-instruct-tune-50k" dataset consists of question-answer pairs derived from ArXiv abstracts. Questions are generated from ArXiv papers in the following categories:
- cs.CL
- cs.AI
- cs.CV
- cs.HC
- cs.IR
- cs.RO
- cs.NE
- stat.ML
Questions are generated using the [t5-base model](https://huggingface.co/t5-base), while the answers are generated using the [GPT-3.5-turbo model](https://openai.com/chatgpt).
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
None
#### Notice policy
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
Clearly identify the copyrighted work claimed to be infringed.
Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact us at the following email address: matt at artifactai.com and datasets at huggingface.co
#### Take down policy
The original authors will comply with legitimate requests by removing the affected sources from the next release of the corpus.
Hugging Face will also update this repository accordingly.
### Citation Information
```
@misc{arxiv-cs-ml-instruct-tune-50k,
title={arxiv-cs-ml-instruct-tune-50k},
author={Matthew Kenney},
year={2023}
}
```
|
loremipsum3658/and | 2023-08-24T21:29:56.000Z | [
"region:us"
] | loremipsum3658 | null | null | null | 0 | 161 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: nup
dtype: string
- name: data
dtype: string
- name: titulo
dtype: string
- name: andamento
dtype: string
- name: classificacao_andamento
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 13722868
num_examples: 19924
- name: test
num_bytes: 3071574
num_examples: 4270
- name: validation
num_bytes: 2943882
num_examples: 4269
download_size: 10133342
dataset_size: 19738324
---
# Dataset Card for "and"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mattymchen/lrs3-test | 2023-09-05T10:37:16.000Z | [
"region:us"
] | mattymchen | null | null | null | 0 | 161 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: idx
dtype: int64
- name: audio
sequence: int16
- name: video
sequence:
sequence:
sequence: uint8
- name: label
dtype: string
splits:
- name: train
num_bytes: 824374107
num_examples: 1321
download_size: 677311360
dataset_size: 824374107
---
# Dataset Card for "lrs3-test"
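Per the feature schema in the metadata above, `audio` is a flat sequence of 16-bit PCM samples. A minimal sketch for rescaling them to floats in `[-1.0, 1.0]` (the sample rate is not documented here and would need to be confirmed separately):

```python
def pcm16_to_float(samples):
    """Rescale raw int16 PCM samples (the `audio` field) to floats
    in [-1.0, 1.0] by dividing by 2**15."""
    return [s / 32768.0 for s in samples]
```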
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/023acaec | 2023-10-03T22:23:40.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 161 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 233
num_examples: 10
download_size: 1392
dataset_size: 233
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "023acaec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nielsr/rvlcdip-demo | 2022-03-08T12:11:13.000Z | [
"region:us"
] | nielsr | null | null | null | 0 | 160 | Entry not found |
Francesco/peanuts-sd4kf | 2023-03-30T09:30:58.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 160 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': peanuts
'1': with mold
'2': without mold
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: peanuts-sd4kf
tags:
- rf100
---
# Dataset Card for peanuts-sd4kf
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/peanuts-sd4kf
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
peanuts-sd4kf
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
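Since the boxes use the COCO `[x_min, y_min, width, height]` convention, a small sketch for converting them to corner coordinates (e.g. for plotting):

```python
def coco_to_corners(bbox):
    """Convert a COCO-format box [x_min, y_min, width, height]
    into corner format [x_min, y_min, x_max, y_max]."""
    x_min, y_min, width, height = bbox
    return [x_min, y_min, x_min + width, y_min + height]

# Usage with the data instance shown above:
# for box in dataset[0]["objects"]["bbox"]:
#     print(coco_to_corners(box))
```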
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/peanuts-sd4kf
### Citation Information
```
@misc{ peanuts-sd4kf,
title = { peanuts sd4kf Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/peanuts-sd4kf } },
url = { https://universe.roboflow.com/object-detection/peanuts-sd4kf },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
CM/codexglue_code2text_go | 2023-04-22T01:51:07.000Z | [
"region:us"
] | CM | null | null | null | 0 | 160 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 342243143
num_examples: 167288
- name: validation
num_bytes: 13721860
num_examples: 7325
- name: test
num_bytes: 16328406
num_examples: 8122
download_size: 121340474
dataset_size: 372293409
---
# Dataset Card for "codexglue_code2text_go"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
teknium/GPTeacher-General-Instruct | 2023-04-29T23:27:46.000Z | [
"license:mit",
"region:us"
] | teknium | null | null | null | 28 | 160 | ---
license: mit
---
The GPTeacher General-Instruct dataset is a GPT-4-generated self-instruct dataset.
There are multiple versions, each with a different degree of similarity reduction.
The dedupe-only version contains 18,194 entries; the stronger the similarity reduction, the fewer entries remain.
The format is identical to Alpaca's, with a variable mix of Instruction/Input/Response and Instruction/NullInput/Response fields.
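The Alpaca-style layout can be rendered like this. The exact section headers below follow the common Alpaca convention and are not something this card specifies:

```python
def format_alpaca(instruction, input_text=""):
    """Render a record in the Alpaca prompt style, handling both the
    Instruction/Input/Response and Instruction/NullInput/Response cases."""
    if input_text:
        return (f"### Instruction:\n{instruction}\n\n"
                f"### Input:\n{input_text}\n\n"
                f"### Response:\n")
    return f"### Instruction:\n{instruction}\n\n### Response:\n"
```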
Learn more on github here:
https://github.com/teknium1/GPTeacher |
DISCOX/DISCO-10K-random | 2023-06-20T14:25:17.000Z | [
"license:cc-by-4.0",
"region:us"
] | DISCOX | null | null | null | 1 | 160 | ---
license: cc-by-4.0
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: video_duration_youtube_sec
dtype: float64
- name: preview_url_spotify
dtype: string
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: search_query_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: primary_artist_name_spotify
dtype: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 47861223.0
num_examples: 10000
download_size: 57725964
dataset_size: 47861223.0
---
### Getting Started
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-10K-random")
```
The dataset contains 10,000 random samples from the DISCO-10M dataset found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
## Dataset Structure
The dataset contains the following features:
```json
{
'video_url_youtube',
'video_title_youtube',
'track_name_spotify',
'video_duration_youtube_sec',
'preview_url_spotify',
'video_view_count_youtube',
'video_thumbnail_url_youtube',
'search_query_youtube',
'video_description_youtube',
'track_id_spotify',
'album_id_spotify',
'artist_id_spotify',
'track_duration_spotify_ms',
'primary_artist_name_spotify',
'track_release_date_spotify',
'explicit_content_spotify',
'similarity_duration',
'similarity_query_video_title',
'similarity_query_description',
'similarity_audio',
'audio_embedding_spotify',
'audio_embedding_youtube',
}
```
More details about the dataset can be found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
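To compare the Spotify and YouTube audio embeddings yourself, a plain cosine similarity works on the two float sequences. This is a standard-library sketch; whether it reproduces the released `similarity_audio` score exactly is an assumption we have not verified:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# row = ds["train"][0]
# score = cosine_similarity(row["audio_embedding_spotify"],
#                           row["audio_embedding_youtube"])
```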
<!--
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
--> |
FudanSELab/ClassEval | 2023-09-04T06:35:53.000Z | [
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"code-generation",
"arxiv:2308.01861",
"region:us"
] | FudanSELab | FudanSELab ClassEval | @misc{du2023classeval,
title={ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation},
author={Xueying Du and Mingwei Liu and Kaixin Wang and Hanlin Wang and Junwei Liu and Yixuan Chen and Jiayi Feng and Chaofeng Sha and Xin Peng and Yiling Lou},
year={2023},
eprint={2308.01861},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 160 | ---
license: mit
language:
- en
size_categories:
- n<1K
tags:
- code-generation
task_categories:
- text2text-generation
pretty_name: ClassEval
configs:
- config_name: default
data_files:
- split: test
path: "ClassEval_data.json"
---
# Dataset Card for FudanSELab ClassEval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/FudanSELab/ClassEval)
- **Paper:** [ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation](https://arxiv.org/abs/2308.01861)
### Dataset Summary
We manually built ClassEval, a benchmark of 100 class-level Python coding tasks, consisting of 100 classes and 412 methods, with an average of 33.1 test cases per class.
The 100 class-level tasks maintain diversity by spanning a wide spectrum of topics, including Management Systems, Data Formatting, Mathematical Operations, Game Development, File Handling, Database Operations, and Natural Language Processing.
The 412 methods have been constructed with diverse dependencies, including (i) Library Dependency, where a method relies on specific external libraries; (ii) Field Dependency, where a method depends on class instance variables (fields); (iii) Method Dependency, where a method depends on other methods within the same class; and (iv) Standalone, where a method operates independently of fields, other methods, and external libraries.
### Languages
The programming language is Python. The natural language used in the comments and docstrings is English.
## Dataset Structure
```python
from datasets import load_dataset
dataset = load_dataset("FudanSELab/ClassEval")
DatasetDict({
test: Dataset({
features: ['task_id', 'skeleton', 'test', 'solution_code', 'import_statement', 'class_description', 'methods_info',
'class_name', 'test_classes', 'class_constructor', 'fields'],
num_rows: 100
})
})
```
### Data Fields
The specific data fields for each task are delineated as follows:
* task_id: the unique identifier for each task.
* skeleton: the class skeleton, including all input descriptions in our class-level coding tasks.
* test: all test cases for the whole class.
* solution_code: the ground-truth class-level code for each task.
More fine-grained class-level information from the class skeleton, including:
* import_statement: the import statements for each task.
* class_name: the name of the class.
* class_description: a concise description of the purpose and functionality of the class.
* class_constructor: the whole constructor of the class.
* fields: the fields defined in the class_constructor.
Detailed information for each method in the "methods_info" field, including:
* method_name: the method signature.
* method_input: the method contract design, including all input descriptions in the method.
* test_code: the test cases for the method.
* solution_code: the ground-truth method-level code.
* dependencies: the dependency information of the method.
### Data Splits
The dataset only consists of a test split with 100 samples.
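The fine-grained fields above can be assembled into a class-level generation prompt. This is only a sketch: it assumes `import_statement`, `class_description`, and `skeleton` are plain strings, as the field descriptions suggest, and the prompt layout is our own.

```python
def build_generation_prompt(task):
    """Assemble a class-level code-generation prompt from one ClassEval
    record, combining imports, the class description, and the skeleton."""
    return (
        f"{task['import_statement']}\n\n"
        f'"""{task["class_description"]}"""\n\n'
        f"{task['skeleton']}"
    )

# Usage (requires `pip install datasets` and network access):
# ds = load_dataset("FudanSELab/ClassEval", split="test")
# prompt = build_generation_prompt(ds[0])
```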
## Dataset Creation
### Source Data
Manually-crafted
## Additional Information
### Licensing Information
This repository is under the [MIT](https://github.com/FudanSELab/ClassEval/blob/master/LICENSE) license, but the data is distributed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Citation Information
```
@misc{du2023classeval,
title={ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation},
author={Xueying Du and Mingwei Liu and Kaixin Wang and Hanlin Wang and Junwei Liu and Yixuan Chen and Jiayi Feng and Chaofeng Sha and Xin Peng and Yiling Lou},
year={2023},
eprint={2308.01861},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Xueying Du xueyingdu21@m.fudan.edu.cn
Mingwei Liu liumingwei@fudan.edu.cn
Kaixin Wang kxwang23@m.fudan.edu.cn
Hanlin Wang wanghanlin23@m.fudan.edu.cn
Junwei Liu jwliu22@m.fudan.edu.cn
Yixuan Chen 23212010005@m.fudan.edu.cn
Jiayi Feng 23210240148@m.fudan.edu.cn
Chaofeng Sha cfsha@fudan.edu.cn
Xin Peng pengxin@fudan.edu.cn
Yiling Lou yilinglou@fudan.edu.cn
|
Rowan/hellaswag | 2023-09-28T14:49:00.000Z | [
"language:en",
"arxiv:1905.07830",
"region:us"
] | Rowan | HellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019. | @inproceedings{zellers2019hellaswag,
title={HellaSwag: Can a Machine Really Finish Your Sentence?},
author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin},
booktitle ={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
year={2019}
} | null | 29 | 159 | ---
language:
- en
paperswithcode_id: hellaswag
pretty_name: HellaSwag
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 43232624
num_examples: 39905
- name: test
num_bytes: 10791853
num_examples: 10003
- name: validation
num_bytes: 11175717
num_examples: 10042
download_size: 71494896
dataset_size: 65200194
---
# Dataset Card for "hellaswag"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rowanzellers.com/hellaswag/](https://rowanzellers.com/hellaswag/)
- **Repository:** [https://github.com/rowanz/hellaswag/](https://github.com/rowanz/hellaswag/)
- **Paper:** [HellaSwag: Can a Machine Really Finish Your Sentence?](https://arxiv.org/abs/1905.07830)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 71.49 MB
- **Size of the generated dataset:** 65.32 MB
- **Total amount of disk used:** 136.81 MB
### Dataset Summary
HellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 71.49 MB
- **Size of the generated dataset:** 65.32 MB
- **Total amount of disk used:** 136.81 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"activity_label": "Removing ice from car",
"ctx": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles. then",
"ctx_a": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles.",
"ctx_b": "then",
"endings": "[\", the man adds wax to the windshield and cuts it.\", \", a person board a ski lift, while two men supporting the head of the per...",
"ind": 4,
"label": "3",
"source_id": "activitynet~v_-1IBHYS3L-Y",
"split": "train",
"split_type": "indomain"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `ind`: a `int32` feature.
- `activity_label`: a `string` feature.
- `ctx_a`: a `string` feature.
- `ctx_b`: a `string` feature.
- `ctx`: a `string` feature.
- `endings`: a `list` of `string` features.
- `source_id`: a `string` feature.
- `split`: a `string` feature.
- `split_type`: a `string` feature.
- `label`: a `string` feature.
### Data Splits
| name |train|validation|test |
|-------|----:|---------:|----:|
|default|39905| 10042|10003|
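For evaluation, the gold continuation is the ending indexed by `label`; note from the field list above that `label` is stored as a string. A minimal sketch (on splits where labels are provided; the hidden test labels may be empty):

```python
def gold_completion(example):
    """Join the context with the gold ending; `label` is a string
    index into the four `endings`."""
    return example["ctx"] + " " + example["endings"][int(example["label"])]

# Usage (requires `pip install datasets` and network access):
# from datasets import load_dataset
# ds = load_dataset("Rowan/hellaswag", split="validation")
# print(gold_completion(ds[0]))
```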
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
MIT license: https://github.com/rowanz/hellaswag/blob/master/LICENSE
### Citation Information
```
@inproceedings{zellers2019hellaswag,
title={HellaSwag: Can a Machine Really Finish Your Sentence?},
author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin},
booktitle ={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
year={2019}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
nli_tr | 2023-06-01T14:59:47.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|snli",
"source_datasets:extended|multi_nli",
"language:tr",
"license:cc-by-3.0",
"license:cc-by-4.0",
"license:cc-by-sa-3.0",
"license:mit",
"license:other",
"region:us"
] | null | \
The Natural Language Inference in Turkish (NLI-TR) is a set of two large scale datasets that were obtained by translating the foundational NLI corpora (SNLI and MNLI) using Amazon Translate. | \
@inproceedings{budur-etal-2020-data,
title = "Data and Representation for Turkish Natural Language Inference",
author = "Budur, Emrah and
\"{O}zçelik, Rıza and
G\"{u}ng\"{o}r, Tunga",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
abstract = "Large annotated datasets in NLP are overwhelmingly in English. This is an obstacle to progress in other languages. Unfortunately, obtaining new annotated resources for each task in each language would be prohibitively expensive. At the same time, commercial machine translation systems are now robust. Can we leverage these systems to translate English-language datasets automatically? In this paper, we offer a positive response for natural language inference (NLI) in Turkish. We translated two large English NLI datasets into Turkish and had a team of experts validate their translation quality and fidelity to the original labels. Using these datasets, we address core issues of representation for Turkish NLI. We find that in-language embeddings are essential and that morphological parsing can be avoided where the training set is large. Finally, we show that models trained on our machine-translated datasets are successful on human-translated evaluation sets. We share all code, models, and data publicly.",
} | null | 5 | 159 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- tr
license:
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- mit
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|snli
- extended|multi_nli
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
paperswithcode_id: nli-tr
pretty_name: Natural Language Inference in Turkish
license_details: Open Portion of the American National Corpus
dataset_info:
- config_name: snli_tr
features:
- name: idx
dtype: int32
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 71175743
num_examples: 550152
- name: validation
num_bytes: 1359639
num_examples: 10000
- name: test
num_bytes: 1355409
num_examples: 10000
download_size: 40328942
dataset_size: 73890791
- config_name: multinli_tr
features:
- name: idx
dtype: int32
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 75524150
num_examples: 392702
- name: validation_matched
num_bytes: 1908283
num_examples: 10000
- name: validation_mismatched
num_bytes: 2039392
num_examples: 10000
download_size: 75518512
dataset_size: 79471825
config_names:
- multinli_tr
- snli_tr
---
# Dataset Card for "nli_tr"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/boun-tabi/NLI-TR](https://github.com/boun-tabi/NLI-TR)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 115.85 MB
- **Size of the generated dataset:** 153.36 MB
- **Total amount of disk used:** 269.21 MB
### Dataset Summary
The Natural Language Inference in Turkish (NLI-TR) is a set of two large scale datasets that were obtained by translating the foundational NLI corpora (SNLI and MNLI) using Amazon Translate.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### multinli_tr
- **Size of downloaded dataset files:** 75.52 MB
- **Size of the generated dataset:** 79.47 MB
- **Total amount of disk used:** 154.99 MB
An example of 'validation_matched' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "Mrinal Sen'in çalışmalarının çoğu Avrupa koleksiyonlarında bulunabilir.",
"idx": 7,
"label": 1,
"premise": "\"Kalküta, sanatsal yaratıcılığa dair herhangi bir iddiaya sahip olan tek diğer üretim merkezi gibi görünüyor, ama ironik bir şek..."
}
```
#### snli_tr
- **Size of downloaded dataset files:** 40.33 MB
- **Size of the generated dataset:** 73.89 MB
- **Total amount of disk used:** 114.22 MB
An example of 'train' looks as follows.
```
{
"hypothesis": "Yaşlı bir adam, kızının işten çıkmasını bekçiyken suyunu içer.",
"idx": 9,
"label": 1,
"premise": "Parlak renkli gömlek çalışanları arka planda gülümseme iken yaşlı bir adam bir kahve dükkanında küçük bir masada onun portakal suyu ile oturur."
}
```
### Data Fields
The data fields are the same among all splits.
#### multinli_tr
- `idx`: a `int32` feature.
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### snli_tr
- `idx`: a `int32` feature.
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
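Since `label` is stored as an integer class index, mapping to and from the class names can be done with a small local helper. This is a sketch that mirrors the index order listed above; when the dataset is loaded with 🤗 Datasets, the `ClassLabel` feature's `int2str`/`str2int` methods provide the same mapping.

```python
# Label names in the index order given above (shared by both configurations).
NLI_TR_LABELS = ["entailment", "neutral", "contradiction"]

def int2str(label_id):
    """Map an integer label to its class name, e.g. 1 -> 'neutral'."""
    return NLI_TR_LABELS[label_id]

def str2int(label_name):
    """Map a class name back to its integer label."""
    return NLI_TR_LABELS.index(label_name)

print(int2str(1))  # -> neutral
```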
### Data Splits
#### multinli_tr
| |train |validation_matched|validation_mismatched|
|-----------|-----:|-----------------:|--------------------:|
|multinli_tr|392702| 10000| 10000|
#### snli_tr
| |train |validation|test |
|-------|-----:|---------:|----:|
|snli_tr|550152| 10000|10000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{budur-etal-2020-data,
title = "Data and Representation for Turkish Natural Language Inference",
author = "Budur, Emrah and
      {\"O}zçelik, Rıza and
      G{\"u}ng{\"o}r, Tunga",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
abstract = "Large annotated datasets in NLP are overwhelmingly in English. This is an obstacle to progress in other languages. Unfortunately, obtaining new annotated resources for each task in each language would be prohibitively expensive. At the same time, commercial machine translation systems are now robust. Can we leverage these systems to translate English-language datasets automatically? In this paper, we offer a positive response for natural language inference (NLI) in Turkish. We translated two large English NLI datasets into Turkish and had a team of experts validate their translation quality and fidelity to the original labels. Using these datasets, we address core issues of representation for Turkish NLI. We find that in-language embeddings are essential and that morphological parsing can be avoided where the training set is large. Finally, we show that models trained on our machine-translated datasets are successful on human-translated evaluation sets. We share all code, models, and data publicly.",
}
```
### Contributions
Thanks to [@e-budur](https://github.com/e-budur) for adding this dataset. |
distil-whisper/gigaspeech-l | 2023-09-25T10:28:52.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:other",
"region:us"
] | distil-whisper | GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
and unsupervised training. Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
for speech recognition training, and to filter out segments with low-quality transcription. For system training,
GigaSpeech provides five subsets of different sizes, 10h, 250h, 1000h, 2500h, and 10000h.
For our 10,000-hour XL training subset, we cap the word error rate at 4% during the filtering/validation stage,
and for all our other smaller training subsets, we cap it at 0%. The DEV and TEST evaluation sets, on the other hand,
are re-processed by professional human transcribers to ensure high transcription quality. | @article{DBLP:journals/corr/abs-2106-06909,
author = {Guoguo Chen and
Shuzhou Chai and
Guanbo Wang and
Jiayu Du and
Wei{-}Qiang Zhang and
Chao Weng and
Dan Su and
Daniel Povey and
Jan Trmal and
Junbo Zhang and
Mingjie Jin and
Sanjeev Khudanpur and
Shinji Watanabe and
Shuaijiang Zhao and
Wei Zou and
Xiangang Li and
Xuchen Yao and
Yongqing Wang and
Yujun Wang and
Zhao You and
Zhiyong Yan},
title = {GigaSpeech: An Evolving, Multi-domain {ASR} Corpus with 10, 000 Hours
of Transcribed Audio},
journal = {CoRR},
volume = {abs/2106.06909},
year = {2021},
url = {https://arxiv.org/abs/2106.06909},
eprinttype = {arXiv},
eprint = {2106.06909},
timestamp = {Wed, 29 Dec 2021 14:29:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-06909.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 0 | 159 | ---
license: other
task_categories:
- automatic-speech-recognition
language:
- en
extra_gated_prompt: |-
SpeechColab does not own the copyright of the audio files. For researchers and educators who wish to use the audio files for non-commercial research and/or educational purposes, we can provide access through the Hub under certain conditions and terms.
Terms of Access:
The "Researcher" has requested permission to use the GigaSpeech database (the "Database") at Tsinghua University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
2. The SpeechColab team and Tsinghua University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the SpeechColab team and Tsinghua University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted audio files that he or she may create from the Database.
4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
5. The SpeechColab team and Tsinghua University reserve the right to terminate Researcher's access to the Database at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
Please also fill out the Google Form https://forms.gle/UuGQAPyscGRrUMLq6 to request access to the GigaSpeech dataset.
extra_gated_fields:
Name: text
Email: text
Organization: text
Address: text
I hereby confirm that I have requested access via the Google Form provided above: checkbox
I accept the terms of access: checkbox
---
# Distil Whisper: GigaSpeech
This is a variant of the [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
|
fedyanin/feud | 2023-07-25T12:01:51.000Z | [
"license:cc",
"region:us"
] | fedyanin | null | null | null | 0 | 159 | ---
license: cc
---
# Feud dataset
Dataset of questions and answers in the style of the Family Feud TV show. There are multiple possible answers for each question. The dataset is aimed at benchmarking the balance between diversity and correctness of a language model.
ds4sd/FinTabNet_OTSL | 2023-08-31T16:01:59.000Z | [
"task_categories:object-detection",
"task_categories:table-to-text",
"size_categories:10K<n<100K",
"license:other",
"table-structure-recognition",
"table-understanding",
"PDF",
"arxiv:2305.03393",
"region:us"
] | ds4sd | null | null | null | 1 | 159 | ---
license: other
pretty_name: FinTabNet-OTSL
size_categories:
- 10K<n<100K
tags:
- table-structure-recognition
- table-understanding
- PDF
task_categories:
- object-detection
- table-to-text
---
# Dataset Card for FinTabNet_OTSL
## Dataset Description
- **Homepage:** https://ds4sd.github.io
- **Paper:** https://arxiv.org/pdf/2305.03393
### Dataset Summary
This dataset is a conversion of the original [FinTabNet](https://developer.ibm.com/exchanges/data/all/fintabnet/) into the OTSL format presented in our paper "Optimized Table Tokenization for Table Structure Recognition". The dataset includes the original annotations alongside new additions.
### Dataset Structure
* cells: original dataset cell ground truth (content).
* otsl: new reduced table-structure token format.
* html: original dataset ground truth HTML (structure).
* html_restored: HTML generated from OTSL.
* cols: number of grid columns.
* rows: number of grid rows.
* image: PIL image.
### OTSL Vocabulary:
**OTSL**: new reduced table structure token format
More information on the OTSL table-structure format and its concepts can be found in our paper.
The format of this dataset extends the work presented in the paper and introduces slight modifications:
* "fcel" - cell that has content in it
* "ecel" - cell that is empty
* "lcel" - left-looking cell (to handle horizontally merged cells)
* "ucel" - up-looking cell (to handle vertically merged cells)
* "xcel" - 2d span cells, in this dataset - covers entire area of a merged cell
* "nl" - new line token
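As an illustration of the vocabulary above, the following sketch (not part of the dataset's tooling) splits an OTSL token sequence on the `"nl"` token and checks that the rows form a rectangular grid:

```python
def otsl_to_grid(tokens):
    """Split an OTSL token sequence into rows using the "nl" (new line) token.

    Returns a list of rows; raises if the rows do not form a rectangular grid.
    """
    grid, row = [], []
    for tok in tokens:
        if tok == "nl":
            grid.append(row)
            row = []
        else:
            row.append(tok)
    if row:  # tokens after the final "nl", if any
        grid.append(row)
    widths = {len(r) for r in grid}
    if len(widths) != 1:
        raise ValueError(f"non-rectangular grid: row widths {sorted(widths)}")
    return grid

# A 2x3 table whose last two cells of the first row are horizontally merged.
tokens = ["fcel", "fcel", "lcel", "nl", "fcel", "ecel", "fcel", "nl"]
grid = otsl_to_grid(tokens)
print(len(grid), len(grid[0]))  # -> 2 3
```

Note that `lcel`/`ucel`/`xcel` remain plain grid cells here; a full reconstruction would additionally merge them into row, column, or 2D spans.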
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Additional Information
### Dataset Curators
The dataset is converted by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Maksym Lysak, [@maxmnemonic](https://github.com/maxmnemonic)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Nikos Livathinos, [@nikos-livathinos](https://github.com/nikos-livathinos)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Citation Information
```bib
@misc{lysak2023optimized,
title={Optimized Table Tokenization for Table Structure Recognition},
author={Maksym Lysak and Ahmed Nassar and Nikolaos Livathinos and Christoph Auer and Peter Staar},
year={2023},
eprint={2305.03393},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
eduagarcia/generic_conll | 2023-08-29T02:59:05.000Z | [
"region:us"
] | eduagarcia | null | null | null | 0 | 159 | Entry not found |
hrithikpiyush/acl-arc | 2022-04-26T11:40:41.000Z | [
"license:apache-2.0",
"region:us"
] | hrithikpiyush | null | null | null | 0 | 158 | ---
license: apache-2.0
---
|
jonathanli/law-stack-exchange | 2023-02-23T16:37:19.000Z | [
"task_categories:text-classification",
"language:en",
"stackexchange",
"law",
"region:us"
] | jonathanli | null | null | null | 5 | 158 | ---
task_categories:
- text-classification
language:
- en
tags:
- stackexchange
- law
pretty_name: Law Stack Exchange
---
# Dataset Card for Law Stack Exchange Dataset
## Dataset Description
- **Paper: [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10/)**
- **Point of Contact: jxl@queensu.ca**
### Dataset Summary
Dataset from the Law Stack Exchange, as used in "Parameter-Efficient Legal Domain Adaptation".
### Citation Information
```
@inproceedings{li-etal-2022-parameter,
title = "Parameter-Efficient Legal Domain Adaptation",
author = "Li, Jonathan and
Bhambhoria, Rohan and
Zhu, Xiaodan",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.nllp-1.10",
pages = "119--129",
}
``` |
jbpark0614/speechocean762 | 2022-10-24T09:43:54.000Z | [
"region:us"
] | jbpark0614 | null | null | null | 3 | 158 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: speaker_id_str
dtype: int64
- name: speaker_id
dtype: int64
- name: question_id
dtype: int64
- name: total_score
dtype: int64
- name: accuracy
dtype: int64
- name: completeness
dtype: float64
- name: fluency
dtype: int64
- name: prosodic
dtype: int64
- name: text
dtype: string
- name: audio
dtype: audio
- name: path
dtype: string
splits:
- name: test
num_bytes: 288402967.0
num_examples: 2500
- name: train
num_bytes: 290407029.0
num_examples: 2500
download_size: 0
dataset_size: 578809996.0
---
# Dataset Card for "speechocean762"
The dataset introduced in:
- Zhang, Junbo, et al. "speechocean762: An open-source non-native English speech corpus for pronunciation assessment." arXiv preprint arXiv:2104.01378 (2021).
- Currently, phoneme-level evaluation is omitted; only the sentence-level scores are used.
- The original full data link: https://github.com/jimbozhang/speechocean762
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
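As a sketch of how the sentence-level score fields might be used (over hypothetical records, not the dataset's own tooling):

```python
# Hypothetical records mirroring the sentence-level fields listed above;
# the texts and score values are made up for illustration.
samples = [
    {"text": "WE CALL IT BEAR", "total_score": 8, "accuracy": 8, "fluency": 9, "prosodic": 8},
    {"text": "MARK IS GOING TO SEE ELEPHANT", "total_score": 4, "accuracy": 4, "fluency": 3, "prosodic": 4},
]

def mean_score(records, field="total_score"):
    """Average a sentence-level score field over a list of records."""
    return sum(r[field] for r in records) / len(records)

# Select utterances below a fluency threshold for closer inspection.
low_fluency = [r["text"] for r in samples if r["fluency"] < 5]
print(mean_score(samples), low_fluency)  # -> 6.0 ['MARK IS GOING TO SEE ELEPHANT']
```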
jonathanli/hyperpartisan-longformer-split | 2022-12-31T16:08:16.000Z | [
"arxiv:2004.05150",
"region:us"
] | jonathanli | null | null | null | 0 | 158 | # Hyperpartisan news detection
This dataset contains the hyperpartisan news dataset, processed and split exactly as it was for the [Longformer](https://arxiv.org/abs/2004.05150) experiments.
The processing code can be found [here](https://github.com/allenai/longformer/blob/master/scripts/hp_preprocess.py).
|
Deysi/spam-detection-dataset | 2023-04-15T17:42:24.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | Deysi | null | null | null | 5 | 158 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 3161821
num_examples: 8175
- name: test
num_bytes: 1094757
num_examples: 2725
download_size: 2578551
dataset_size: 4256578
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: spam
size_categories:
- 10K<n<100K
---
# Dataset Card for "spam-detection-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
semaj83/ioqm | 2023-10-08T01:13:18.000Z | [
"license:mit",
"region:us"
] | semaj83 | null | null | null | 0 | 158 | ---
license: mit
viewer: false
---
This is a dataset of image-generation prompts containing objects and quantifiers, such as:
`2 cell phones and 1 oven and 2 remotes`
The objects are a subset of 10 random objects taken from the COCO dataset's 80-1 (79) classes: https://docs.ultralytics.com/datasets/detect/coco/#dataset-yaml
`mini_prompts.txt` contains the prompts: ~16k strings with 1-3 objects per image and 1-5 instances of each object.
`mini_prompts_v2.txt` contains another subset of easier prompts, excluding objects used in `mini_prompts.txt`: ~4k strings with 1-2 objects per image and 1-3 instances of each object.
`coco_classes.txt` is the list of COCO objects sampled for the prompts.
`create_prompts.py` is the Python script used to generate the prompts; it can be rerun for a larger dataset or a different subset of classes if desired.
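A minimal sketch of how such prompts can be generated follows; the actual `create_prompts.py` may differ, and the naive pluralisation here is an assumption made for illustration.

```python
import random

def make_prompt(classes, max_objects=3, max_count=5, rng=None):
    """Build one prompt like "2 cell phones and 1 oven and 2 remotes"."""
    rng = rng or random.Random()
    n_objects = rng.randint(1, max_objects)
    picked = rng.sample(classes, n_objects)
    parts = []
    for name in picked:
        count = rng.randint(1, max_count)
        # naive pluralisation, for illustration only
        label = name if count == 1 else name + "s"
        parts.append(f"{count} {label}")
    return " and ".join(parts)

classes = ["cell phone", "oven", "remote", "bench", "umbrella"]
print(make_prompt(classes, rng=random.Random(0)))
```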
|
result-kand2-sdxl-wuerst-karlo/dbd855c1 | 2023-10-04T01:53:22.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 158 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 233
num_examples: 10
download_size: 1405
dataset_size: 233
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dbd855c1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
leslyarun/c4_200m_gec_train100k_test25k | 2022-10-26T07:59:31.000Z | [
"task_categories:text-generation",
"source_datasets:allenai/c4",
"language:en",
"grammatical-error-correction",
"region:us"
] | leslyarun | null | null | null | 2 | 157 | ---
language:
- en
source_datasets:
- allenai/c4
task_categories:
- text-generation
pretty_name: C4 200M Grammatical Error Correction Dataset
tags:
- grammatical-error-correction
---
# C4 200M
# Dataset Summary
C4 200M sample dataset, adapted from https://huggingface.co/datasets/liweili/c4_200m
C4_200M is a collection of 185 million sentence pairs generated from the cleaned English C4 dataset. It can be used for grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
# Description
As discussed above, the full C4_200M collection contains 185 million sentence pairs. Each example has two attributes: `input` and `output`. Here is a sample:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
``` |
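Given such pairs, a word-level Levenshtein distance (a sketch, not part of the dataset's tooling) can quantify how heavily each sentence was corrupted:

```python
def word_edit_distance(a, b):
    """Word-level Levenshtein distance between two sentences."""
    src, tgt = a.split(), b.split()
    prev = list(range(len(tgt) + 1))
    for i, sw in enumerate(src, 1):
        cur = [i]
        for j, tw in enumerate(tgt, 1):
            cost = 0 if sw == tw else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution
        prev = cur
    return prev[-1]

corrupted = "Bitcoin is for $7,094 this morning, which CoinDesk says."
corrected = "Bitcoin goes for $7,094 this morning, according to CoinDesk."
print(word_edit_distance(corrupted, corrected))  # -> 4
```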
gokuls/wiki_book_corpus_complete_processed_bert_dataset | 2023-02-25T19:22:14.000Z | [
"region:us"
] | gokuls | null | null | null | 0 | 157 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 22201610400.0
num_examples: 6167114
download_size: 2763194793
dataset_size: 22201610400.0
---
# Dataset Card for "wiki_book_corpus_complete_processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mhhmm/leetcode-solutions-python | 2023-04-27T06:40:41.000Z | [
"license:lgpl",
"region:us"
] | mhhmm | null | null | null | 14 | 157 | ---
license: lgpl
---
All credit belongs to https://www.kaggle.com/datasets/erichartford/leetcode-solutions
I collected only the Python solutions:
```
id: <number>
code_with_data:
<
# Slug
# Title
# Difficulty
# Content
Code Answer in Python
# Explanation
>
code_only: < Code Answer in Python >
code_with_problem: <
# Content
Code
>
explanation_only: < Explanation >
```
I'm using this for code generation and code summarization, so the data follows the format shown above.
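Since `code_with_data` concatenates sections under markdown-style `#` headers, a hypothetical helper to split such a string back into fields might look like this (the header names are those listed above):

```python
def split_sections(code_with_data):
    """Split a `code_with_data` string into {header: body} on '# ' header lines.

    Text before the first header (e.g. a leading code answer) is stored under "".
    """
    sections, current, buf = {}, "", []
    for line in code_with_data.splitlines():
        if line.startswith("# "):
            sections[current] = "\n".join(buf).strip()
            current, buf = line[2:].strip(), []
        else:
            buf.append(line)
    sections[current] = "\n".join(buf).strip()
    return sections

sample = "# Slug\ntwo-sum\n# Difficulty\nEasy"
print(split_sections(sample)["Difficulty"])  # -> Easy
```

Note that real entries contain Python comments that also begin with `# `, so a robust parser would need more care than this sketch.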
|
Fsoft-AIC/the-vault-function | 2023-07-04T02:33:36.000Z | [
"task_categories:text-generation",
"multilinguality:multiprogramming languages",
"language:code",
"language:en",
"license:mit",
"arxiv:2305.06156",
"region:us"
] | Fsoft-AIC | The Vault is a multilingual code-text dataset with over 40 million pairs covering 10 popular programming languages.
It is the largest corpus containing parallel code-text data. By building upon The Stack, a massive raw code sample collection,
the Vault offers a comprehensive and clean resource for advancing research in code understanding and generation. It provides a
high-quality dataset that includes code-text pairs at multiple levels, such as class and inline-level, in addition to the function level.
The Vault can serve many purposes at multiple levels. | @article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
} | null | 8 | 157 | ---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
features:
- name: identifier
dtype: string
- name: return_type
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
dtype: string
- name: original_docstring
dtype: string
- name: comment
dtype: string
- name: docstring_tokens
dtype: string
- name: docstring
dtype: string
- name: original_string
dtype: string
pretty_name: The Vault Function
viewer: true
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/
<p align="center">
<img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>
<div align="center">
# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
</div>
## Dataset Summary
The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset.
We provide The Vault, which contains code snippets from 10 popular programming languages: Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. The dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or for downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault, such as *code summarization*, *text-to-code generation* and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
## Dataset Structure
### Data Instances
```
{
"hexsha": "5c47f0b4c173a8fd03e4e633d9b3dd8211e67ad0",
"repo": "neumanna94/beepboop",
"path": "js/scripts.js",
"license": [
"MIT"
],
"language": "JavaScript",
"identifier": "beepBoopSelector",
"return_type": "<not_specific>",
"original_string": "function beepBoopSelector(inputString, bbFunction){\n if(bbFunction==1){\n return beepBoop(inputString);\n } else if(bbFunction==2){\n return beepBoop2(inputString);\n } else if(bbFunction==3){\n return beepBoop3(inputString);\n } else {\n }\n}",
"original_docstring": "//Determines what beepBoop function to use",
"docstring": "Determines what beepBoop function to use",
"docstring_tokens": [
"Determines",
"what",
"beepBoop",
"function",
"to",
"use"
],
"code": "function beepBoopSelector(inputString, bbFunction){\n if(bbFunction==1){\n return beepBoop(inputString);\n } else if(bbFunction==2){\n return beepBoop2(inputString);\n } else if(bbFunction==3){\n return beepBoop3(inputString);\n } else {\n }\n}",
"code_tokens": [
"function",
"beepBoopSelector",
"(",
"inputString",
",",
"bbFunction",
")",
"{",
"if",
"(",
"bbFunction",
"==",
"1",
")",
"{",
"return",
"beepBoop",
"(",
"inputString",
")",
";",
"}",
"else",
"if",
"(",
"bbFunction",
"==",
"2",
")",
"{",
"return",
"beepBoop2",
"(",
"inputString",
")",
";",
"}",
"else",
"if",
"(",
"bbFunction",
"==",
"3",
")",
"{",
"return",
"beepBoop3",
"(",
"inputString",
")",
";",
"}",
"else",
"{",
"}",
"}"
],
"short_docstring": "Determines what beepBoop function to use",
"short_docstring_tokens": [
"Determines",
"what",
"beepBoop",
"function",
"to",
"use"
],
"comment": [],
"parameters": [
{
"param": "inputString",
"type": null
},
{
"param": "bbFunction",
"type": null
}
],
"docstring_params": {
"returns": [],
"raises": [],
"params": [
{
"identifier": "inputString",
"type": null,
"docstring": null,
"docstring_tokens": [],
"default": null,
"is_optional": null
},
{
"identifier": "bbFunction",
"type": null,
"docstring": null,
"docstring_tokens": [],
"default": null,
"is_optional": null
}
],
"outlier_params": [],
"others": []
}
}
```
### Data Fields
Data fields for function level:
- **hexsha** (string): the unique git hash of file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses in the repo
- **language** (string): the programming language
- **identifier** (string): the function or method name
- **return_type** (string): the type returned by the function
- **original_string** (string): original version of function/class node
- **original_docstring** (string): the raw string before tokenization or parsing
- **code** (string): the part of the original that is code
- **code_tokens** (list): tokenized version of `code`
- **short_docstring** (string): short, brief summarization (first line of the docstring)
- **short_docstring_tokens** (list): tokenized version of `short_docstring`
- **docstring** (string): the top-level comment or docstring (the docstring without parameter docs, return, exception fields, etc.)
- **docstring_tokens** (list): tokenized version of docstring
- **comment** (list): list of comments (line) inside the function/class
- **parameters** (list): List of parameters and its type (type can be None)
- **docstring_params** (dict): Dictionary of the parsed information from docstring
See [here](https://github.com/FSoft-AI4Code/TheVault/blob/main/data/README.md) for more details and examples.
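For instance, a truncated copy of the JavaScript instance shown above can be inspected with plain Python to find declared parameters that lack a docstring description:

```python
# Truncated copy of the sample instance from the "Data Instances" section.
sample = {
    "identifier": "beepBoopSelector",
    "parameters": [
        {"param": "inputString", "type": None},
        {"param": "bbFunction", "type": None},
    ],
    "docstring_params": {
        "returns": [],
        "raises": [],
        "params": [
            {"identifier": "inputString", "type": None, "docstring": None},
            {"identifier": "bbFunction", "type": None, "docstring": None},
        ],
        "outlier_params": [],
        "others": [],
    },
}

# Which declared parameters are missing a docstring description?
undocumented = [
    p["identifier"]
    for p in sample["docstring_params"]["params"]
    if not p["docstring"]
]
print(undocumented)  # -> ['inputString', 'bbFunction']
```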
### Data Splits
In this repo, The Vault is divided into 5 subsets: three training versions split by size relative to the full training set, plus a validation set and a test set (approximately 20,000 samples each). The statistics for the languages in each split are given in the following section.
Before splitting, the dataset is deduplicated. The three training-set versions are small (5%), medium (20%), and large (100%).
## Dataset Statistics
- Comparison with other benchmarks
| Dataset | #Language | #Code-text pair |
|:--------------------------|----------:|-----------------:|
| PyMT5 | 1 | ≈ 7,700,000 |
| CoDesc | 1 | 4,211,516 |
| CodeSearchNet | 6 | 2,326,976 |
| CodeSearchNet (CodeXGLUE) | 6 | 1,005,474 |
| Deepcom | 1 | 424,028 |
| CONCODE | 1 | 2,184,310 |
| Funcom | 1 | 2,149,121 |
| CodeT5 | 8 | 3,158,313 |
| **The Vault** | **10** | **34,098,775** |
- Statistics for the split sets
| | train/small | train/medium | train/full | validation | test | total |
|:-----------|------------:|-------------:|-----------:|-----------:|-------:|--------------:|
|Python | 370,657 | 1,952,110 | 7,772,647 | 30,992 | 21,652 | 7,825,291 |
|Java | 351,213 | 1,612,366 | 6,629,193 | 22,677 | 15,552 | 6,667,422 |
|JavaScript | 82,931 | 404,729 | 1,640,416 | 22,044 | 21,108 | 1,683,568 |
|PHP | 236,638 | 1,155,476 | 4,656,371 | 21,375 | 19,010 | 4,696,756 |
|C | 105,978 | 381,207 | 1,639,319 | 27,525 | 19,122 | 1,685,966 |
|C# | 141,090 | 783,166 | 3,305,891 | 24,787 | 19,638 | 3,350,316 |
|C++ | 87,420 | 410,907 | 1,671,268 | 20,011 | 18,169 | 1,709,448 |
|Go | 267,535 | 1,319,547 | 5,109,020 | 19,102 | 25,314 | 5,153,436 |
|Ruby | 23,921 | 112,574 | 424,339 | 17,338 | 19,908 | 461,585 |
|Rust | 35,367 | 224,015 | 825,130 | 16,716 | 23,141 | 864,987 |
|TOTAL | 1,702,750 | 8,356,097 |33,673,594 |222,567 |202,614 |**34,098,775** |
## Usage
You can load The Vault dataset using the `datasets` library (`pip install datasets`):
```python
from datasets import load_dataset
# Load full function level dataset (34M samples)
dataset = load_dataset("Fsoft-AIC/the-vault-function")
# Load function level train/validation/test set
dataset = load_dataset("Fsoft-AIC/the-vault-function", split_set=["train"])
# Load "small" (or "medium", "full") version of function level training set
dataset = load_dataset("Fsoft-AIC/the-vault-function", split_set=["train/small"])
# specific language (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-function", split_set=["train"], languages=['Python'])
# dataset streaming
data = load_dataset("Fsoft-AIC/the-vault-function", split_set=["train"], streaming=True)
for sample in iter(data['train']):
print(sample)
```
A backup of the dataset can be downloaded from Azure blob storage. See [Download The Vault from Azure blob storage](https://github.com/FSoft-AI4Code/TheVault#download-via-link).
## Additional information
### Licensing Information
MIT License
### Citation Information
```
@article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
}
```
### Contributions
This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code). |
open-llm-leaderboard/details | 2023-08-25T09:32:19.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 157 | Entry not found |
tyzhu/squad_id_train_10_eval_10 | 2023-09-19T02:18:57.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 157 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 237881
num_examples: 150
- name: validation
num_bytes: 59860
num_examples: 48
download_size: 72567
dataset_size: 297741
---
# Dataset Card for "squad_id_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tomekkorbak/detoxify-pile-chunk3-100000-150000 | 2022-10-06T02:58:25.000Z | [
"region:us"
] | tomekkorbak | null | null | null | 0 | 156 | Entry not found |
HuggingFaceH4/self-instruct-seed | 2023-01-31T22:37:02.000Z | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"arxiv:2212.10560",
"region:us"
] | HuggingFaceH4 | null | null | null | 14 | 156 | ---
license: apache-2.0
task_categories:
- conversational
language:
- en
size_categories:
- n<1K
---
Manually created seed dataset used in bootstrapping in the Self-instruct paper https://arxiv.org/abs/2212.10560. This is part of the instruction fine-tuning datasets. |
NegarMov/DHI_test | 2023-09-21T08:05:32.000Z | [
"region:us"
] | NegarMov | null | null | null | 0 | 156 | Entry not found |
crd3 | 2022-11-18T19:47:20.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | null | Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.
Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.
The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding
abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player
collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,
and semantic ties to the previous dialogues. | @inproceedings{
title = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},
author = {Rameshkumar, Revanth and Bailey, Peter},
year = {2020},
publisher = {Association for Computational Linguistics},
conference = {ACL}
} | null | 12 | 155 | ---
pretty_name: CRD3 (Critical Role Dungeons and Dragons Dataset)
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- summarization
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
size_categories:
- 10K<n<100K
paperswithcode_id: crd3
dataset_info:
features:
- name: chunk
dtype: string
- name: chunk_id
dtype: int32
- name: turn_start
dtype: int32
- name: turn_end
dtype: int32
- name: alignment_score
dtype: float32
- name: turns
list:
- name: names
sequence: string
- name: utterances
sequence: string
- name: number
dtype: int32
splits:
- name: train
num_bytes: 236605152
num_examples: 38969
- name: test
num_bytes: 40269203
num_examples: 7500
- name: validation
num_bytes: 41543528
num_examples: 6327
download_size: 117519820
dataset_size: 318417883
---
# Dataset Card for "crd3"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CRD3 homepage](https://github.com/RevanthRameshkumar/CRD3)
- **Repository:** [CRD3 repository](https://github.com/RevanthRameshkumar/CRD3)
- **Paper:** [Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset](https://www.aclweb.org/anthology/2020.acl-main.459/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.
Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.
The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding
abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player
collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,
and semantic ties to the previous dialogues.
### Supported Tasks and Leaderboards
`summarization`: The dataset can be used to train a model for abstractive summarization. A [fast abstractive summarization-RL](https://github.com/ChenRocks/fast_abs_rl) model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.
### Languages
The text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"alignment_score": 3.679936647415161,
"chunk": "Wish them a Happy Birthday on their Facebook and Twitter pages! Also, as a reminder: D&D Beyond streams their weekly show (\"And Beyond\") every Wednesday on twitch.tv/dndbeyond.",
"chunk_id": 1,
"turn_end": 6,
"turn_num": 4,
"turn_start": 4,
"turns": {
"names": ["SAM"],
"utterances": ["Yesterday, guys, was D&D Beyond's first one--", "first one-year anniversary. Take two. Hey guys,", "yesterday was D&D Beyond's one-year anniversary.", "Wish them a happy birthday on their Facebook and", "Twitter pages."]
}
}
```
### Data Fields
The data fields are the same among all splits.
- `chunk`: a `string` feature.
- `chunk_id`: a `int32` feature.
- `turn_start`: a `int32` feature.
- `turn_end`: a `int32` feature.
- `alignment_score`: a `float32` feature.
- `turn_num`: a `int32` feature.
- `turns`: a dictionary feature containing:
- `names`: a `string` feature.
- `utterances`: a `string` feature.
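For preprocessing, the `turns` structure shown in the example instance can be flattened into plain dialogue lines. A minimal sketch (the helper is our own illustration, not part of the dataset loader):

```python
def flatten_turn(turn):
    """Join a turn's speaker names and utterance fragments into one dialogue line."""
    speaker = " and ".join(turn["names"])
    text = " ".join(turn["utterances"])
    return f"{speaker}: {text}"

# Applied to the example instance above, this yields a line starting with
# "SAM: Yesterday, guys, was D&D Beyond's first one-- ..."
```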
### Data Splits
| name | train |validation| test |
|-------|------:|---------:|------:|
|default|38,969| 6,327|7,500|
## Dataset Creation
### Curation Rationale
Dialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.
### Source Data
#### Initial Data Collection and Normalization
Dungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. spelling of speaker names) were manually resolved.
The abstractive summaries were collected from the [Critical Role Fandom wiki](https://criticalrole.fandom.com/)
#### Who are the source language producers?
The language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
CRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.
### Licensing Information
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/), corresponding to the license of the Critical Role Wiki (https://criticalrole.fandom.com/).
### Citation Information
```bibtex
@inproceedings{
title = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},
author = {Rameshkumar, Revanth and Bailey, Peter},
year = {2020},
publisher = {Association for Computational Linguistics},
conference = {ACL}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset. |
fmplaza/EmoEvent | 2023-03-27T08:19:58.000Z | [
"language:en",
"language:es",
"license:apache-2.0",
"region:us"
] | fmplaza | EmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.
Three annotators labeled the tweets following the six Ekman’s basic emotion model (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. | @inproceedings{plaza-del-arco-etal-2020-emoevent,
title = "{{E}mo{E}vent: A Multilingual Emotion Corpus based on different Events}",
author = "{Plaza-del-Arco}, {Flor Miriam} and Strapparava, Carlo and {Ure{\~n}a-L{\'o}pez}, L. Alfonso and {Mart{\'i}n-Valdivia}, M. Teresa",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.186",
pages = "1492--1498",
language = "English",
ISBN = "979-10-95546-34-4" } | null | 6 | 155 | ---
license: apache-2.0
language:
- en
- es
---
# Dataset Card for Emoevent
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [EmoEvent dataset repository](https://github.com/fmplaza/EmoEvent)
- **Paper: EmoEvent:** [A Multilingual Emotion Corpus based on different Events](https://aclanthology.org/2020.lrec-1.186.pdf)
- **Leaderboard:** [Leaderboard for EmoEvent / Spanish version](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385)
- **Point of Contact: fmplaza@ujaen.es**
### Dataset Summary
EmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.
Three annotators labeled the tweets following Ekman's six basic emotions (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. Moreover, the tweets are annotated as offensive (OFF) or non-offensive (NO).
### Supported Tasks and Leaderboards
This dataset is intended for multi-class emotion classification and binary offensive classification.
Competition [EmoEvalEs task on emotion detection for Spanish at IberLEF 2021](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385)
### Languages
- Spanish
- English
## Dataset Structure
### Data Instances
For each instance, there is a string for the id of the tweet, a string for the emotion class, a string for the offensive class, and a string for the event. Two examples are shown below.
```
{'id': 'a0c1a858-a9b8-4cb1-8a81-1602736ff5b8',
'event': 'GameOfThrones',
'tweet': 'ARYA DE MI VIDA. ERES MAS ÉPICA QUE EL GOL DE INIESTA JODER #JuegodeTronos #VivePoniente',
'offensive': 'NO',
'emotion': 'joy',
}
```
```
{'id': '3YCT0L9OMMFP7KWKQSTJRJO0YHUSN2a0c1a858-a9b8-4cb1-8a81-1602736ff5b8',
'event': 'GameOfThrones',
'tweet': 'The #NotreDameCathedralFire is indeed sad and people call all offered donations humane acts, but please if you have money to donate, donate to humans and help bring food to their tables and affordable education first. What more humane than that? #HumanityFirst',
'offensive': 'NO',
'emotion': 'sadness',
}
```
### Data Fields
- `id`: a string to identify the tweet
- `event`: a string containing the event associated with the tweet
- `tweet`: a string containing the text of the tweet
- `offensive`: a string containing the offensive gold label
- `emotion`: a string containing the emotion gold label
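For the emotion and offensiveness classification tasks, the gold-label distribution can be tallied directly from these fields. A minimal sketch over a list of instances like the two shown above (the helper is our own illustration, not part of the dataset):

```python
from collections import Counter

def label_distribution(examples, field="emotion"):
    """Tally gold labels over the chosen field ('emotion' or 'offensive')."""
    return Counter(example[field] for example in examples)
```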
### Data Splits
The EmoEvent dataset has 2 subsets: EmoEvent_es (Spanish version) and EmoEvent_en (English version)
Each subset contains 3 splits: _train_, _validation_, and _test_. Below are the statistics for each subset.
| EmoEvent_es | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 5,723 |
| Validation | 844 |
| Test | 1,656 |
| EmoEvent_en | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 5,112 |
| Validation | 744 |
| Test | 1,447 |
## Dataset Creation
### Source Data
Twitter
#### Who are the annotators?
Amazon Mechanical Turkers
## Additional Information
### Licensing Information
The EmoEvent dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
@inproceedings{plaza-del-arco-etal-2020-emoevent,
title = "{{E}mo{E}vent: A Multilingual Emotion Corpus based on different Events}",
author = "{Plaza-del-Arco}, {Flor Miriam} and Strapparava, Carlo and {Ure{\~n}a-L{\'o}pez}, L. Alfonso and {Mart{\'i}n-Valdivia}, M. Teresa",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France", publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.186", pages = "1492--1498",
language = "English",
ISBN = "979-10-95546-34-4"
} |
atokforps/chunk-t1 | 2023-03-09T20:48:30.000Z | [
"region:us"
] | atokforps | null | null | null | 1 | 155 | Entry not found |
Muennighoff/python-bugs | 2023-03-22T07:46:03.000Z | [
"region:us"
] | Muennighoff | null | null | null | 2 | 155 | Entry not found |
Kyle1668/AG-Tweets | 2023-08-09T22:22:37.000Z | [
"region:us"
] | Kyle1668 | null | null | null | 0 | 155 | ---
pretty_name: AG News Tweets
---
## Motivation
AG News is a four-way topic classification task introduced in Zhang et al. (2015). A task model must classify a given news article as being about world events (**World**), sports and athletics (**Sports**), business and economics (**Business**), or scientific developments (**Sci/Tech**). The test set on HuggingFace (https://huggingface.co/datasets/ag_news) is composed of 7,600 examples equally balanced across the four classes.
News topic classification presents a promising opportunity for largely isolating the effect of writing style shifts. Existing deep learning methods also perform well on this dataset, with accuracy reaching higher than 90% (https://paperswithcode.com/sota/text-classification-on-ag-news).
Another motivation for this particular task is the common risk of data augmentation inadvertently flipping the label/semantics of the text (Bayer et al., 2021). Unlike other tasks such as sentiment classification or subtle hate speech, the topic of a news article is unlikely to change during augmentation, thus preserving the original label.
## Creation
We used GPT-3.5 Turbo (Brown et al., 2020) (6/7/23 version) for style transfer. We did an initial pass through all 7,600 examples using a conservative "V1" prompt and greedy decoding. Calls were made using the OpenAI Python SDK with `top_p` and temperature set to zero. The data was then lightly preprocessed to reduce the number of examples that began with **BREAKING NEWS** flanked by emojis.
512 of the initial model responses did not result in satisfactory generations. These were typically cases where the generated text was almost indiscernible from the original text or the generation was entirely emojis. We called GPT-3.5 Turbo again with an updated prompt and hyperparameters (temperature=0.7, top_p=0.9, frequency_penalty=0.5, presence_penalty=0.5) for these examples. Whereas the first-pass generations included no instructions to the model about the sentiment/mood of the hypothetical post author, this time we purposefully instructed the model to "*Add some flare with humor, anger, or sarcasm.*" in the generation.
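The second-pass call can be sketched as follows. This is a reconstruction from the parameters reported above, not the authors' actual code: the prompt wording is a placeholder, and only the request payload is built so the sketch stays self-contained.

```python
def build_style_transfer_request(article_text):
    """Assemble the assumed second-pass chat-completion payload (illustrative only)."""
    prompt = (
        "Rewrite the following news article as a social media post. "
        "Add some flare with humor, anger, or sarcasm.\n\n"
        + article_text
    )
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        # Hyperparameters reported above for the second pass
        "temperature": 0.7,
        "top_p": 0.9,
        "frequency_penalty": 0.5,
        "presence_penalty": 0.5,
    }
```

The payload would then be sent via the OpenAI Python SDK of that era, e.g. `openai.ChatCompletion.create(**build_style_transfer_request(text))`.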
It's important to note that we did not enforce Twitter's character limit. These sequences should be considered as more broadly inspired by social media posts rather than following the exact specifications of Twitter posts. We also did not manually review every sequence in the dataset to confirm that the original label was preserved. GPT 3.5 Turbo also hallucinates facts, such as adding the hashtag \#Olympics2021 even though the original dataset was created in 2015. |
yentinglin/ntu_adl_recitation | 2023-09-21T02:18:47.000Z | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | yentinglin | null | null | null | 0 | 155 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
--- |
shubhamagarwal92/rw_2308_filtered | 2023-09-21T20:48:20.000Z | [
"region:us"
] | shubhamagarwal92 | null | null | null | 0 | 155 | ---
dataset_info:
features:
- name: aid
dtype: string
- name: mid
dtype: string
- name: abstract
dtype: string
- name: corpusid
dtype: int64
- name: text_except_rw
dtype: string
- name: title
dtype: string
- name: related_work
dtype: string
- name: original_related_work
dtype: string
- name: ref_abstract
struct:
- name: abstract
sequence: string
- name: cite_N
sequence: string
- name: corpursid
sequence: string
- name: ref_abstract_original
struct:
- name: abstract
sequence: string
- name: cite_N
sequence: string
- name: corpursid
sequence: string
- name: ref_abstract_full_text
struct:
- name: abstract
sequence: string
- name: all_para_text
sequence: string
- name: cite_N
sequence: string
- name: corpursid
sequence: string
- name: ref_abstract_full_text_original
struct:
- name: abstract
sequence: string
- name: all_para_text
sequence: string
- name: cite_N
sequence: string
- name: corpursid
sequence: string
- name: total_cites
dtype: int64
splits:
- name: test
num_bytes: 254996014
num_examples: 1000
download_size: 106899160
dataset_size: 254996014
---
# Dataset Card for "rw_2308_filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HumanCompatibleAI/ppo-seals-Ant-v1 | 2023-09-27T06:56:10.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | null | 0 | 155 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 141011280
num_examples: 104
download_size: 41078990
dataset_size: 141011280
---
# Dataset Card for "ppo-seals-Ant-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maritaca-ai/sst2_pt | 2023-02-10T13:40:00.000Z | [
"region:us"
] | maritaca-ai | The Stanford Sentiment Treebank consists of sentences from movie reviews and
human annotations of their sentiment. The task is to predict the sentiment of a
given sentence. We use the two-way (positive/negative) class split, and use only
sentence-level labels. | @inproceedings{socher2013recursive,
title={Recursive deep models for semantic compositionality over a sentiment treebank},
author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},
booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing},
pages={1631--1642},
year={2013}
} | null | 1 | 154 | Entry not found |
nbroad/mediasum | 2022-10-25T10:40:11.000Z | [
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2103.06410",
"region:us"
] | nbroad | This large-scale media interview dataset contains 463.6K transcripts with abstractive summaries,
collected from interview transcripts and overview / topic descriptions from NPR and CNN. | @article{zhu2021mediasum,
title={MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization},
author={Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael},
journal={arXiv preprint arXiv:2103.06410},
year={2021}
} | null | 1 | 153 | ---
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- summarization
---
# MediaSum
## Description
This large-scale media interview dataset contains 463.6K transcripts with abstractive summaries,
collected from interview transcripts and overview / topic descriptions from NPR and CNN.
### **NOTE: The authors have requested that this dataset be used for research purposes only**
## Homepage
https://github.com/zcgzcgzcg1/MediaSum
## Paper
https://arxiv.org/abs/2103.06410
## Authors
### Chenguang Zhu*, Yang Liu*, Jie Mei, Michael Zeng
#### Microsoft Cognitive Services Research Group
{chezhu,yaliu10,jimei,nzeng}@microsoft.com
## Citation
@article{zhu2021mediasum,
title={MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization},
author={Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael},
journal={arXiv preprint arXiv:2103.06410},
year={2021}
}
## Dataset size
Train: 443,596
Validation: 10,000
Test: 10,000
The splits were made by using the file located here: https://github.com/zcgzcgzcg1/MediaSum/tree/main/data
## Data details
- id (string): unique identifier
- program (string): the program this transcript came from
- date (string): date of program
- url (string): link to where audio and transcript are located
- title (string): title of the program. some datapoints do not have a title
- summary (string): summary of the program
- utt (list of string): list of utterances by the speakers in the program. corresponds with `speaker`
- speaker (list of string): list of speakers, corresponds with `utt`
Example:
```
{
"id": "NPR-11",
"program": "Day to Day",
"date": "2008-06-10",
"url": "https://www.npr.org/templates/story/story.php?storyId=91356794",
"title": "Researchers Find Discriminating Plants",
"summary": "The \"sea rocket\" shows preferential treatment to plants that are its kin. Evolutionary plant ecologist Susan Dudley of McMaster University in Ontario discusses her discovery.",
"utt": [
"This is Day to Day. I'm Madeleine Brand.",
"And I'm Alex Cohen.",
"Coming up, the question of who wrote a famous religious poem turns into a very unchristian battle.",
"First, remember the 1970s? People talked to their houseplants, played them classical music. They were convinced plants were sensuous beings and there was that 1979 movie, \"The Secret Life of Plants.\"",
"Only a few daring individuals, from the scientific establishment, have come forward with offers to replicate his experiments, or test his results. The great majority are content simply to condemn his efforts without taking the trouble to investigate their validity.",
...
"OK. Thank you.",
"That's Susan Dudley. She's an associate professor of biology at McMaster University in Hamilt on Ontario. She discovered that there is a social life of plants."
],
"speaker": [
"MADELEINE BRAND, host",
"ALEX COHEN, host",
"ALEX COHEN, host",
"MADELEINE BRAND, host",
"Unidentified Male",
...
"Professor SUSAN DUDLEY (Biology, McMaster University)",
"MADELEINE BRAND, host"
]
}
```
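Since `utt` and `speaker` are parallel lists, a readable transcript can be reconstructed by zipping them. A minimal sketch (the helper name is ours, not part of the dataset):

```python
def to_transcript(example):
    """Pair each speaker with their utterance and join into one transcript string."""
    return "\n".join(
        f"{speaker}: {utterance}"
        for speaker, utterance in zip(example["speaker"], example["utt"])
    )
```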
## Using the dataset
```python
from datasets import load_dataset
ds = load_dataset("nbroad/mediasum")
```
## Data location
https://drive.google.com/file/d/1ZAKZM1cGhEw2A4_n4bGGMYyF8iPjLZni/view?usp=sharing
## License
No license specified, but the authors have requested that this dataset be used for research purposes only. |
Ammok/apple_stock_price_from_1980-2021 | 2023-09-09T10:57:38.000Z | [
"task_categories:time-series-forecasting",
"task_categories:tabular-regression",
"language:en",
"license:odc-by",
"region:us"
] | Ammok | null | null | null | 0 | 153 | ---
license: odc-by
task_categories:
- time-series-forecasting
- tabular-regression
language:
- en
pretty_name: apple stock price from 1980-2021
--- |
atmallen/mmlu_binary | 2023-09-19T05:12:16.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 153 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int32
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: validation
num_bytes: 653717
num_examples: 1218
- name: test
num_bytes: 5979564
num_examples: 11526
download_size: 3456524
dataset_size: 6633281
---
# Dataset Card for "mmlu_binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gfissore/arxiv-abstracts-2021 | 2022-10-27T17:08:00.000Z | [
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"task_ids:explanation-generation",
"task_ids:text-simplification",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"arxiv:1905.00075",
"region:us"
] | gfissore | null | null | null | 14 | 152 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: arxiv-abstracts-2021
size_categories:
- 1M<n<10M
source_datasets: []
task_categories:
- summarization
- text-retrieval
- text2text-generation
task_ids:
- explanation-generation
- text-simplification
- document-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
---
# Dataset Card for arxiv-abstracts-2021
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Clement et al., 2019, On the Use of ArXiv as a Dataset, https://arxiv.org/abs/1905.00075](https://arxiv.org/abs/1905.00075)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Giancarlo Fissore](mailto:giancarlo.fissore@gmail.com)
### Dataset Summary
A dataset of metadata including title and abstract for all arXiv articles up to the end of 2021 (~2 million papers).
Possible applications include trend analysis, paper recommender engines, category prediction, knowledge graph construction and semantic search interfaces.
In contrast to [arxiv_dataset](https://huggingface.co/datasets/arxiv_dataset), this dataset doesn't include papers submitted to arXiv after 2021 and it doesn't require any external download.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{
"id": "1706.03762",
"submitter": "Ashish Vaswani",
"authors": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion\n Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin",
"title": "Attention Is All You Need",
"comments": "15 pages, 5 figures",
"journal-ref": null,
"doi": null,
"abstract": " The dominant sequence transduction models are based on complex recurrent or\nconvolutional neural
networks in an encoder-decoder configuration. The best\nperforming models also connect the encoder and decoder through
an attention\nmechanism. We propose a new simple network architecture, the Transformer, based\nsolely on attention
mechanisms, dispensing with recurrence and convolutions\nentirely. Experiments on two machine translation tasks show
these models to be\nsuperior in quality while being more parallelizable and requiring significantly\nless time to
train. Our model achieves 28.4 BLEU on the WMT 2014\nEnglish-to-German translation task, improving over the existing
best results,\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\ntranslation task, our model
establishes a new single-model state-of-the-art\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small
fraction\nof the training costs of the best models from the literature. We show that the\nTransformer generalizes well
to other tasks by applying it successfully to\nEnglish constituency parsing both with large and limited training
data.\n",
"report-no": null,
"categories": [
"cs.CL cs.LG"
],
"versions": [
"v1",
"v2",
"v3",
"v4",
"v5"
]
}
```
### Data Fields
These fields are detailed on the [arXiv submission help page](https://arxiv.org/help/prep):
- `id`: ArXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report Number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the ArXiv system
- `versions`: List of version tags for the paper (e.g. `v1` through `v5`)
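Note that in an instance the `categories` field arrives as a list of space-separated strings (e.g. `["cs.CL cs.LG"]`). A minimal sketch of normalizing it into individual tags — the helper names below are illustrative, not part of the dataset:

```python
def split_categories(categories):
    """Flatten a list of space-separated category strings into individual tags."""
    tags = []
    for entry in categories:
        tags.extend(entry.split())
    return tags

def primary_category(categories):
    """arXiv conventionally lists the primary category first."""
    tags = split_categories(categories)
    return tags[0] if tags else None

record = {"id": "1706.03762", "categories": ["cs.CL cs.LG"]}
print(split_categories(record["categories"]))  # ['cs.CL', 'cs.LG']
print(primary_category(record["categories"]))  # 'cs.CL'
```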
### Data Splits
No splits
## Dataset Creation
### Curation Rationale
For about 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming, depth. In these times of unique global challenges, efficient extraction of insights from data is essential. The `arxiv-abstracts-2021` dataset aims at making the arXiv more easily accessible for machine learning applications, by providing important metadata (including title and abstract) for ~2 million papers.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The language producers are members of the scientific community at large, but not necessarily affiliated to any institution.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The full names of the papers' authors are included in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original data is maintained by [ArXiv](https://arxiv.org/)
### Licensing Information
The data is under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
BeIR/webis-touche2020-qrels | 2022-10-23T06:07:03.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 152 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. The first row is a header. For example: `q1 doc1 1`
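A minimal sketch of reading such a qrels `.tsv` (header row, then `query-id`, `corpus-id`, `score`) into a nested dict — the function name is illustrative:

```python
import csv
import io

def load_qrels(tsv_text):
    """Parse BEIR-style qrels TSV text into {query_id: {corpus_id: score}}."""
    qrels = {}
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    next(reader)  # skip the header row
    for query_id, corpus_id, score in reader:
        qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels

example = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq2\tdoc2\t1\n"
print(load_qrels(example))  # {'q1': {'doc1': 1}, 'q2': {'doc2': 1}}
```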
### Data Instances
A high-level example from any BEIR dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
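Given structures like these, a relevance metric can be computed directly against the qrels. The sketch below assumes `run` is the (hypothetical) output of some retriever, mapping each query id to a ranked list of document ids, and scores it with recall@k:

```python
def recall_at_k(qrels, run, k):
    """Per-query fraction of judged-relevant documents found in the top-k results."""
    scores = {}
    for query_id, relevant in qrels.items():
        retrieved = run.get(query_id, [])[:k]
        hits = sum(1 for doc_id in retrieved if relevant.get(doc_id, 0) > 0)
        scores[query_id] = hits / len(relevant) if relevant else 0.0
    return scores

qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
run = {"q1": ["doc1", "doc2"], "q2": ["doc1", "doc2"]}
print(recall_at_k(qrels, run, k=1))  # {'q1': 1.0, 'q2': 0.0}
```

(BEIR itself reports stronger rank-aware metrics such as nDCG@10; this is only meant to show how the qrels dict is consumed.)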
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
forta/malicious-smart-contract-dataset | 2023-01-10T22:03:23.000Z | [
"task_categories:token-classification",
"size_categories:100K<n<1M",
"license:mit",
"smart contract",
"ethereum",
"blockchain",
"security",
"region:us"
] | forta | null | null | null | 9 | 152 | ---
license: mit
task_categories:
- token-classification
tags:
- smart contract
- ethereum
- blockchain
- security
pretty_name: Malicious Smart Contract Classification Dataset
size_categories:
- 100K<n<1M
---
# Malicious Smart Contract Classification Dataset
This dataset includes malicious and benign smart contracts deployed on Ethereum.
Code used to collect this data: [data collection notebook](https://github.com/forta-network/starter-kits/blob/main/malicious-smart-contract-ml-py/data_collection.ipynb)
For more details on how this dataset can be used, please check out this blog: [How Forta’s Predictive ML Models Detect Attacks Before Exploitation](https://forta.org/blog/how-fortas-predictive-ml-models-detect-attacks-before-exploitation/) |