| id | lastModified | tags | author | description | citation | likes | downloads | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|
Bingsu/KSS_Dataset | 2022-07-02T00:10:10.000Z | [
"task_categories:text-to-speech",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"license:cc-by-nc-sa-4.0",
"region:us"
] | Bingsu | null | null | 3 | 23 | 2022-04-19T06:59:21 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Korean Single Speaker Speech Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-to-speech
task_ids: []
---
## Dataset Description
- **Homepage:** [Korean Single Speaker Speech Dataset](https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset)
- **Repository:** [Kyubyong/kss](https://github.com/Kyubyong/kss)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
# Description of the original author
### KSS Dataset: Korean Single speaker Speech Dataset
KSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To the best of my knowledge, this is the first publicly available speech dataset for Korean.
### File Format
Each line in `transcript.v.1.3.txt` is delimited by `|` into six fields.
- A. Audio file path
- B. Original script
- C. Expanded script
- D. Decomposed script
- E. Audio duration (seconds)
- F. English translation
e.g.,
1/1_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes.
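For reference, here is a minimal sketch of splitting one such line into its six fields with plain Python (the sample line is the one shown above):
```python
# Split a transcript line on the "|" delimiter into its six fields.
line = "1/1_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes."
path, original, expanded, decomposed, duration, translation = line.split("|")
print(path, float(duration), translation)
```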
### Specification
- Audio File Type: wav
- Total Running Time: 12+ hours
- Sample Rate: 44,100 Hz (44.1 kHz)
- Number of Audio Files: 12,853
- Sources
  - [1] [Kyubyong Park, 500 Basic Korean Verbs, Tuttle Publishing, 2015.](https://www.amazon.com/500-Basic-Korean-Verbs-Comprehensive/dp/0804846057/ref=sr_1_1?s=books&ie=UTF8&qid=1522911616&sr=1-1&keywords=kyubyong+park)
  - [2] [Kyubyong Park, 500 Basic Korean Adjectives 2nd Ed., Youkrak, 2015.](http://www.hanbooks.com/500bakoad.html)
  - [3] [Kyubyong Park, Essential Korean Vocabulary, Tuttle Publishing, 2015.](https://www.amazon.com/Essential-Korean-Vocabulary-Phrases-Fluently/dp/0804843252/ref=sr_1_3?s=books&ie=UTF8&qid=1522911806&sr=1-3&keywords=kyubyong+park)
  - [4] [Kyubyong Park, Tuttle Learner's Korean-English Dictionary, Tuttle Publishing, 2012.](https://www.amazon.com/Tuttle-Learners-Korean-English-Dictionary-Essential/dp/0804841500/ref=sr_1_8?s=books&ie=UTF8&qid=1522911806&sr=1-8&keywords=kyubyong+park)
### License
CC BY-NC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you can use it freely.
### Citation
If you want to cite KSS Dataset, please refer to this:
Kyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset, 2018
### Reference
Check out [this](https://github.com/Kyubyong/kss) for a project using this KSS Dataset.
### Contact
You can contact me at kbpark.linguist@gmail.com.
April, 2018.
Kyubyong Park
### Dataset Summary
12,853 Korean audio files with transcription.
### Supported Tasks and Leaderboards
text-to-speech
### Languages
Korean
## Dataset Structure
### Data Instances
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/KSS_Dataset")
>>> dataset["train"].features
{'audio': Audio(sampling_rate=44100, mono=True, decode=True, id=None),
'original_script': Value(dtype='string', id=None),
'expanded_script': Value(dtype='string', id=None),
'decomposed_script': Value(dtype='string', id=None),
'duration': Value(dtype='float32', id=None),
'english_translation': Value(dtype='string', id=None)}
```
```python
>>> dataset["train"][0]
{'audio': {'path': None,
'array': array([ 0.00000000e+00, 3.05175781e-05, -4.57763672e-05, ...,
0.00000000e+00, -3.05175781e-05, -3.05175781e-05]),
'sampling_rate': 44100},
'original_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'expanded_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'decomposed_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'duration': 3.5,
'english_translation': 'He seemed to be pretending to be okay.'}
```
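The audio is stored at 44,100 Hz. If a downstream model expects a different rate, the column can be resampled on the fly by casting the `Audio` feature; a minimal sketch, where the 16 kHz target is only an illustrative choice:
```python
from datasets import load_dataset, Audio

dataset = load_dataset("Bingsu/KSS_Dataset")
# Decode audio at 16 kHz instead of the stored 44.1 kHz (illustrative target rate).
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print(dataset["train"][0]["audio"]["sampling_rate"])  # 16000
```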
### Data Splits
| | train |
|---------------|------:|
| # of examples | 12853 | | 4,170 | [embedding vector truncated] |
lcampillos/ctebmsp | 2022-07-23T22:48:56.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | lcampillos | null | null | 1 | 23 | 2022-06-21T09:35:11 | ---
license: cc-by-4.0
language:
- es
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: CT-EBM-SP
---
# CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.lllf.uam.es/ESP/nlpmedterm_en.html
- **Repository:** http://www.lllf.uam.es/ESP/nlpdata/wp2/CT-EBM-SP.zip
- **Paper:** Campillos-Llanos, L., Valverde-Mateos, A., Capllonch-Carrión, A., & Moreno-Sandoval, A. (2021). A clinical trials corpus annotated with UMLS entities to enhance the access to evidence-based medicine. BMC medical informatics and decision making, 21(1), 1-19
- **Point of Contact:** leonardo.campillos AT gmail.com
### Dataset Summary
The [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) is a collection of 1,200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, available for example in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trial announcements published in the European Clinical Trials Register and the Repositorio Español de Estudios Clínicos
If you use the CT-EBM-SP resource, please, cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
### Supported Tasks
Medical Named Entity Recognition
### Languages
Spanish
## Dataset Structure
### Data Instances
- 292,173 tokens
- 46,699 entities of the following [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) semantic groups:
  - ANAT (anatomy and body parts): 6,728 entities
  - CHEM (chemical and pharmacological substances): 9,224 entities
  - DISO (pathologic conditions): 13,067 entities
  - PROC (therapeutic and diagnostic procedures, and laboratory analyses): 17,680 entities
### Data Splits
- Train: 175,203 tokens, 28,101 entities
- Development: 58,670 tokens, 9,629 entities
- Test: 58,300 tokens, 8,969 entities
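As a usage sketch, the corpus can presumably be loaded from the Hub with 🤗 Datasets; the split names follow the train/development/test description above, while the exact feature names (e.g. token and NER-tag columns) are not stated in this card and should be checked against the repository:
```python
from datasets import load_dataset

# Assumes the Hub repo "lcampillos/ctebmsp" is directly loadable.
ds = load_dataset("lcampillos/ctebmsp")
print(ds)              # expected splits: train / development (validation) / test
print(ds["train"][0])  # inspect one annotated example; field names may differ
```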
## Dataset Creation
### Source Data
- Abstracts from journals published under a Creative Commons license, available in [PubMed](https://pubmed.ncbi.nlm.nih.gov/) or the [Scientific Electronic Library Online (SciELO)](https://scielo.org/es/)
- Clinical trials announcements published in the [European Clinical Trials Register](https://www.clinicaltrialsregister.eu) and [Repositorio Español de Estudios Clínicos](https://reec.aemps.es)
### Annotations
#### Who are the annotators?
- Leonardo Campillos-Llanos, Computational Linguist, Consejo Superior de Investigaciones Científicas
- Adrián Capllonch-Carrión, Medical Doctor, Centro de Salud Retiro, Hospital Universitario Gregorio Marañón
- Ana Valverde-Mateos, Medical Lexicographer, Spanish Royal Academy of Medicine
## Considerations for Using the Data
**Disclosure**: This dataset is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision.
This resource is intended for general purposes and may contain biases and/or other undesirable distortions.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of this dataset.
**Disclaimer**: This dataset is under development and must not be used for medical decision-making.
This resource is intended for general purposes and may contain biases and/or other undesirable distortions.
The owner or creator of the models will in no event be liable for any results arising from the use that third parties make of this data. | 5,207 | [embedding vector truncated] |
holylovenia/TITML-IDN | 2022-10-25T06:23:17.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:id",
"license:other",
"speech-recognition",
"region:us"
] | holylovenia | null | null | 0 | 23 | 2022-07-04T06:25:01 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- id
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'TITML-IDN: A large vocabulary continuous speech recognition system for
Indonesian language'
tags:
- speech-recognition
---
# IndoLVCSR
TITML-IDN (Tokyo Institute of Technology Multilingual - Indonesian) was collected and proposed by the authors of "A Large Vocabulary Continuous Speech Recognition System for Indonesian Language". The text transcriptions were obtained from newspaper and magazine articles, and the speech was recorded from 20 speakers (11 male and 9 female).
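A hypothetical loading sketch with 🤗 Datasets is shown below; note that the corpus is distributed under a restrictive ("other") license, so direct loading may require prior authorization or a manual download step, and the split and field names are assumptions rather than facts from this card:
```python
from datasets import load_dataset

# Assumes "holylovenia/TITML-IDN" can be loaded directly from the Hub.
ds = load_dataset("holylovenia/TITML-IDN")
print(ds["train"][0])  # expected: an audio record plus its transcription
```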
# How to cite
If you use this dataset, please cite the following paper:
```
@inproceedings{lestari2006titmlidn,
title={A large vocabulary continuous speech recognition system for Indonesian language},
author={Lestari, Dessi Puji and Iwano, Koji and Furui, Sadaoki},
booktitle={15th Indonesian Scientific Conference in Japan Proceedings},
pages={17--22},
year={2006}
}
``` | 1,119 | [embedding vector truncated] |
embedding-data/altlex | 2022-08-02T01:53:24.000Z | [
"language:en",
"license:mit",
"region:us"
] | embedding-data | null | null | 0 | 23 | 2022-07-07T23:00:22 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/altlex
pretty_name: altlex
---
# Dataset Card for "altlex"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/chridey/altlex](https://github.com/chridey/altlex)
- **Repository:** [https://github.com/chridey/altlex](https://github.com/chridey/altlex)
- **Paper:** [https://aclanthology.org/P16-1135.pdf](https://aclanthology.org/P16-1135.pdf)
- **Point of Contact:** [Christopher Hidey](mailto:ch3085@columbia.edu)
### Dataset Summary
Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles."
Disclaimer: The team releasing altlex did not upload the dataset to the Hub and did not write a dataset card.
These steps were done by the Hugging Face team.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains a pair of similar sentences and is formatted as a dictionary with the key `"set"` whose value is a list holding the two sentences:
```
{"set": [sentence_1, sentence_2]}
{"set": [sentence_1, sentence_2]}
...
{"set": [sentence_1, sentence_2]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/altlex")
```
The dataset is loaded as a `DatasetDict` and has the format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 112696
})
})
```
Inspect the example at index `i` with:
```python
dataset["train"][i]["set"]
```
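Since the card recommends the dataset for Sentence Transformers training, here is a minimal sketch of one way to use the pairs with the `sentence-transformers` library and a multiple-negatives ranking loss; the base model and hyperparameters are illustrative choices, not part of this card:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

dataset = load_dataset("embedding-data/altlex", split="train")

# Each example's "set" holds a pair of similar sentences; treat them as positive pairs.
train_examples = [InputExample(texts=row["set"]) for row in dataset]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # illustrative base model
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```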
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/chridey/altlex)
#### Who are the source language producers?
[More Information Needed](https://github.com/chridey/altlex)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/chridey/altlex)
#### Who are the annotators?
[More Information Needed](https://github.com/chridey/altlex)
### Personal and Sensitive Information
[More Information Needed](https://github.com/chridey/altlex)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/chridey/altlex)
### Discussion of Biases
[More Information Needed](https://github.com/chridey/altlex)
### Other Known Limitations
[More Information Needed](https://github.com/chridey/altlex)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/chridey/altlex)
### Licensing Information
[More Information Needed](https://github.com/chridey/altlex)
### Citation Information
### Contributions
- [@chridey](https://github.com/chridey/altlex/commits?author=chridey) for adding this dataset to GitHub.
---
| 4,130 | [embedding vector truncated] |
biglam/clmet_3_1 | 2022-07-18T02:14:38.000Z | [
"task_categories:text-classification",
"task_categories:fill-mask",
"task_ids:multi-label-classification",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categorie... | biglam | The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet,
Susanne Flach, Hans-Jürgen Diller and Jukka Tyrkkö, as an offshoot of a bigger project developing a database of text
descriptors (Diller, De Smet & Tyrkkö 2011). CLMET3.1 is a principled collection of public domain texts drawn from
various online archiving projects. This dataset can be used for part-of-speech tagging, NER and text classification | @article{de2015corpus,
title={Corpus of Late Modern English texts (version 3.1)},
author={De Smet, Hendrik and Flach, Susanne and Tyrkk{\"o}, Jukka and Diller, Hans-J{\"u}rgen},
year={2015}
} | 0 | 23 | 2022-07-17T23:27:04 | ---
annotations_creators:
- expert-generated
- machine-generated
language:
- 'en'
language_creators:
- found
paperswithcode_id: null
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Corpus of Late Modern English Texts v3.1'
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
- fill-mask
task_ids:
- multi-label-classification
- masked-language-modeling
---
# Dataset Card for clmet_3_1
**NOTES**:
- Some of the annotations in the `class` and `pos` configs are not properly formed. These are indicated with warning messages when the dataset is loaded.
- In addition to the classes mentioned in the README for the dataset, there is an additional class in the `class` dataset called `QUOT`. As far as I can tell, this is used for tagging all quotation marks
- When the `class` and `pos` configs are loaded, the available class/pos tags are shown at the top
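A minimal loading sketch follows; the config names (`plain`, `class`, `pos`) are inferred from the notes and sample headings in this card and should be verified against the repository:
```python
from datasets import load_dataset

# Config and field names inferred from this card; the "train" split is assumed.
plain = load_dataset("biglam/clmet_3_1", "plain")
pos = load_dataset("biglam/clmet_3_1", "pos")

print(plain["train"][0]["genre"], plain["train"][0]["year"])
print(pos["train"][0]["pos_tags"][:10])
```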
## Dataset Statistics:
The following table summarises the corpus make-up:
| PERIOD | #authors | #texts | #tokens (CQP3.1) | #tokens (non-PUNC) |
|-----------|----------|---------------------|--------|---------|
|1710-1780 | 51 | 88 | 12,182,064 | 10,415,721|
|1780-1850 | 70 | 99 | 13,300,457 | 11,269,977|
|1850-1920 | 91 | 146 | 14,858,239 | 12,657,159|
|TOTAL | 212 | 333 | 40,340,760 | 34,342,857|
| GENRE (all tokens) | 1710-1780 | 1780-1850 | 1850-1920 |
|---|---|---|---|
|Narrative fiction | 5,405,645 | 5,780,352 | 7,561,339 |
|Narrative non-fiction | 2,145,946 | 2,261,485 | 1,097,487 |
|Drama | 523,318 | 441,040 | 763,352 |
|Letters | 1,208,219 | 842,795 | 554,046 |
|Treatise | 1,263,090 | 1,927,272 | 2,030,210 |
|Other | 1,635,846 | 2,047,513 | 2,851,805 |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** http://fedora.clarin-d.uni-saarland.de/clmet/clmet.html
- **Repository:** [Needs More Information]
- **Paper:** https://icame.info/icame_static/ij29/ij29-page69-82.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Hendrik De Smet](https://www.arts.kuleuven.be/ling/func/members/hendrik-desmet/func)
### Dataset Summary
The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-Jürgen Diller and Jukka Tyrkkö, as an offshoot of a bigger project developing a database of text descriptors (Diller, De Smet & Tyrkkö 2011). CLMET3.1 is a principled collection of public domain texts drawn from various online archiving projects. In total, the corpus contains some 34 million words of running text. It incorporates CLMET, CLMETEV, and CLMET3.0, and has been compiled following roughly the same principles, that is:
- The corpus covers the period 1710–1920, divided into three 70-year sub-periods.
- The texts making up the corpus have all been written by British and Irish authors who are native speakers of English.
- The corpus never contains more than three texts by the same author.
- The texts within each sub-period have been written by authors born within a correspondingly restricted sub-period.
### Supported Tasks and Leaderboards
- `named-entity-recognition`: Since this dataset is tagged, it can be used for performing NER
- `text-classification`: Each text comes with the date of the text and can be used to perform stylistic classification of texts
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`
## Dataset Structure
### Data Instances
A `plain` sample looks as follows:
```
{'text': "\nFAME AND THE POET\n \nDRAMATIS PERSONAE�\n \nHarry de Reves , a Poet .\n \n( This name , though of course of French origin , has become anglicised and is pronounced de Reevs . )\n \nDick Prattle , a Lieutenant-Major of the Royal Horse Marines .\n \nFame .\n \nScene\n \nThe Poet 's rooms in London .\nWindows in back .\nA high screen in a corner .\n \nTime : February 30th .\n \nThe Poet is sitting at a table writing .\n \n[ Enter Dick Prattle .\n \nPrattle : Hullo , Harry .\n \nde
Reves : Hullo , Dick .\nGood Lord , where are you from ?\n \nPrattle ( casually ) : The ends of the earth .\n \nde Reves : Well , I 'm damned !\n \nPrattle : Thought I 'd drop in and see how you were getting on .\n \nde Reves : Well , that 's splendid .\nWhat are you doing in London ?\n \nPrattle : Well , I wanted to see if I could get one or two decent ties to wear - you can get nothing out there - then I thought I 'd have a look and see how London was getting on .\n \nde Reves : Splendid !\nHow 's everybody ?\n \nPrattle : All going strong .\n \nde Reves : That 's good .\n \nPrattle ( seeing paper and ink ) : But what are you doing ?\n \nde Reves : Writing .\n \nPrattle : Writing ?\nI did n't know you wrote .\n \nde Reves : Yes , I
've taken to it rather .\n \nPrattle : I say - writing 's no good .\nWhat do you write ?\n \nde Reves : Oh , poetry .\n \nPrattle : Poetry !\nGood Lord !\n \nde Reves : Yes , that sort of thing , you know .\n \nPrattle : Good Lord !\nDo you make any money by it ?\n \nde Reves : No .\nHardly any .\n \nPrattle : I say - why do n't you chuck it ?\n \nde Reves : Oh , I do n't know .\nSome people seem to like my stuff , rather .\nThat 's why I go on .\n \nPrattle : I 'd chuck it if there 's no money in it .\n \nde Reves : Ah , but then it 's hardly in your line , is it ?\nYou 'd hardly approve of poetry if there was money in it .\n \nPrattle : Oh , I do n't say that .\nIf I could make as much by poetry as I can by betting I do n't say I would n't try the poetry touch , only - -\n \nde Reves : Only what ?\n \nPrattle : Oh , I do n't know .\nOnly there seems more sense in betting , somehow .\n \nde Reves : Well , yes .\nI suppose it 's easier to tell what an earthly horse is going to
do , than to tell what Pegasus - -\n \nPrattle : What 's Pegasus ?\n \nde Reves : Oh , the winged horse of poets .\n \nPrattle : I say !\nYou do n't believe in a winged horse , do you ?\n \nde Reves : In our trade we believe in all fabulous things
.\nThey all represent some large truth to us .\nAn emblem like Pegasus is as real a thing to a poet as a Derby winner would be to you .\n \nPrattle : I say .\n( Give me a cigarette .\nThanks . )\nWhat ?\nThen you 'd believe in nymphs and fauns , and Pan , and all those kind of birds ?\n \nde Reves : Yes .\nYes .\nIn all of them .\n \nPrattle : Good Lord !\n \nde Reves : You believe in the Lord Mayor of London , do n't you ?\n \nPrattle : Yes , of course ; but what has - -\n \nde Reves : Four million people or so made him Lord Mayor , did n't they ?\nAnd he represents to them the wealth and dignity and tradition of - -\n \nPrattle : Yes ; but , I say , what has all this - -\n \nde Reves : Well , he stands for an idea to them , and they made him Lord Mayor , and so he is one ...\n \nPrattle : Well , of course he is .\n \nde Reves : In the same way Pan has been made what he is by millions ; by millions to whom he represents world-old traditions .\n \nPrattle ( rising from his chair and stepping backwards , laughing and looking at the Poet in a kind of assumed wonder ) : I say ... I say ... You old heathen ... but Good Lord ...\n \n[ He bumps into the high screen behind , pushing it back a little .\n \nde Reves : Look out !\nLook out !\n \nPrattle : What ?\nWhat 's the matter ?\n \nde Reves : The screen !\n \nPrattle : Oh , sorry , yes .\nI 'll put it right .\n \n[ He is about to go round behind it .\n \nde Reves : No , do n't go round there .\n \nPrattle : What ?\nWhy not ?\n \nde Reves : Oh , you would n't understand .\n \nPrattle : Would n't understand ?\nWhy , what have you got ?\n \nde Reves : Oh , one of those things ... You would n't understand .\n \nPrattle : Of course I 'd understand .\nLet 's have a look .\n \n[ The Poet walks towards Prattle and the screen .\nHe protests no further .\nPrattle looks round the corner of the screen .\n \nAn altar .\n \nde Reves ( removing the screen altogether ) : That is all .\nWhat do you make of it ?\n \n[ An
altar of Greek design , shaped like a pedestal , is revealed .\nPapers litter the floor all about it .\n \nPrattle : I say - you always were an untidy devil .\n \nde Reves : Well , what do you make of it ?\n \nPrattle : It reminds me of your room at Eton .\n \nde Reves : My room at Eton ?\n \nPrattle : Yes , you always had papers all over your floor .\n \nde Reves : Oh , yes - -\n \nPrattle : And what are these ?\n \nde Reves : All these are poems ; and this is my altar to Fame .\n \nPrattle : To Fame ?\n \nde Reves : The same that Homer knew .\n \nPrattle : Good Lord !\n \nde Reves : Keats never saw her .\nShelley died too young .\nShe came late at the best of times , now scarcely ever .\n \nPrattle : But , my dear fellow , you do n't mean that you think there really is such a person ?\n \nde Reves : I offer all my songs to her .\n \nPrattle : But you do n't mean you think you could actually see Fame ?\n \nde Reves : We poets personify abstract things , and not poets only but
sculptors7 and painters too .\nAll the great things of the world are those abstract things .\n \nPrattle : But what I mean is , they 're not really there , like you or me .\n \nde Reves : To us these things are more real than men , they outlive generations , they watch the passing of kingdoms : we go by them like dust ; they are still there , unmoved , unsmiling .\n \nPrattle : But , but , you ca n't think that you could see Fame , you do n't expect to see it ?\n \nde Reves : Not to me .\nNever to me .\nShe of the golden trumpet and Greek dress will never appear to me ... We all have our dreams .\n \nPrattle : I say - what have you been doing all day ?\n \nde Reves : I ?\nOh , only writing a sonnet .\n \nPrattle : Is it a long one ?\n \nde Reves : Not very .\n \nPrattle : About how long is it ?\n \nde Reves : About fourteen lines .\n \nPrattle ( impressively ) : I tell you what it is .\n \nde Reves : Yes ?\n \nPrattle : I tell you what .\nYou 've been overworking yourself .\nI
once got like that on board the Sandhurst , working for the passing-out exam .\nI got so bad that I could have seen anything .\n \nde Reves : Seen anything ?\n \nPrattle : Lord , yes ; horned pigs , snakes with wings ; anything ; one of your winged horses even .\nThey gave me some stuff called bromide for it .\nYou take a rest .\n \nde Reves : But my dear fellow , you do n't understand at all .\nI merely said that abstract things are to a poet as near and real and visible as one of your bookmakers or barmaids .\n \nPrattle : I know .\nYou take a rest .\n \nde Reves : Well , perhaps I will .\nI 'd come with you to that musical comedy you 're going to see , only I 'm a bit tired after writing this ; it 's a tedious job .\nI 'll come another night .\n \nPrattle : How do you know I 'm going to see a musical comedy ?\n \nde Reves : Well , where would you go ?\nHamlet 's 8 on at the Lord Chamberlain 's .\nYou 're not going there .\n \nPrattle : Do I look like it ?\n \nde Reves : No .\n \nPrattle : Well , you 're quite right .\nI 'm going to see `` The Girl from Bedlam . ''\nSo long .\nI must push off now .\nIt 's getting late .\nYou take a rest .\nDo n't add another line to that sonnet ; fourteen 's quite enough .\nYou take a
rest .\nDo n't have any dinner to-night , just rest .\nI was like that once myself .\nSo long .\n \nde Reves : So long .\n \n[ Exit Prattle .\nde Reves returns to his table and sits down .\n \nGood old Dick !\nHe 's the same as ever .\nLord , how time passes .\n \nHe takes his pen and his sonnet and makes a few alterations .\n \nWell , that 's finished .\nI ca n't do any more to it .\n \n[ He rises and goes to the screen ; he draws back part of it and goes up to the altar .\nHe is about to place his sonnet reverently at the foot of the altar amongst his other verses .\n \nNo , I will not put it there .\nThis one is worthy of the altar .\n \n[ He places the sonnet upon the altar itself .\n \nIf that sonnet does not give me fame , nothing that I have done before will give it to me , nothing that I ever will do .\n \n[ He replaces the screen and returns to his chair at the table .\nTwilight is coming on .\nHe sits with his elbow on the table , his head on his hand , or however the actor pleases .\n \nWell , well .\nFancy seeing Dick again .\nWell , Dick enjoys his life , so he 's no fool .\nWhat was that he said ?\n`` There 's no money in poetry .\nYou 'd better chuck it . ''\nTen years ' work and what have I to show for it ?\nThe admiration of men who care for poetry , and how many of them are there ?\nThere 's a bigger demand for smoked glasses to look at eclipses of the sun .\nWhy should Fame come to me ?\nHave n't I given up my days for her ?\nThat is enough to keep her away .\nI am a poet ; that is enough reason for her to slight me .\nProud and aloof and cold as marble , what does Fame care for us ?\nYes , Dick is right .\nIt 's a poor game chasing illusions , hunting the intangible , pursuing dreams .\nDreams ?\nWhy , we are ourselves dreams .\n \n[ He leans back in his chair .\n \nWe are such stuff As dreams are made on , and our little life Is rounded with a sleep .\n[ He is silent for a while .\nSuddenly he lifts his head .\n \nMy room at Eton , Dick said .\nAn untidy mess .\n \n[ As he lifts his head and says these words , twilight gives place to broad daylight , merely as a hint that the author of the play may have been mistaken , and the whole thing may have been no more than a poet
's dream .\n \nSo it was , and it 's an untidy mess there ( looking at screen ) too .\nDick 's right .\nI 'll tidy it up .\nI 'll burn the whole damned heap ,\n \n[ He advances impetuously towards the screen .\n \nevery damned poem that I was ever
fool enough to waste my time on .\n \n[ He pushes back the screen .\nFame in a Greek dress with a long golden trumpet in her hand is seen standing motionless on the altar like a marble goddess .\n \nSo ... you have come !\n \n[ For a while he stands thunderstruck .\nThen he approaches the altar .\n \nDivine fair lady , you have come .\n \n[ He holds up his hand to her and leads her down from the altar and into the centre of the stage .\nAt whatever moment the actor finds it most convenient , he repossesses himself of the sonnet that he had placed on the altar .\nHe now offers it to Fame .\n \nThis is my sonnet .\nIs it well done ?\n \n[ Fame takes it and reads it in silence , while the Poet watches her rapturously .\n \nFame : You 're a bit of all right .\n \nde Reves : What ?\n \nFame : Some poet .\n \nde Reves : I - I - scarcely ... understand .\n \nFame : You 're IT .\n \nde Reves : But ... it is not possible ... are you she that knew Homer ?\n \nFame : Homer ?\nLord , yes .\nBlind old bat , ' e could n't see a yard .\n \nde Reves : O Heavens !\n \n[ Fame walks beautifully to the window .\nShe opens it and puts her head out .\n \nFame ( in a voice with which a woman in an upper storey would cry for help if the house was well alight ) : Hi !\nHi !\nBoys !\nHi !\nSay , folks !\nHi !\n \n[ The murmur of a gathering crowd is heard .\nFame blows her trumpet .\n \nFame : Hi , he 's a poet !\n( Quickly , over her shoulder . )\nWhat 's your name ?\n \nde Reves : De Reves .\n \nFame : His name 's de Reves .\n \nde Reves : Harry de Reves .\n \nFame : His pals call him Harry .\n \nThe Crowd : Hooray !\nHooray !\nHooray !\n \nFame : Say , what 's your favourite colour ?\n \nde Reves : I ... I ... I do n't quite understand .\n \nFame : Well , which do you like best , green or blue ?\n \nde Reves : Oh - er - blue .\n \n[ She blows her trumpet out of the window .\n \nNo - er - I think green .\n \nFame : Green is his favourite colour .\n \nThe Crowd : Hooray !\nHooray !\nHooray !\n \nFame : ` Ere , tell us something .\nThey want to know all about yer .\n \nde Reves : Would n't 9 you perhaps ... would they care to hear my sonnet , if you would - er ...\n \nFame ( picking up quill ) : Here , what 's this ?\n \nde Reves : Oh , that 's my pen .\n \nFame ( after another blast on her trumpet ) : He writes with a quill .\n \n[ Cheers from the Crowd .\n \nFame ( going to a cupboard ) : Here , what have you got in here ?\n \nde Reves : Oh ... er ... those are my breakfast things .\n \nFame ( finding a dirty plate ) : What have yer had on this one ?\n \nde Reves ( mournfully ) : Oh , eggs and bacon .\n \nFame ( at the window ) : He has eggs and bacon for breakfast .\n \nThe Crowd : Hip hip hip , hooray !\nHip hip hip , hooray !\nHip hip hip , hooray !\nFame : Hi , and what 's this ?\n \nde Reves ( miserably ) : Oh , a golf stick .\n \nFame : He 's a man 's man !\nHe 's a virile man !\nHe 's a manly man !\n \n[ Wild cheers from the Crowd , this time only from women 's voices .\n \nde Reves : Oh , this is terrible .\nThis is terrible .\nThis is terrible .\n \n[ Fame gives another peal on her horn .\nShe is about to speak .\n \nde Reves ( solemnly and mournfully ) : One moment , one moment ...\n \nFame : Well , out with it .\n \nde Reves : For ten years , divine lady , I have worshipped you , offering all my songs ... I find ... 
I find I am not worthy ...\n \nFame : Oh , you 're all right .\n \nde Reves : No , no , I am not worthy .\nIt can not be .\nIt can not possibly be .\nOthers deserve you more .\nI must say it !\nI can not possibly love you .\nOthers are worthy .\nYou will find others .\nBut I , no , no , no .\nIt can not be .\nIt can not be .\nOh , pardon me , but it must not .\n \n[ Meanwhile Fame has been lighting one of his cigarettes .\nShe sits in a comfortable chair , leans right back , and puts her feet right up on the table amongst the poet 's papers .\n \nOh , I fear I offend you .\nBut - it can not be .\n \nFame : Oh , that 's all right , old bird ; no offence .\nI ai n't going to leave you .\n \nde Reves : But - but - but - I do not understand .\n \nFame : I 've come to stay , I have .\n \n[ She blows a puff of smoke through her trumpet .\n \nCURTAIN .\n", 'genre': 'Drama', 'subgenre': 'drama', 'year': '1919', 'quarter_cent': '1900-1924', 'decade': '1910s', 'title': 'Fame and the poet', 'author': 'Dunsany [Edward John Moreton Drax Plunkett]', 'notes': '', 'comments': 'selected from larger
file', 'period': '1850-1920', 'id': '317'}
```
A `pos` sample looks as follows:
```
{'text': ['FAME', 'AND', 'THE', 'POET', 'DRAMATIS', 'PERSONAE�', 'Harry', 'de', 'Reves', ',', 'a', 'Poet', '.', '(', 'This', 'name', ',', 'though', 'of', 'course', 'of', 'French', 'origin', ',', 'has', 'become', 'anglicised', 'and', 'is', 'pronounced', 'de', 'Reevs', '.', ')', 'Dick', 'Prattle', ',', 'a', 'Lieutenant-Major', 'of', 'the', 'Royal', 'Horse', 'Marines', '.', 'Fame', '.', 'Scene', 'The', 'Poet', "'s", 'rooms', 'in', 'London', '.', 'Windows', 'in', 'back', '.', 'A', 'high', 'screen', 'in', 'a', 'corner', '.', 'Time', ':', 'February', '30th', '.', 'The', 'Poet', 'is', 'sitting', 'at', 'a', 'table', 'writing', '.', '[', 'Enter', 'Dick', 'Prattle', '.', 'Prattle', ':', 'Hullo', ',', 'Harry', '.', 'de', 'Reves', ':', 'Hullo', ',', 'Dick', '.', 'Good', 'Lord', ',', 'where', 'are', 'you', 'from', '?', 'Prattle', '(', 'casually', ')', ':', 'The', 'ends', 'of', 'the', 'earth', '.', 'de', 'Reves', ':', 'Well', ',', 'I', "'m", 'damned', '!', 'Prattle', ':', 'Thought', 'I', "'d", 'drop', 'in', 'and', 'see', 'how', 'you', 'were', 'getting', 'on', '.', 'de', 'Reves', ':', 'Well', ',', 'that', "'s", 'splendid', '.', 'What', 'are', 'you', 'doing', 'in', 'London', '?', 'Prattle', ':', 'Well', ',', 'I', 'wanted', 'to', 'see',
'if', 'I', 'could', 'get', 'one', 'or', 'two', 'decent', 'ties', 'to', 'wear', '-', 'you', 'can', 'get', 'nothing', 'out', 'there', '-', 'then', 'I', 'thought', 'I', "'d", 'have', 'a', 'look', 'and', 'see', 'how', 'London', 'was', 'getting', 'on',
'.', 'de', 'Reves', ':', 'Splendid', '!', 'How', "'s", 'everybody', '?', 'Prattle', ':', 'All', 'going', 'strong', '.', 'de', 'Reves', ':', 'That', "'s", 'good', '.', 'Prattle', '(', 'seeing', 'paper', 'and', 'ink', ')', ':', 'But', 'what', 'are',
'you', 'doing', '?', 'de', 'Reves', ':', 'Writing', '.', 'Prattle', ':', 'Writing', '?', 'I', 'did', "n't", 'know', 'you', 'wrote', '.', 'de', 'Reves', ':', 'Yes', ',', 'I', "'ve", 'taken', 'to', 'it', 'rather', '.', 'Prattle', ':', 'I', 'say', '-', 'writing', "'s", 'no', 'good', '.', 'What', 'do', 'you', 'write', '?', 'de', 'Reves', ':', 'Oh', ',', 'poetry', '.', 'Prattle', ':', 'Poetry', '!', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'Yes', ',', 'that', 'sort', 'of', 'thing', ',', 'you', 'know', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'Do', 'you', 'make', 'any', 'money', 'by', 'it', '?', 'de', 'Reves', ':', 'No', '.', 'Hardly', 'any', '.', 'Prattle', ':', 'I', 'say', '-', 'why', 'do', "n't", 'you', 'chuck', 'it', '?', 'de', 'Reves', ':', 'Oh', ',', 'I', 'do', "n't", 'know', '.', 'Some', 'people', 'seem', 'to', 'like', 'my', 'stuff', ',', 'rather', '.', 'That', "'s", 'why', 'I', 'go', 'on', '.', 'Prattle', ':', 'I', "'d", 'chuck', 'it', 'if', 'there', "'s", 'no', 'money', 'in', 'it', '.', 'de', 'Reves', ':', 'Ah', ',', 'but', 'then', 'it', "'s", 'hardly', 'in', 'your', 'line', ',', 'is', 'it', '?', 'You', "'d", 'hardly', 'approve', 'of', 'poetry', 'if', 'there', 'was', 'money', 'in', 'it', '.', 'Prattle', ':', 'Oh', ',', 'I', 'do', "n't", 'say', 'that', '.', 'If', 'I', 'could', 'make', 'as', 'much', 'by', 'poetry', 'as', 'I', 'can', 'by', 'betting', 'I', 'do', "n't", 'say', 'I', 'would', "n't", 'try', 'the', 'poetry', 'touch', ',', 'only', '-', '-', 'de', 'Reves', ':', 'Only', 'what', '?', 'Prattle', ':', 'Oh', ',', 'I', 'do', "n't", 'know', '.', 'Only', 'there', 'seems', 'more', 'sense', 'in', 'betting', ',', 'somehow', '.', 'de', 'Reves', ':', 'Well', ',', 'yes', '.', 'I', 'suppose', 'it', "'s", 'easier', 'to', 'tell', 'what', 'an', 'earthly', 'horse', 'is', 'going', 'to', 'do', ',', 'than', 'to', 'tell', 'what', 'Pegasus', '-', '-', 'Prattle', ':', 'What', "'s", 'Pegasus', '?', 'de', 'Reves', ':', 'Oh', ',', 'the', 'winged', 'horse', 'of', 'poets', '.', 'Prattle', ':', 'I', 'say', '!', 'You', 'do', "n't", 'believe', 'in', 'a', 'winged', 'horse', ',', 'do', 'you', '?', 'de', 'Reves', ':', 'In', 'our', 'trade', 'we', 'believe', 'in', 'all', 'fabulous', 'things', '.', 'They', 'all', 'represent', 'some', 'large', 'truth', 'to', 'us', '.', 'An', 'emblem', 'like', 'Pegasus', 'is', 'as', 'real', 'a', 'thing', 'to', 'a', 'poet', 'as', 'a', 'Derby', 'winner', 'would', 'be', 'to', 'you', '.', 'Prattle', ':', 'I', 'say', '.', '(', 'Give', 'me', 'a', 'cigarette', '.', 'Thanks', '.', ')', 'What', '?', 'Then', 'you', "'d", 'believe', 'in', 'nymphs', 'and', 'fauns', ',', 'and', 'Pan', ',', 'and', 'all', 'those', 'kind', 'of', 'birds', '?', 'de', 'Reves', ':', 'Yes', '.', 'Yes', '.', 'In',
'all', 'of', 'them', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'You', 'believe', 'in', 'the', 'Lord', 'Mayor', 'of', 'London', ',', 'do', "n't", 'you', '?', 'Prattle', ':', 'Yes', ',', 'of', 'course', ';', 'but', 'what', 'has',
'-', '-', 'de', 'Reves', ':', 'Four', 'million', 'people', 'or', 'so', 'made', 'him', 'Lord', 'Mayor', ',', 'did', "n't", 'they', '?', 'And', 'he', 'represents', 'to', 'them', 'the', 'wealth', 'and', 'dignity', 'and', 'tradition', 'of', '-', '-', 'Prattle', ':', 'Yes', ';', 'but', ',', 'I', 'say', ',', 'what', 'has', 'all', 'this', '-', '-', 'de', 'Reves', ':', 'Well', ',', 'he', 'stands', 'for', 'an', 'idea', 'to', 'them', ',', 'and', 'they', 'made', 'him', 'Lord', 'Mayor', ',', 'and', 'so', 'he', 'is', 'one', '...', 'Prattle', ':', 'Well', ',', 'of', 'course', 'he', 'is', '.', 'de', 'Reves', ':', 'In', 'the', 'same', 'way', 'Pan', 'has', 'been', 'made', 'what', 'he', 'is', 'by', 'millions', ';', 'by', 'millions', 'to', 'whom', 'he', 'represents', 'world-old', 'traditions', '.', 'Prattle', '(', 'rising', 'from', 'his', 'chair', 'and', 'stepping', 'backwards', ',', 'laughing', 'and', 'looking', 'at', 'the', 'Poet', 'in', 'a', 'kind', 'of', 'assumed', 'wonder', ')', ':', 'I', 'say', '...', 'I', 'say', '...', 'You', 'old', 'heathen', '...', 'but', 'Good', 'Lord', '...', '[', 'He', 'bumps', 'into', 'the', 'high', 'screen', 'behind', ',', 'pushing', 'it', 'back', 'a', 'little', '.', 'de', 'Reves', ':', 'Look', 'out', '!', 'Look', 'out', '!', 'Prattle', ':', 'What', '?', 'What', "'s", 'the', 'matter', '?', 'de', 'Reves', ':', 'The', 'screen', '!', 'Prattle', ':', 'Oh', ',', 'sorry', ',', 'yes', '.', 'I', "'ll", 'put', 'it', 'right', '.', '[', 'He', 'is', 'about', 'to', 'go', 'round', 'behind', 'it', '.', 'de', 'Reves', ':', 'No', ',', 'do', "n't", 'go', 'round', 'there', '.', 'Prattle', ':', 'What', '?', 'Why', 'not', '?', 'de', 'Reves', ':', 'Oh', ',', 'you', 'would', "n't", 'understand', '.', 'Prattle', ':', 'Would', "n't", 'understand', '?', 'Why', ',', 'what', 'have', 'you', 'got', '?', 'de', 'Reves', ':', 'Oh', ',', 'one', 'of', 'those', 'things', '...', 'You', 'would', "n't", 'understand', '.', 'Prattle', ':', 'Of', 'course', 'I', "'d", 'understand', '.', 'Let', "'s", 'have', 'a', 'look', '.', '[', 'The', 'Poet', 'walks', 'towards', 'Prattle', 'and', 'the', 'screen', '.', 'He', 'protests', 'no', 'further', '.', 'Prattle', 'looks', 'round', 'the', 'corner', 'of', 'the', 'screen', '.', 'An', 'altar', '.', 'de', 'Reves', '(', 'removing', 'the', 'screen', 'altogether', ')', ':', 'That', 'is', 'all', '.', 'What', 'do', 'you', 'make', 'of', 'it', '?', '[', 'An', 'altar', 'of', 'Greek', 'design', ',', 'shaped', 'like', 'a', 'pedestal', ',', 'is', 'revealed', '.', 'Papers', 'litter', 'the', 'floor', 'all', 'about', 'it', '.', 'Prattle', ':', 'I', 'say', '-', 'you', 'always', 'were', 'an', 'untidy', 'devil', '.', 'de', 'Reves', ':', 'Well', ',', 'what', 'do', 'you', 'make', 'of', 'it', '?', 'Prattle', ':', 'It', 'reminds', 'me', 'of', 'your', 'room', 'at', 'Eton', '.', 'de', 'Reves', ':', 'My', 'room', 'at', 'Eton', '?', 'Prattle', ':', 'Yes', ',', 'you', 'always', 'had', 'papers', 'all', 'over', 'your', 'floor', '.', 'de', 'Reves', ':', 'Oh', ',', 'yes', '-', '-', 'Prattle', ':', 'And', 'what', 'are', 'these', '?', 'de', 'Reves', ':', 'All', 'these', 'are', 'poems', ';', 'and', 'this', 'is', 'my', 'altar', 'to', 'Fame', '.', 'Prattle', ':', 'To', 'Fame', '?', 'de', 'Reves', ':', 'The', 'same', 'that', 'Homer', 'knew', '.', 'Prattle', ':', 'Good', 'Lord', '!', 'de', 'Reves', ':', 'Keats', 'never', 'saw', 'her', '.', 'Shelley', 'died', 'too', 'young', '.', 'She', 'came', 'late', 'at', 'the', 'best', 'of', 'times', ',', 'now', 'scarcely', 'ever', '.', 'Prattle', ':', 'But', ',', 'my', 'dear', 
'fellow', ',', 'you', 'do', "n't", 'mean', 'that', 'you', 'think', 'there', 'really', 'is', 'such', 'a', 'person', '?', 'de', 'Reves', ':', 'I', 'offer', 'all', 'my', 'songs', 'to', 'her', '.', 'Prattle', ':', 'But', 'you', 'do', "n't", 'mean', 'you', 'think', 'you', 'could', 'actually', 'see', 'Fame', '?', 'de', 'Reves', ':', 'We', 'poets', 'personify', 'abstract', 'things', ',', 'and', 'not', 'poets', 'only', 'but', 'sculptors7', 'and', 'painters', 'too', '.', 'All', 'the', 'great', 'things', 'of', 'the', 'world', 'are', 'those', 'abstract', 'things', '.', 'Prattle', ':', 'But', 'what', 'I', 'mean', 'is', ',', 'they', "'re", 'not', 'really', 'there', ',', 'like', 'you', 'or', 'me', '.', 'de', 'Reves', ':', 'To', 'us', 'these', 'things', 'are', 'more', 'real', 'than', 'men', ',', 'they', 'outlive', 'generations', ',', 'they', 'watch', 'the', 'passing', 'of', 'kingdoms', ':', 'we', 'go', 'by', 'them', 'like', 'dust', ';', 'they', 'are', 'still', 'there', ',', 'unmoved', ',', 'unsmiling', '.', 'Prattle', ':', 'But', ',', 'but', ',', 'you', 'ca', "n't", 'think', 'that', 'you', 'could', 'see', 'Fame', ',', 'you', 'do', "n't", 'expect', 'to', 'see',
'it', '?', 'de', 'Reves', ':', 'Not', 'to', 'me', '.', 'Never', 'to', 'me', '.', 'She', 'of', 'the', 'golden', 'trumpet', 'and', 'Greek', 'dress', 'will', 'never', 'appear', 'to', 'me', '...', 'We', 'all', 'have', 'our', 'dreams', '.', 'Prattle', ':', 'I', 'say', '-', 'what', 'have', 'you', 'been', 'doing', 'all', 'day', '?', 'de', 'Reves', ':', 'I', '?', 'Oh', ',', 'only', 'writing', 'a', 'sonnet', '.', 'Prattle', ':', 'Is', 'it', 'a', 'long', 'one', '?', 'de', 'Reves', ':', 'Not', 'very',
'.', 'Prattle', ':', 'About', 'how', 'long', 'is', 'it', '?', 'de', 'Reves', ':', 'About', 'fourteen', 'lines', '.', 'Prattle', '(', 'impressively', ')', ':', 'I', 'tell', 'you', 'what', 'it', 'is', '.', 'de', 'Reves', ':', 'Yes', '?', 'Prattle', ':', 'I', 'tell', 'you', 'what', '.', 'You', "'ve", 'been', 'overworking', 'yourself', '.', 'I', 'once', 'got', 'like', 'that', 'on', 'board', 'the', 'Sandhurst', ',', 'working', 'for', 'the', 'passing-out', 'exam', '.', 'I', 'got', 'so', 'bad', 'that', 'I', 'could', 'have', 'seen', 'anything', '.', 'de', 'Reves', ':', 'Seen', 'anything', '?', 'Prattle', ':', 'Lord', ',', 'yes', ';', 'horned', 'pigs', ',', 'snakes', 'with', 'wings', ';', 'anything', ';', 'one', 'of', 'your', 'winged', 'horses', 'even', '.', 'They', 'gave', 'me', 'some', 'stuff', 'called', 'bromide', 'for', 'it', '.', 'You', 'take', 'a', 'rest', '.', 'de', 'Reves', ':', 'But', 'my', 'dear', 'fellow', ',', 'you', 'do', "n't", 'understand', 'at', 'all', '.', 'I', 'merely', 'said', 'that', 'abstract', 'things', 'are', 'to', 'a', 'poet', 'as', 'near', 'and', 'real', 'and', 'visible', 'as', 'one', 'of', 'your', 'bookmakers', 'or', 'barmaids', '.', 'Prattle', ':', 'I', 'know', '.', 'You', 'take', 'a', 'rest', '.', 'de', 'Reves', ':', 'Well', ',', 'perhaps', 'I', 'will', '.', 'I', "'d", 'come', 'with', 'you', 'to', 'that', 'musical', 'comedy', 'you', "'re", 'going', 'to', 'see', ',', 'only', 'I', "'m", 'a', 'bit', 'tired', 'after', 'writing', 'this', ';', 'it', "'s", 'a', 'tedious', 'job', '.', 'I', "'ll", 'come', 'another', 'night', '.', 'Prattle', ':', 'How', 'do', 'you', 'know', 'I', "'m", 'going', 'to', 'see', 'a', 'musical', 'comedy', '?', 'de', 'Reves', ':', 'Well', ',', 'where', 'would', 'you', 'go', '?', 'Hamlet', "'s", '8', 'on', 'at', 'the', 'Lord', 'Chamberlain', "'s", '.', 'You', "'re", 'not', 'going', 'there', '.', 'Prattle', ':', 'Do', 'I', 'look', 'like', 'it', '?', 'de', 'Reves', ':', 'No', '.', 'Prattle', ':', 'Well', ',', 'you', "'re", 'quite', 'right', '.', 'I', "'m", 'going', 'to', 'see', '``', 'The', 'Girl', 'from', 'Bedlam', '.', "''", 'So', 'long', '.', 'I', 'must', 'push', 'off', 'now', '.', 'It', "'s", 'getting', 'late', '.', 'You', 'take', 'a', 'rest', '.', 'Do', "n't", 'add', 'another', 'line', 'to', 'that', 'sonnet', ';', 'fourteen', "'s", 'quite', 'enough', '.', 'You', 'take', 'a', 'rest', '.', 'Do', "n't", 'have', 'any', 'dinner', 'to-night', ',', 'just', 'rest', '.', 'I', 'was', 'like', 'that', 'once', 'myself', '.', 'So', 'long', '.', 'de', 'Reves', ':', 'So', 'long', '.', '[', 'Exit', 'Prattle', '.', 'de', 'Reves', 'returns', 'to', 'his', 'table', 'and', 'sits', 'down', '.', 'Good', 'old', 'Dick', '!', 'He', "'s", 'the', 'same', 'as', 'ever', '.', 'Lord', ',', 'how', 'time', 'passes', '.', 'He', 'takes', 'his', 'pen', 'and', 'his', 'sonnet', 'and', 'makes', 'a', 'few', 'alterations', '.', 'Well', ',', 'that', "'s", 'finished', '.', 'I', 'ca', "n't", 'do', 'any', 'more', 'to', 'it', '.', '[', 'He', 'rises', 'and', 'goes', 'to', 'the', 'screen', ';', 'he', 'draws', 'back', 'part', 'of', 'it', 'and', 'goes', 'up', 'to', 'the', 'altar', '.', 'He', 'is', 'about', 'to', 'place', 'his', 'sonnet', 'reverently', 'at', 'the', 'foot', 'of', 'the', 'altar', 'amongst', 'his', 'other', 'verses', '.', 'No', ',', 'I', 'will', 'not', 'put', 'it', 'there', '.', 'This', 'one', 'is', 'worthy', 'of', 'the', 'altar', '.', '[', 'He', 'places', 'the', 'sonnet', 'upon', 'the', 'altar', 'itself', '.', 'If', 'that', 'sonnet', 'does', 'not', 'give', 'me', 'fame', ',', 'nothing', 
'that', 'I', 'have', 'done', 'before', 'will', 'give', 'it', 'to', 'me', ',', 'nothing', 'that', 'I', 'ever', 'will', 'do', '.', '[', 'He', 'replaces', 'the', 'screen', 'and', 'returns', 'to', 'his', 'chair', 'at', 'the', 'table', '.', 'Twilight', 'is', 'coming', 'on', '.', 'He', 'sits', 'with', 'his', 'elbow', 'on', 'the', 'table', ',', 'his', 'head', 'on', 'his', 'hand', ',', 'or', 'however', 'the', 'actor', 'pleases', '.', 'Well', ',', 'well', '.', 'Fancy', 'seeing', 'Dick', 'again', '.', 'Well', ',', 'Dick', 'enjoys', 'his', 'life', ',', 'so', 'he', "'s", 'no', 'fool', '.', 'What', 'was', 'that', 'he', 'said', '?', '``', 'There', "'s", 'no', 'money', 'in', 'poetry', '.', 'You', "'d", 'better', 'chuck', 'it', '.', "''", 'Ten', 'years', "'", 'work', 'and', 'what', 'have', 'I', 'to', 'show', 'for', 'it', '?', 'The', 'admiration', 'of', 'men', 'who', 'care', 'for', 'poetry', ',', 'and', 'how', 'many', 'of', 'them', 'are', 'there', '?', 'There', "'s", 'a', 'bigger', 'demand', 'for', 'smoked', 'glasses', 'to', 'look', 'at', 'eclipses', 'of', 'the', 'sun', '.', 'Why', 'should', 'Fame', 'come', 'to', 'me', '?', 'Have', "n't", 'I', 'given', 'up', 'my', 'days', 'for', 'her', '?', 'That', 'is', 'enough', 'to', 'keep', 'her', 'away', '.', 'I', 'am', 'a', 'poet', ';', 'that', 'is', 'enough', 'reason', 'for', 'her', 'to', 'slight', 'me', '.', 'Proud', 'and', 'aloof', 'and', 'cold', 'as', 'marble', ',', 'what', 'does', 'Fame', 'care', 'for', 'us', '?', 'Yes', ',', 'Dick', 'is', 'right', '.', 'It', "'s", 'a', 'poor', 'game', 'chasing', 'illusions', ',', 'hunting', 'the', 'intangible', ',', 'pursuing', 'dreams', '.', 'Dreams', '?', 'Why', ',', 'we', 'are', 'ourselves', 'dreams', '.', '[', 'He', 'leans', 'back', 'in', 'his', 'chair', '.', 'We', 'are', 'such', 'stuff', 'As', 'dreams', 'are', 'made', 'on', ',', 'and', 'our', 'little', 'life', 'Is', 'rounded', 'with', 'a', 'sleep', '.', '[', 'He', 'is', 'silent', 'for', 'a', 'while', '.', 'Suddenly', 'he', 'lifts', 'his', 'head', '.', 'My', 'room', 'at', 'Eton', ',', 'Dick', 'said', '.', 'An', 'untidy', 'mess', '.', '[', 'As', 'he', 'lifts', 'his', 'head', 'and', 'says', 'these', 'words', ',', 'twilight', 'gives', 'place', 'to', 'broad', 'daylight', ',', 'merely', 'as', 'a', 'hint', 'that', 'the', 'author', 'of', 'the', 'play', 'may', 'have', 'been', 'mistaken', ',', 'and', 'the', 'whole', 'thing', 'may', 'have', 'been', 'no', 'more', 'than', 'a', 'poet', "'s", 'dream', '.', 'So', 'it', 'was', ',', 'and', 'it', "'s", 'an', 'untidy', 'mess', 'there', '(', 'looking', 'at', 'screen', ')', 'too', '.', 'Dick', "'s", 'right', '.', 'I', "'ll", 'tidy', 'it', 'up', '.', 'I', "'ll", 'burn', 'the', 'whole', 'damned', 'heap', ',', '[', 'He', 'advances', 'impetuously', 'towards', 'the', 'screen', '.', 'every', 'damned', 'poem', 'that', 'I', 'was', 'ever', 'fool', 'enough', 'to', 'waste', 'my', 'time', 'on', '.', '[', 'He', 'pushes', 'back', 'the', 'screen', '.', 'Fame', 'in', 'a', 'Greek', 'dress', 'with', 'a', 'long', 'golden', 'trumpet', 'in', 'her', 'hand', 'is', 'seen', 'standing', 'motionless', 'on', 'the', 'altar', 'like', 'a', 'marble', 'goddess', '.', 'So', '...', 'you', 'have', 'come', '!', '[', 'For', 'a', 'while', 'he', 'stands', 'thunderstruck', '.', 'Then', 'he', 'approaches', 'the', 'altar', '.', 'Divine', 'fair', 'lady', ',', 'you', 'have', 'come', '.', '[', 'He', 'holds', 'up', 'his', 'hand', 'to', 'her', 'and', 'leads', 'her', 'down', 'from', 'the', 'altar', 'and', 'into', 'the', 'centre', 'of', 'the', 'stage', '.', 'At', 'whatever', 'moment', 'the', 
'actor', 'finds', 'it', 'most', 'convenient', ',', 'he', 'repossesses', 'himself', 'of',
'the', 'sonnet', 'that', 'he', 'had', 'placed', 'on', 'the', 'altar', '.', 'He', 'now', 'offers', 'it', 'to', 'Fame', '.', 'This', 'is', 'my', 'sonnet', '.', 'Is', 'it', 'well', 'done', '?', '[', 'Fame', 'takes', 'it', 'and', 'reads', 'it', 'in', 'silence', ',', 'while', 'the', 'Poet', 'watches', 'her', 'rapturously', '.', 'Fame', ':', 'You', "'re", 'a', 'bit', 'of', 'all', 'right', '.', 'de', 'Reves', ':', 'What', '?', 'Fame', ':', 'Some', 'poet', '.', 'de', 'Reves', ':', 'I', '-', 'I', '-', 'scarcely', '...', 'understand', '.', 'Fame', ':', 'You', "'re", 'IT', '.', 'de', 'Reves', ':', 'But', '...', 'it', 'is', 'not', 'possible', '...', 'are', 'you', 'she', 'that', 'knew', 'Homer', '?', 'Fame', ':', 'Homer', '?', 'Lord', ',', 'yes',
'.', 'Blind', 'old', 'bat', ',', "'", 'e', 'could', "n't", 'see', 'a', 'yard', '.', 'de', 'Reves', ':', 'O', 'Heavens', '!', '[', 'Fame', 'walks', 'beautifully', 'to', 'the', 'window', '.', 'She', 'opens', 'it', 'and', 'puts', 'her', 'head', 'out', '.', 'Fame', '(', 'in', 'a', 'voice', 'with', 'which', 'a', 'woman', 'in', 'an', 'upper', 'storey', 'would', 'cry', 'for', 'help', 'if', 'the', 'house', 'was', 'well', 'alight', ')', ':', 'Hi', '!', 'Hi', '!', 'Boys', '!', 'Hi', '!', 'Say', ',', 'folks', '!', 'Hi', '!', '[', 'The', 'murmur', 'of', 'a', 'gathering', 'crowd', 'is', 'heard', '.', 'Fame', 'blows', 'her', 'trumpet', '.', 'Fame', ':', 'Hi', ',', 'he', "'s", 'a', 'poet', '!', '(', 'Quickly', ',', 'over', 'her', 'shoulder', '.', ')', 'What', "'s", 'your', 'name', '?', 'de', 'Reves', ':', 'De', 'Reves', '.', 'Fame', ':', 'His', 'name', "'s", 'de', 'Reves', '.', 'de', 'Reves', ':', 'Harry', 'de', 'Reves', '.', 'Fame', ':', 'His', 'pals', 'call', 'him', 'Harry', '.', 'The', 'Crowd', ':', 'Hooray', '!', 'Hooray', '!', 'Hooray', '!', 'Fame', ':', 'Say', ',', 'what', "'s", 'your', 'favourite', 'colour', '?', 'de', 'Reves', ':', 'I', '...', 'I', '...', 'I', 'do', "n't", 'quite', 'understand', '.', 'Fame', ':', 'Well', ',', 'which', 'do', 'you', 'like', 'best', ',', 'green', 'or', 'blue', '?', 'de', 'Reves', ':', 'Oh', '-', 'er', '-', 'blue', '.', '[', 'She', 'blows', 'her', 'trumpet', 'out', 'of', 'the', 'window', '.', 'No', '-', 'er', '-', 'I', 'think', 'green', '.', 'Fame', ':', 'Green', 'is', 'his', 'favourite', 'colour', '.', 'The', 'Crowd', ':', 'Hooray', '!', 'Hooray', '!', 'Hooray', '!', 'Fame', ':', '`', 'Ere', ',', 'tell', 'us', 'something', '.', 'They', 'want', 'to', 'know', 'all', 'about', 'yer', '.', 'de', 'Reves', ':', 'Would', "n't", '9', 'you', 'perhaps', '...', 'would', 'they', 'care', 'to', 'hear', 'my', 'sonnet', ',', 'if', 'you', 'would', '-', 'er', '...', 'Fame', '(', 'picking', 'up', 'quill', ')', ':', 'Here', ',', 'what', "'s", 'this', '?', 'de', 'Reves', ':', 'Oh', ',', 'that', "'s", 'my', 'pen', '.', 'Fame', '(', 'after', 'another', 'blast', 'on', 'her', 'trumpet', ')', ':', 'He', 'writes', 'with', 'a', 'quill', '.', '[', 'Cheers', 'from', 'the', 'Crowd', '.', 'Fame', '(',
'going', 'to', 'a', 'cupboard', ')', ':', 'Here', ',', 'what', 'have', 'you', 'got', 'in', 'here', '?', 'de', 'Reves', ':', 'Oh', '...', 'er', '...', 'those', 'are', 'my', 'breakfast', 'things', '.', 'Fame', '(', 'finding', 'a', 'dirty', 'plate', ')', ':', 'What', 'have', 'yer', 'had', 'on', 'this', 'one', '?', 'de', 'Reves', '(', 'mournfully', ')', ':', 'Oh', ',', 'eggs', 'and', 'bacon', '.', 'Fame', '(', 'at', 'the', 'window', ')', ':', 'He', 'has', 'eggs', 'and', 'bacon', 'for', 'breakfast', '.', 'The', 'Crowd', ':', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Hip', 'hip', 'hip', ',', 'hooray', '!', 'Fame', ':', 'Hi', ',', 'and', 'what', "'s", 'this', '?', 'de', 'Reves', '(', 'miserably', ')', ':', 'Oh', ',', 'a', 'golf', 'stick', '.', 'Fame', ':', 'He', "'s", 'a', 'man', "'s", 'man', '!', 'He', "'s", 'a', 'virile', 'man', '!', 'He', "'s", 'a', 'manly', 'man', '!', '[', 'Wild', 'cheers', 'from', 'the', 'Crowd', ',', 'this', 'time', 'only', 'from', 'women', "'s", 'voices', '.', 'de', 'Reves', ':', 'Oh', ',', 'this', 'is', 'terrible', '.', 'This', 'is', 'terrible', '.', 'This', 'is', 'terrible', '.', '[', 'Fame', 'gives', 'another', 'peal', 'on', 'her', 'horn', '.', 'She', 'is', 'about', 'to', 'speak', '.', 'de', 'Reves', '(', 'solemnly', 'and', 'mournfully', ')', ':', 'One', 'moment', ',', 'one', 'moment', '...', 'Fame', ':', 'Well', ',', 'out', 'with', 'it', '.', 'de', 'Reves', ':', 'For', 'ten', 'years', ',', 'divine', 'lady', ',', 'I', 'have', 'worshipped', 'you', ',', 'offering', 'all', 'my', 'songs', '...', 'I', 'find', '...', 'I', 'find', 'I', 'am', 'not', 'worthy', '...', 'Fame', ':', 'Oh', ',', 'you', "'re", 'all', 'right', '.', 'de', 'Reves', ':', 'No', ',', 'no', ',', 'I', 'am', 'not', 'worthy', '.', 'It', 'can', 'not', 'be', '.', 'It', 'can', 'not', 'possibly', 'be', '.', 'Others', 'deserve', 'you', 'more', '.', 'I', 'must', 'say', 'it', '!', 'I', 'can', 'not', 'possibly', 'love', 'you', '.', 'Others', 'are', 'worthy', '.', 'You', 'will', 'find', 'others', '.', 'But', 'I', ',', 'no', ',', 'no', ',', 'no', '.', 'It', 'can', 'not', 'be', '.', 'It', 'can', 'not', 'be', '.', 'Oh', ',', 'pardon', 'me', ',', 'but', 'it', 'must', 'not', '.', '[', 'Meanwhile', 'Fame', 'has', 'been', 'lighting', 'one', 'of', 'his', 'cigarettes', '.', 'She', 'sits', 'in', 'a', 'comfortable', 'chair', ',', 'leans', 'right', 'back', ',', 'and', 'puts', 'her', 'feet', 'right', 'up', 'on', 'the', 'table', 'amongst', 'the', 'poet', "'s", 'papers', '.', 'Oh', ',', 'I', 'fear', 'I', 'offend', 'you', '.', 'But', '-', 'it', 'can', 'not', 'be', '.', 'Fame', ':', 'Oh', ',', 'that', "'s", 'all', 'right', ',', 'old', 'bird', ';', 'no', 'offence', '.', 'I', 'ai', "n't", 'going', 'to', 'leave', 'you', '.', 'de', 'Reves', ':', 'But', '-', 'but', '-', 'but', '-', 'I', 'do', 'not', 'understand', '.', 'Fame', ':', 'I', "'ve", 'come', 'to', 'stay', ',', 'I', 'have', '.', '[', 'She', 'blows', 'a', 'puff', 'of', 'smoke', 'through', 'her', 'trumpet', '.', 'CURTAIN', '.'], 'pos_tags': [10, 0, 2, 12, 12, 12, 12, 12, 12, 38, 2, 12, 38, 41, 2, 10, 38, 18, 5, 10, 5, 6, 10, 38, 30, 29, 29, 0, 30, 6, 12, 12, 38, 42, 12, 12, 38, 2, 12, 5, 2, 12, 12, 13, 38, 12, 38, 10, 2, 12, 15, 11, 5, 12, 38, 11, 5, 18, 38, 2, 6, 10, 5, 2, 10, 38, 10, 38, 12, 6, 38, 2, 12, 30, 28, 5, 2, 10, 10, 38, 41, 12, 12, 12, 38, 10, 38, 12, 38, 12, 38, 12, 12, 38, 12, 38, 12, 38, 6, 12, 38, 35, 31, 16, 5, 22, 10, 41, 18, 42, 38, 2, 11, 5, 2, 10, 38, 12, 12, 38, 25, 38, 16, 31, 29, 22, 10, 38, 27, 16, 9, 26, 21, 0, 26, 
35, 16, 27, 28, 5, 38, 12, 12, 38, 25, 38, 32, 30, 6, 38, 33, 31, 16, 28, 5, 12, 22, 10, 38, 18, 38, 16, 27, 24, 26, 5, 16, 9, 26, 1, 0, 1, 6, 11, 24, 26, 38, 16, 9, 26, 10, 21, 18, 38, 18, 16, 27, 16, 9, 26, 2, 10, 0, 26, 35, 12, 27, 28, 5, 38, 12, 12, 38, 6, 22, 35, 30, 10, 22, 10, 38, 2, 28, 6, 38, 12, 12, 38, 32, 30, 6, 38, 10, 41, 28, 10, 0, 10, 42, 38, 0, 33, 31, 16, 28, 22, 12, 12, 38, 28, 38, 10, 38, 28,
22, 16, 27, 36, 26, 16, 27, 38, 12, 12, 38, 25, 38, 16, 31, 29, 24, 16, 18, 38, 10, 38, 16, 31, 38, 28, 30, 18, 6, 38, 33, 31, 16, 26, 22, 12, 12, 38, 25, 38, 10, 38, 10, 38, 10, 22, 6, 12, 22, 12, 12, 38, 25, 38, 2, 10, 5, 10, 38, 16, 31, 38, 10,
38, 6, 12, 22, 26, 16, 26, 2, 10, 5, 16, 22, 12, 12, 38, 25, 38, 18, 18, 38, 10, 38, 16, 31, 38, 35, 31, 36, 16, 31, 16, 22, 12, 12, 38, 25, 38, 16, 31, 36, 26, 38, 2, 11, 31, 24, 26, 17, 10, 38, 18, 38, 2, 30, 35, 16, 31, 5, 38, 10, 38, 16, 9, 26, 16, 5, 3, 30, 2, 10, 5, 16, 38, 12, 12, 38, 25, 38, 0, 18, 16, 30, 18, 5, 17, 10, 38, 30, 16, 22, 16, 9, 18, 26, 5, 10, 5, 3, 27, 10, 5, 16, 38, 10, 38, 25, 38, 16, 31, 36, 26, 2, 38, 5, 16, 9, 26, 18, 18, 5, 10, 5, 16, 31, 5, 28, 16, 31, 36, 26,
16, 9, 36, 26, 2, 10, 10, 38, 18, 38, 38, 12, 12, 38, 18, 33, 22, 10, 38, 25, 38, 16, 31, 36, 26, 38, 18, 3, 30, 7, 10, 5, 28, 38, 18, 38, 12, 12, 38, 25, 38, 25, 38, 16, 31, 16, 30, 7, 24, 26, 33, 2, 6, 10, 30, 28, 24, 26, 38, 5, 24, 26, 33, 12, 38, 38, 10, 38, 33, 30, 12, 22, 12, 12, 38, 25, 38, 2, 29, 10, 5, 11, 38, 10, 38, 16, 31, 22, 16, 31, 36, 26, 5, 2, 29, 10, 38, 31, 16, 22, 12, 12, 38, 5, 17, 10, 16, 31, 5, 2, 6, 11, 38, 16, 18, 31, 2, 6, 10, 24, 16, 38, 2, 10, 5, 12, 30, 18, 6, 2, 10, 24, 2, 10, 5, 2, 12, 10, 9, 26, 24, 16, 38, 10, 38, 16, 31, 38, 41, 26, 16, 2, 10, 38, 11, 38, 42, 33, 22, 18, 16, 9, 26, 5, 11, 0, 11, 38, 0, 12, 38, 0, 14, 2, 10, 5, 11, 22, 12, 12, 38, 25, 38, 25, 38, 5, 2, 5, 16, 38, 10, 38, 6, 12, 22, 12, 12, 38, 16, 31, 5, 2, 12, 12, 5, 12, 38, 31, 36, 16, 22, 10, 38, 25, 38, 5, 10, 38, 0, 33, 30, 38, 38, 12, 12, 38, 1, 1, 11, 0, 18, 27, 16, 12, 12, 38, 27, 36, 16, 22, 0, 16, 30, 24, 16, 2, 10, 0, 10, 0, 10, 5, 38, 38, 10, 38, 25, 38, 0, 38, 16, 31, 38, 33, 30, 14, 2, 38, 38, 12, 12, 38, 25, 38, 16, 30, 5, 2, 10, 24, 16, 38, 0, 16, 27, 16, 12, 12, 38, 0, 18, 16, 30, 1, -1, 10, 38, 18, 38, 5, 10, 16, 30, 38, 12, 12, 38, 5, 2, 6, 10, 12, 30, 29, 29, 33, 16, 30, 5, 11, 38, 5, 11, 24, 33, 16, 30, 6, 11, 38, 10, 41, 28, 5, 17, 10, 0, 28, 18, 38, 28, 0, 28, 5, 2, 12, 5, 2, 10, 5, 6, 10, 42, 38, 16, 31, -1, 16, 31, -1, 16, 6, 11, -1, 0, 12, 12, -1, 41, 16, 30, 5, 2, 6, 10, 18, 38, 28, 16, 18, 2, 6, 38, 12, 12, 38, 31, 21, 22, 26, 21, 22, 10, 38, 33, 22, 33, 30, 2, 10, 22, 12, 12, 38, 2, 10, 22, 10, 38, 25, 38, 18, 38, 25, 38, 16, 9, 26, 16, 18, 38, 41, 16, 30, 18, 24, 26, 10, 5, 16, 38, 12, 12, 38, 25, 38, 31, 36, 26, 10, 18, 38, 10, 38, 33, 22, 35, 36, 22, 12, 12, 38, 25, 38, 16, 9, 36, 26, 38, 10, 38, 9, 36, 26, 22, 35, 38, 33, 31, 16, 27, 22, 12, 12, 38, 25, 38, 1, 5, 2, 11, -1, 16, 9, 36, 26, 38, 10, 38, 5, 10, 16, 9, 26, 38, 26, 30, 26, 2, 10, 38, 41, 12, 12, 30, 5, 12, 0, 2, 10, 38, 16, 30, 18, 7, 38, 10, 11, 31, 2, 10,
5, 2, 10, 38, 2, 10, 38, 12, 12, 41, 28, 2, 10, 18, 42, 38, 32, 30, 18, 38, 33, 31, 16, 26, 5, 16, 22, 41, 2, 10, 5, 6, 10, 38, 29, 5, 2, 10, 38, 30, 29, 38, 11, 31, 2, 10, 18, 5, 16, 38, 10, 38, 16, 31, 38, 16, 18, 27, 2, 6, 10, 38, 12, 12, 38, 25, 38, 33, 31, 16, 26, 5, 16, 22, 10, 38, 16, 30, 16, 5, 17, 10, 5, 12, 38, 12, 12, 38, 17, 10, 5, 12, 22, 10, 38, 25, 38, 16, 18, 27, 11, 18, 5, 17, 10, 38, 12, 12, 38, 25, 38, 25, 38, 38, 10, 38, 0, 33, 31, 2, 22, 12, 12, 38, 14, 2, 31, 11, 38, 0, 2, 30, 17, 10, 24, 12, 38, 10, 38, 24, 12, 22, 12, 12, 38, 2, 6, 5, 12, 27, 38, 10, 38, 6, 12, 22, 12, 12, 38, 12, 18, 27, 16, 38, 12, 27, 18, 6, 38, 16, 27, 18, 5, 2, 8, 5, 11, 38, 18, 18, 18, 38, 10, 38, 0, 38, 17, 6, 10, 38, 16, 31, 36, 26, 5,
16, 31, 3, 18, 30, 14, 2, 10, 22, 12, 12, 38, 16, 31, 14, 17, 11, 24, 16, 38, 10, 38, 0, 16, 31, 36, 26, 16, 31, 16, 9, 18, 26, 12, 22, 12, 12, 38, 16, 11, 31, 6, 11, 38, 0, 36, 11, 6, 0, 6, 0, 11, 18, 38, 14, 2, 6, 11, 5, 2, 10, 31, 2, 6, 11, 38,
10, 38, 0, 33, 16, 31, 30, 38, 16, 31, 36, 18, 18, 38, 5, 16, 0, 16, 38, 12, 12, 38, 24, 16, 2, 11, 31, 19, 6, 5, 11, 38, 16, 31, 11, 38, 16, 31, 2, 10, 5, 11, 38, 16, 31, 5, 16, 31, 10, 38, 16, 31, 18, 18, 38, 6, 38, 12, 38, 10, 38, 0, 38, 18, 38, 16, 9, 36, 26, 5, 16, 9, 26, 12, 38, 16, 31, 36, 26, 24, 26, 16, 22, 12, 12, 38, 36, 24, 16, 38, 18, 24, 16, 38, 16, 5, 2, 6, 10, 0, 6, 10, 9, 18, 26, 24, 16, -1, 16, 18, 31, 17, 11, 38, 10, 38, 16, 31, 38, 33, 31, 16, 29, 28, 2, 10, 22, 12, 12, 38, 16, 22, 25, 38, 18, 28, 2, 10, 38, 10, 38, 30, 16, 2, 6, 1, 22, 12, 12, 38, 36, 18, 38, 10, 38, 18, 35, 18, 30, 16, 22, 12, 12, 38, 5, 10, 11, 38, 10, 41, 18, 42, 38, 16, 26, 16, 33, 16, 30, 38, 12, 12, 38, 25, 22, 10, 38, 16, 26, 16, 33, 38, 16, 31, 29, 28, 16, 38, 16, 18, 27, 5, 5, 5, 10, 2, 12, 38, 28, 5, 2, 6, 10, 38, 16, 27, 18, 6, 5, 16, 9, 26, 29, 10, 38, 12, 12, 38, 29, 10, 22, 10, 38, 12, 38, 25, 38, 29, 11, 38, 11, 5, 11, 38, 10, 38, 1, 5, 17, 29, 11, 18, 38, 16, 27, 16, 2, 10,
27, 10, 5, 16, 38, 16, 31, 2, 10, 38, 12, 12, 38, 0, 17, 6, 10, 38, 16, 31, 36, 26, 5, 2, 38, 16, 18, 27, 5, 6, 11, 31, 24, 2, 10, 5, 6, 0, 6, 0, 6, 5, 1, 5, 17, 11, 0, 11, 38, 10, 38, 16, 31, 38, 16, 31, 2, 10, 38, 12, 12, 38, 25, 38, 18, 16, 9, 38, 16, 9, 26, 5, 16, 24, 2, 6, 10, 16, 31, 28, 24, 26, 38, 18, 16, 31, 2, 10, 29, 5, 28, 2, 38, 16, 30, 2, 6, 10, 38, 16, 9, 26, 2, 10, 38, 10, 38, 35, 31, 16, 31, 16, 31, 28, 24, 26, 2, 6, 10, 22, 12, 12, 38, 25, 38, 35, 9, 16, 26, 22, 12, 30, 1,
5, 5, 2, 12, 12, 15, 38, 16, 31, 36, 28, 18, 38, 10, 38, 31, 16, 31, 5, 16, 22, 12, 12, 38, 25, 38, 10, 38, 18, 38, 16, 31, 18, 6, 38, 16, 31, 28, 24, 26, 39, 2, 12, 5, 12, 38, 40, 18, 18, 38, 16, 9, 26, 21, 18, 38, 16, 30, 28, 18, 38, 16, 31, 2, 10, 38, 31, 36, 26, 2, 10, 24, 2, 10, 38, 10, 30, 18, 6, 38, 16, 31, 2, 10, 38, 31, 36, 26, 2, 10, 10, 38, 18, 10, 38, 16, 27, 6, 5, 5, 16, 38, 18, 18, 38, 12, 12, 38, 18, 18, 38, 41, 10, 12, 38, 12, 12, 30, 24, 17, 10, 0, 30, 21, 38, 6, 6, 12, 22,
16, 30, 2, 6, 18, 18, 38, 12, 38, 35, 10, 30, 38, 16, 30, 17, 10, 0, 17, 10, 0, 30, 2, 6, 11, 38, 18, 38, 32, 30, 29, 38, 16, 9, 36, 26, 2, 19, 24, 16, 38, 41, 16, 30, 0, 30, 24, 2, 10, 38, 16, 30, 18, 10, 5, 16, 0, 30, 21, 24, 2, 10, 38, 16, 30, 18, 24, 26, 17, 10, 18, 5, 2, 10, 5, 2, 10, 5, 17, 6, 11, 38, 25, 38, 16, 9, 36, 26, 16, 18, 38, 2, 1, 30, 6, 5, 2, 10, 38, 41, 16, 30, 2, 10, 5, 2, 10, 16, 38, 5, 2, 10, 30, 36, 26, 16, 10, 38, 10, 5, 16, 31, 29, 18, 9, 26, 16, 24, 16, 38, 10, 5, 16, 18, 9, 26, 38, 41, 16, 30, 2, 10, 0, 11, 24, 17, 10, 5, 2, 10, 38, 10, 30, 28, 21, 38, 16, 30, 5, 17, 10, 5, 2, 10, 38, 17, 10, 5, 17, 10, 38, 0, 18, 2, 10, 30, 38, 25, 38, 25, 38, 6, 28, 12, 18, 38, 18, 38, 12, 30, 17, 10, 38, 18, 16, 30, 2, 10, 38, 33, 27, 5, 16, 27, 22, 39, 3, 30, 2, 10, 5, 10, 38, 16, 9, 19, 26, 16, 38, 40, 1, 11, 15, 10, 0, 33, 31, 16, 24, 26, 5, 16, 22, 2, 10, 5, 11, 33, 31, 5, 10, 38, 0, 35, 6, 5, 16, 31, 18, 22, 3, 30, 2, 7, 10, 5, 29, 11, 24, 26, 5, 11, 5, 2, 10, 38, 35, 9, 12, 26, 24, 16, 22, 31, 36, 16, 29, 21, 17, 11, 5, 16, 22, 2, 30, 6, 24, 26, 16, 21, 38, 16, 31, 2, 10, 38, 32, 30, 18, 10, 5, 16, 24, 26, 16, 38, 6, 0, 6, 0, 6, 5, 10, 38, 33, 30, 12, 10, 5, 16, 22, 25, 38, 12, 30, 6, 38, 16, 30, 2, 6, 10, 28, 11, 38, 28, 2, 10, 38, 28, 11, 38, 11, 22, 35, 38, 16, 31, 16, 30, 38, 41, 16, 30, 18, 5, 17, 10, 38, 16, 31, 6, 10, 5, 11, 31, 29, 5, 38, 0, 17, 6, 10, 30, 29, 5, 2, 10, 38, 41, 16, 30, 6, 5, 2, 10, 38, 18, 16, 30, 17, 10, 38, 17, 10, 5,
12, 38, 12, 27, 38, 2, 6, 10, 38, 41, 5, 16, 30, 17, 10, 0, 30, 2, 11, 38, 10, 30, 10, 24, 6, 10, 38, 18, 5, 2, 10, 5, 2, 10, 5, 2, 10, 9, 26, 29, 29, 38, 0, 2, 6, 10, 9, 26, 29, 18, 7, 5, 2, 10, 15, 10, 38, 18, 16, 27, 38, 0, 16, 30, 2, 6, 10, 18, 41, 28, 5, 10, 42, 18, 38, 12, 15, 10, 38, 16, 9, 26, 16, 21, 38, 16, 9, 26, 2, 6, 6, 10, 38, 41, 16, 30, 18, 5, 2, 10, 38, 2, 6, 10, 5, 16, 27, 18, 6, 18, 24, 26, 17, 10, 21, 38, 41, 16, 30, 18, 2, 10, 38, 10, 5, 2, 6, 10, 5, 2, 6, 6, 10, 5, 17,
10, 30, 29, 28, 6, 5, 2, 10, 5, 2, 10, 10, 38, 18, -1, 16, 31, 29, 22, 41, 5, 2, 5, 16, 30, 6, 38, 18, 16, 30, 2, 10, 38, 12, 6, 10, 38, 16, 31, 29, 38, 41, 16, 30, 21, 17, 10, 24, 16, 0, 30, 16, 21, 5, 2, 10, 0, 5, 2, 10, 5, 2, 10, 38, 5, 32, 10,
2, 10, 30, 16, 20, 6, 38, 16, 30, 16, 5, 2, 10, 5, 16, 27, 29, 5, 2, 10, 38, 16, 18, 30, 16, 24, 12, 38, 2, 30, 17, 10, 38, 30, 16, 18, 29, 22, 41, 12, 30, 16, 0, 30, 16, 5, 10, 38, 5, 2, 12, 30, 16, 18, 38, 10, 38, 16, 31, 2, 10, 5, 2, 10, 38, 12, 12, 38, 33, 22, 10, 38, 2, 10, 38, 12, 12, 38, 16, 38, 16, 38, 18, -1, 26, 38, 10, 38, 16, 31, 16, 38, 12, 12, 38, 0, -1, 16, 30, 36, 6, -1, 31, 16, 16, 32, 27, 12, 22, 10, 38, 10, 22, 12, 38, 25, 38, 6, 6, 10, 38, 40, 12, 9, 36, 26, 2, 10, 38, 12, 12, 38, 12, 12, 22, 41, 12, 30, 18, 24, 2, 10, 38, 16, 30, 16, 0, 30, 17, 10, 21, 38, 12, 41, 5, 2, 10, 5, 32, 2, 10, 5, 2, 6, 10, 9, 26, 5, 10, 5, 2, 10, 27, 18, 6, 42, 38, 25, 22, 25, 22, 13, 22, 25, 22, 26, 38, 11, 22, 25, 22, 41, 2, 10, 5, 2, 10, 10, 30, 29, 38, 12, 30, 17, 10, 38, 12, 38, 25, 38, 16, 30, 2, 10, 22, 41, 18, 38, 5, 17, 10, 38, 42, 33, 30, 17, 10, 22, 12, 12, 38, 12, 12, 38, 10, 38, 16, 31, 30, 12, 12, 38, 12, 12, 38, 12, 12, 12, 38, 10, 38, 16, 30, 26, 16, 12, 38, 2, 10, 38, 11, 22, 11, 22, 11, 22, 10, 38, 26, 38, 33, 30, 17, 6, 10, 22, 12, 12, 38, 16, -1, 16, -1, 16, 31, 36, 18, 26, 38, 10, 38, 18, 38, 32, 31, 16, 5, 8, 38, 6, 0, 6, 22, 12, 12, 38, 25, 38, 25, 38, 6, 38, 41, 16, 30, 17, 10, 21, 5, 2, 10, 38, 25, 38, 25, 38, 16, 31, 6, 38, 10, 38, 12, 30, 17, 6, 10, 38, 2, 10, 38, 11, 22, 11, 22, 11, 22, 12, 38, 39, 6, 38, 26, 16, 10, 38, 16, 31, 24, 26, 2, 18, 6, 38, 12, 12, 38, 9, 36, 1, 16, 18, -1, 9, 16, 26, 24, 26, 17, 10, 38, 5, 16, 9, 38, 25, -1, 12, 41, 28, 21, 10, 42, 38, 18, 38, 33, 30, 2, 22, 12, 12, 38, 25, 38, 32, 30, 17, 10, 38, 12, 41, 5, 2, 10, 5, 16, 31, 42, 38, 16, 30, 5, 2, 10, 38, 41, 12, 5, 2, 10, 38, 12, 41, 28, 24, 2, 10, 42, 38, 18, 38, 33, 31, 16, 29, 5, 18, 22, 12, 12, 38,
25, -1, 25, -1, 2, 31, 17, 10, 11, 38, 12, 41, 28, 2, 6, 10, 42, 38, 33, 31, 18, 29, 5, 2, 1, 22, 12, 12, 41, 18, 42, 38, 25, 38, 11, 0, 10, 38, 12, 41, 5, 2, 10, 42, 38, 16, 30, 11, 0, 10, 5, 10, 38, 2, 10, 38, 6, 10, 10, 38, 11, 22, 6, 6, 10, 38, 11, 22, 6, 6, 10, 38, 11, 22, 12, 38, 25, 38, 0, 33, 30, 2, 22, 12, 12, 41, 18, 42, 38, 25, 38, 2, 10, 10, 38, 10, 38, 16, 30, 2, 10, 15, 10, 22, 16, 30, 2, 6, 10, 22, 16, 30, 2, 6, 10, 22, 41, 12, 11, 5, 2, 12, 38, 2, 10, 18, 5, 11, 15, 11, 38, 12, 12, 38, 25, 38, 2, 30, 6, 38, 2, 30, 6, 38, 2, 30, 6, 38, 41, 12, 30, 2, 10, 5, 17, 10, 38, 16, 30, 18, 24, 26, 38, 12, 12, 41, 18, 0, 18, 42, 38, 1, 10, 38, 1, 10, -1, 10, 38, 18, 38, 18, 5, 16, 38, 12, 12, 38, 5, 1, 11, 38, 6, 10, 38, 16, 31,
29, 16, 38, 28, 14, 17, 11, -1, 16, 31, -1, 16, 31, 16, 31, 36, 6, -1, 12, 38, 25, 38, 16, 31, 2, 10, 38, 12, 12, 38, 25, 38, 25, 38, 16, 31, 36, 6, 38, 16, 9, 36, 26, 38, 16, 31, 36, 18, 26, 38, 11, 31, 16, 7, 38, 16, 9, 26, 16, 22, 16, 31, 36, 18, 26, 16, 38, 11, 31, 6, 38, 16, 9, 26, 11, 38, 0, 16, 38, 25, 38, 25, 38, 25, 38, 16, 9, 36, 26, 38, 16, 9, 36, 26, 38, 25, 38, 26, 16, 38, 0, 16, 9, 36, 38, 41, 18, 12, 30, 29, 28, 1, 5, 17, 11, 38, 16, 30, 5, 2, 6, 10, 38, 30, 18, 18, 38, 0, 30, 17, 11, 18, 18, 5, 2, 10, 5, 2, 10, 15, 11, 38, 25, 38, 16, 31, 16, 26, 16, 38, 0, 38, 16, 9, 36, 26, 38, 12, 38, 25, 38, 32, 30, 18, 6, 38, 6, 10, 38, 2, 10, 38, 16, 31, 36, 28, 24, 26, 16, 38, 12, 12, 38, 0, 38, 18, 38, 18, 38, 16, 31, 36, 26, 38, 10, 38, 16, 31, 29, 24, 26, 38, 16, 31, 38, 41, 16, 30, 2, 10, 5, 10, 5, 17, 10, 38, 10, 38], 'genre': 'Drama', 'subgenre': 'drama', 'year': '1919', 'quarter_cent': '1900-1924', 'decade': '1910s', 'title': 'Fame and the poet', 'author': 'Dunsany [Edward John Moreton Drax Plunkett]', 'notes': '', 'comments': 'selected from larger file', 'period': '1850-1920', 'id': '317'}
```
### Data Fields
There are three configs in this dataset: `plain`, `class`, and `pos`. `plain` is a simple text dataset, whereas `pos` and `class` are both annotated configs that include POS tagging. A `plain` data point has the following fields:
```
{
"text": The text in the sample("string"),
"genre": The genre of the text("string"),
"subgenre": The subgenre of the text("string"),
"year": The year the text was produced("string"),
"quarter_cent": The quarter century in which the text was produced("string"),
"decade": The decade the text was produced("string"),
"title": The title of the text("string"),
"author": The author of the text("string"),
"notes": Notes about the text, if any("string"),
"comments": Commentsabout the text, if any("string"),
"period": 70-year period during which the text was produced("string"),
"id": Unqiue identifier("string"),
}
```
A typical `pos`/`class` data point has the following fields:
```
{
"text": The tokens in the sample(list("string")),
"pos_tags": Corresponding POS tags for the tokens (list("string"))
"genre": The genre of the text("string"),
"subgenre": The subgenre of the text("string"),
"year": The year the text was produced("string"),
"quarter_cent": The quarter century in which the text was produced("string"),
"decade": The decade the text was produced("string"),
"title": The title of the text("string"),
"author": The author of the text("string"),
"notes": Notes about the text, if any("string"),
"comments": Commentsabout the text, if any("string"),
"period": 70-year period during which the text was produced("string"),
"id": Unqiue identifier("string"),
}
```
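To make the layout concrete, here is a minimal sketch (not part of the original card) that pairs each token with its tag for one annotated example; the field names follow the description above, while the toy values and tag inventory are invented for illustration.
```python
# Minimal sketch: pair tokens with tags for one `pos`/`class` data point.
# Field names ("text", "pos_tags") follow the field description above; the
# toy sample below is invented and the tags are only illustrative.
def token_tag_pairs(example):
    tokens = example["text"]      # list of tokens
    tags = example["pos_tags"]    # list of tags aligned with the tokens
    assert len(tokens) == len(tags), "tokens and tags should be aligned"
    return list(zip(tokens, tags))

sample = {"text": ["Fancy", "seeing", "Dick", "again", "."],
          "pos_tags": ["VB", "VBG", "NP", "RB", "."]}
print(token_tag_pairs(sample))
```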
### Data Splits
Train: 333
## Dataset Creation
### Curation Rationale
The Corpus of Late Modern English Texts (CLMET) is a corpus of roughly 35 million words of
British English from 1710 to 1920, grouped into three 70-year periods (De Smet 2005; Diller et
al. 2011). The history, versions and specifics of corpus composition can be followed up by
referring to the CLMET3.0 website. CLMET3.0 is currently distributed in three formats: (i)
plain text, (ii) plain text with one sentence per line, and (iii) a tagged version (one sentence
per line).
Version CLMET3.1 is the result of making CLMET available in a CQP format for use in
CWB and CQPweb-based corpus environments (Evert & Hardie 2011; Evert 2010a). While
there is no change to the selection of texts, CLMET3.1 includes additions and changes in
linguistic annotation. The changes in CLMET3.1 are of three general types: (a) retokenization
and retagging, (b) fixing of some systematic issues that come with historical data, and (c)
enhancing annotation by adding lemmas and simplified part-of-speech class tags.
### Source Data
#### Initial Data Collection and Normalization
The initial data comes from OCR of English texts published between 1710 and 1920.
#### Who are the source language producers?
The text was produced by the authors of the original works and then digitized via OCR.
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
This dataset does not contain any personal information, as these are historical texts. Some content might be sensitive.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
Because this is historical data, tagging remains problematic in all areas and should be treated
with caution (especially with noun recognition) and/or combined with more coarse-grained
class queries. Also bear in mind that the lemmas for unknown items are in lower
case, while proper names that the tagger did recognize are not necessarily all lower case. In
addition, lemmatization may not be consistent, e.g. in the area of -ize/ise spellings; these were
not homogenized to preserve as much of the original orthography as possible.
## Additional Information
### Dataset Curators
The Corpus of Late Modern English Texts, version 3.1 (CLMET3.1) has been created by Hendrik De Smet, Susanne Flach, Hans-Jürgen Diller, and Jukka Tyrkkö.
### Licensing Information
Creative Commons Attribution Non Commercial Share Alike 4.0 International
### Citation Information
[Needs More Information] | 59,840 | [
[
-0.05096435546875,
-0.03021240234375,
0.027679443359375,
0.008148193359375,
-0.0295562744140625,
-0.0166015625,
-0.00799560546875,
-0.039703369140625,
0.043548583984375,
0.055267333984375,
-0.045013427734375,
-0.04559326171875,
-0.0296630859375,
0.0265655517... |
autoevaluate/autoeval-staging-eval-project-sms_spam-216c1ded-12215630 | 2022-08-02T10:41:15.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 23 | 2022-08-02T10:40:39 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- sms_spam
eval_info:
task: binary_classification
model: Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection
metrics: []
dataset_name: sms_spam
dataset_config: plain_text
dataset_split: train
col_mapping:
text: sms
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection
* Dataset: sms_spam
* Config: plain_text
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Al-Ip](https://huggingface.co/Al-Ip) for evaluating this model. | 896 | [
[
-0.0272216796875,
-0.03448486328125,
0.01192474365234375,
0.0150604248046875,
-0.0095367431640625,
-0.00780487060546875,
-0.0009756088256835938,
-0.032470703125,
-0.000598907470703125,
0.036376953125,
-0.064208984375,
-0.024139404296875,
-0.06341552734375,
0... |
ai-forever/school_notebooks_RU | 2023-02-09T18:27:24.000Z | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"source_datasets:original",
"language:ru",
"license:mit",
"optical-character-recognition",
"text-detection",
"ocr",
"region:us"
] | ai-forever | null | null | 7 | 23 | 2022-09-08T10:06:32 | ---
language:
- ru
license:
- mit
source_datasets:
- original
task_categories:
- image-segmentation
- object-detection
task_ids: []
tags:
- optical-character-recognition
- text-detection
- ocr
---
# School Notebooks Dataset
The images of school notebooks with handwritten notes in Russian.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries (a minimal parsing sketch follows this list):
- `annotation["categories"]` - a list of dicts with category info (category names and indexes).
- `annotation["images"]` - a list of dictionaries describing the images; each dictionary must contain the fields:
- `file_name` - name of the image file.
- `id` - the image id.
- `annotation["annotations"]` - a list of dictionaries with the markup information. Each dictionary stores a description of one polygon from the dataset and must contain the following fields:
- `image_id` - the index of the image on which the polygon is located.
- `category_id` - the polygon’s category index.
- `attributes` - dict with some additional annotation information. In the `translation` subdict you can find the text translation for the line.
- `segmentation` - the coordinates of the polygon: a list of numbers forming x, y coordinate pairs.
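The sketch below shows one way to read such an `annotation.json` and group polygons by image. It relies only on the fields listed above; the file path is a placeholder and the `id`/`name` keys inside `categories` are assumed to follow standard COCO conventions.
```python
import json
from collections import defaultdict

# Sketch: group polygon annotations by image. "annotation.json" is a placeholder
# path; the "id"/"name" keys of the category dicts are assumed (standard COCO).
with open("annotation.json", encoding="utf-8") as f:
    annotation = json.load(f)

categories = {c["id"]: c.get("name") for c in annotation["categories"]}
images = {img["id"]: img["file_name"] for img in annotation["images"]}

polygons_per_image = defaultdict(list)
for ann in annotation["annotations"]:
    polygons_per_image[ann["image_id"]].append({
        "category": categories.get(ann["category_id"]),
        "translation": ann.get("attributes", {}).get("translation"),
        "segmentation": ann["segmentation"],  # list of x, y coordinate pairs
    })

for image_id, polygons in polygons_per_image.items():
    print(images.get(image_id), "->", len(polygons), "polygons")
```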
[
-0.02130126953125,
-0.041717529296875,
0.0221099853515625,
0.0048675537109375,
-0.0386962890625,
0.0192718505859375,
-0.01216888427734375,
-0.016998291015625,
0.022918701171875,
0.038330078125,
-0.0252532958984375,
-0.058807373046875,
-0.05133056640625,
0.01... |
farleyknight/big_patent_5_percent | 2022-09-19T21:58:56.000Z | [
"region:us"
] | farleyknight | null | null | 0 | 23 | 2022-09-19T21:58:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tomekkorbak/detoxify-pile-chunk3-900000-950000 | 2022-10-04T23:47:24.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 23 | 2022-10-04T17:53:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bigbio/genetag | 2022-12-22T15:44:38.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | Named entity recognition (NER) is an important first step for text mining the biomedical literature.
Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus.
The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity
of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE®
sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. | @article{Tanabe2005,
author = {Lorraine Tanabe and Natalie Xie and Lynne H Thom and Wayne Matten and W John Wilbur},
title = {{GENETAG}: a tagged corpus for gene/protein named entity recognition},
journal = {{BMC} Bioinformatics},
volume = {6},
year = {2005},
url = {https://doi.org/10.1186/1471-2105-6-S1-S3},
doi = {10.1186/1471-2105-6-s1-s3},
biburl = {},
bibsource = {}
} | 2 | 23 | 2022-11-13T22:08:32 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: NCBI_LICENSE
pretty_name: GENETAG
homepage: https://github.com/openbiocorpora/genetag
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for GENETAG
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/genetag
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
Named entity recognition (NER) is an important first step for text mining the biomedical literature.
Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus.
The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity
of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE®
sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition.
## Citation Information
```
@article{Tanabe2005,
author = {Lorraine Tanabe and Natalie Xie and Lynne H Thom and Wayne Matten and W John Wilbur},
title = {{GENETAG}: a tagged corpus for gene/protein named entity recognition},
journal = {{BMC} Bioinformatics},
volume = {6},
year = {2005},
url = {https://doi.org/10.1186/1471-2105-6-S1-S3},
doi = {10.1186/1471-2105-6-s1-s3},
biburl = {},
bibsource = {}
}
```
| 1,440 | [
[
-0.036041259765625,
-0.03369140625,
0.0072174072265625,
-0.0099334716796875,
-0.0235748291015625,
-0.0032215118408203125,
-0.0096435546875,
-0.051239013671875,
0.043670654296875,
0.0227813720703125,
-0.020660400390625,
-0.036865234375,
-0.049041748046875,
0.... |
bigbio/tmvar_v3 | 2023-02-17T14:55:58.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"arxiv:2204.03637",
"region:us"
] | bigbio | This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbsnp normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER and NED tasks. This dataset does NOT have splits. | @misc{https://doi.org/10.48550/arxiv.2204.03637,
title = {tmVar 3.0: an improved variant concept recognition and normalization tool},
author = {
Wei, Chih-Hsuan and Allot, Alexis and Riehle, Kevin and Milosavljevic,
Aleksandar and Lu, Zhiyong
},
year = 2022,
publisher = {arXiv},
doi = {10.48550/ARXIV.2204.03637},
url = {https://arxiv.org/abs/2204.03637},
copyright = {Creative Commons Attribution 4.0 International},
keywords = {
Computation and Language (cs.CL), FOS: Computer and information sciences,
FOS: Computer and information sciences
}
} | 1 | 23 | 2022-11-13T22:12:35 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: tmVar v3
homepage: https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for tmVar v3
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbsnp normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER and NED tasks. This dataset does NOT have splits.
## Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2204.03637,
title = {tmVar 3.0: an improved variant concept recognition and normalization tool},
author = {
Wei, Chih-Hsuan and Allot, Alexis and Riehle, Kevin and Milosavljevic,
Aleksandar and Lu, Zhiyong
},
year = 2022,
publisher = {arXiv},
doi = {10.48550/ARXIV.2204.03637},
url = {https://arxiv.org/abs/2204.03637},
copyright = {Creative Commons Attribution 4.0 International},
keywords = {
Computation and Language (cs.CL), FOS: Computer and information sciences,
FOS: Computer and information sciences
}
}
```
| 1,545 | [
[
-0.0178985595703125,
-0.034576416015625,
0.0228118896484375,
0.0027141571044921875,
-0.033599853515625,
-0.0008139610290527344,
-0.0227203369140625,
-0.020172119140625,
0.0124053955078125,
0.041015625,
-0.03314208984375,
-0.06390380859375,
-0.049896240234375,
... |
osanseviero/twitter-airline-sentiment | 2022-11-16T22:31:48.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | osanseviero | null | null | 0 | 23 | 2022-11-16T22:31:43 | ---
license:
- cc-by-nc-sa-4.0
converted_from: kaggle
kaggle_id: crowdflower/twitter-airline-sentiment
---
# Dataset Card for Twitter US Airline Sentiment
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/crowdflower/twitter-airline-sentiment
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
*This data originally came from [Crowdflower's Data for Everyone library](http://www.crowdflower.com/data-for-everyone).*
As the original source says,
> A sentiment analysis job about the problems of each major U.S. airline. Twitter data was scraped from February of 2015 and contributors were asked to first classify positive, negative, and neutral tweets, followed by categorizing negative reasons (such as "late flight" or "rude service").
The data we're providing on Kaggle is a slightly reformatted version of the original source. It includes both a CSV file and a SQLite database. The code that does these transformations is [available on GitHub](https://github.com/benhamner/crowdflower-airline-twitter-sentiment).
For example, it contains whether the sentiment of the tweets in this set was positive, neutral, or negative for six US airlines:
[Exploring airline Twitter sentiment data (Kaggle notebook)](https://www.kaggle.com/benhamner/d/crowdflower/twitter-airline-sentiment/exploring-airline-twitter-sentiment-data)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@crowdflower](https://kaggle.com/crowdflower)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] | 3,751 | [
[
-0.032867431640625,
-0.03375244140625,
0.002262115478515625,
0.035736083984375,
-0.0215301513671875,
0.011322021484375,
-0.0220947265625,
-0.024932861328125,
0.0567626953125,
0.023773193359375,
-0.07196044921875,
-0.057281494140625,
-0.040374755859375,
0.005... |
ai4bharat/kathbath | 2022-12-09T09:59:48.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:mit",
"arxiv:2208.11761",
"region:us"
] | ai4bharat | null | @misc{https://doi.org/10.48550/arxiv.2208.11761,
doi = {10.48550/ARXIV.2208.11761},
url = {https://arxiv.org/abs/2208.11761},
author = {Javed, Tahir and Bhogale, Kaushal Santosh and Raman, Abhigyan and Kunchukuttan, Anoop and Kumar, Pratyush and Khapra, Mitesh M.},
title = {IndicSUPERB: A Speech Processing Universal Performance Benchmark for Indian languages},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
} | 2 | 23 | 2022-12-04T13:28:53 | ---
annotations_creators:
- expert-generated
language_bcp47:
- bn,gu,kn,hi,ml,mr,or,pa,sn,ta,te,ur
language_creators:
- machine-generated
license:
- mit
multilinguality:
- multilingual
pretty_name: Kathbath
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for Kathbath
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai4bharat.org/indic-superb
- **Repository:** https://github.com/AI4Bharat/IndicSUPERB
- **Paper:** https://arxiv.org/pdf/2208.11761.pdf
- **Point of Contact:** tahirjmakhdoomi@gmail.com
### Dataset Summary
Kathbath is a human-labelled ASR dataset containing 1,684 hours of labelled speech data across 12 Indian languages from 1,218 contributors located in 203 districts in India.
### Languages
- Bengali
- Gujarati
- Kannada
- Hindi
- Malayalam
- Marathi
- Odia
- Punjabi
- Sanskrit
- Tamil
- Telugu
- Urdu
## Dataset Structure
```
Audio Data
data
├── bengali
│ ├── <split_name>
│ │ ├── 844424931537866-594-f.m4a
│ │ ├── 844424931029859-973-f.m4a
│ │ ├── ...
├── gujarati
├── ...
Transcripts
data
├── bengali
│ ├── <split_name>
│ │ ├── transcription_n2w.txt
├── gujarati
├── ...
```
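As a rough illustration of working with this layout (not an official loader), the sketch below lists the audio files and transcription lines for one language and split; the root path and split name are placeholders, both trees are assumed to be unpacked under the same `data` directory, and the mapping between transcript lines and individual audio files is not specified in this card, so it is left out.
```python
from pathlib import Path

# Illustration only: inspect one language/split of the layout shown above.
# "data", "bengali" and "train" are placeholders; adjust to your local copy.
root = Path("data")
language, split = "bengali", "train"

audio_files = sorted((root / language / split).glob("*.m4a"))
transcript_file = root / language / split / "transcription_n2w.txt"
transcript_lines = transcript_file.read_text(encoding="utf-8").splitlines()

print(f"{language}/{split}: {len(audio_files)} audio files, "
      f"{len(transcript_lines)} transcript lines")
```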
### Licensing Information
The IndicSUPERB dataset is released under this licensing scheme:
- We do not own any of the raw text used in creating this dataset.
- The text data comes from the IndicCorp dataset which is a crawl of publicly available websites.
- The audio transcriptions of the raw text and labelled annotations of the datasets have been created by us.
- We license the actual packaging of all this data under the Creative Commons CC0 license (“no rights reserved”).
- To the extent possible under law, AI4Bharat has waived all copyright and related or neighboring rights to the IndicSUPERB dataset.
- This work is published from: India.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2208.11761,
doi = {10.48550/ARXIV.2208.11761},
url = {https://arxiv.org/abs/2208.11761},
author = {Javed, Tahir and Bhogale, Kaushal Santosh and Raman, Abhigyan and Kunchukuttan, Anoop and Kumar, Pratyush and Khapra, Mitesh M.},
title = {IndicSUPERB: A Speech Processing Universal Performance Benchmark for Indian languages},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
We would like to thank the Ministry of Electronics and Information Technology (MeitY) of the Government of India and the Centre for Development of Advanced Computing (C-DAC), Pune for generously supporting this work and providing us access to multiple GPU nodes on the Param Siddhi Supercomputer. We would like to thank the EkStep Foundation and Nilekani Philanthropies for their generous grant which went into hiring human resources as well as cloud resources needed for this work. We would like to thank DesiCrew for connecting us to native speakers for collecting data. We would like to thank Vivek Seshadri from Karya Inc. for helping setup the data collection infrastructure on the Karya platform. We would like to thank all the members of AI4Bharat team in helping create the Query by Example dataset. | 4,303 | [
[
-0.0226593017578125,
-0.0307159423828125,
-0.0035381317138671875,
0.031280517578125,
-0.0322265625,
0.027008056640625,
-0.00921630859375,
-0.026031494140625,
0.0203094482421875,
0.0258941650390625,
-0.03265380859375,
-0.045257568359375,
-0.042724609375,
0.01... |
jamescalam/ml-qa | 2023-01-04T12:26:06.000Z | [
"region:us"
] | jamescalam | null | null | 0 | 23 | 2023-01-04T12:21:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jonathan-roberts1/NWPU-RESISC45 | 2023-03-31T16:57:43.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | 0 | 23 | 2023-01-20T15:46:31 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': airport
'2': baseball diamond
'3': basketball court
'4': beach
'5': bridge
'6': chaparral
'7': church
'8': circular farmland
'9': cloud
'10': commercial area
'11': dense residential
'12': desert
'13': forest
'14': freeway
'15': golf course
'16': ground track field
'17': harbor
'18': industrial area
'19': intersection
'20': island
'21': lake
'22': meadow
'23': medium residential
'24': mobile home park
'25': mountain
'26': overpass
'27': palace
'28': parking lot
'29': railway
'30': railway station
'31': rectangular farmland
'32': river
'33': roundabout
'34': runway
'35': sea ice
'36': ship
'37': snowberg
'38': sparse residential
'39': stadium
'40': storage tank
'41': tennis court
'42': terrace
'43': thermal power station
'44': wetland
splits:
- name: train
num_bytes: 381151705
num_examples: 31500
download_size: 424827902
dataset_size: 381151705
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "NWPU-RESISC45"
## Dataset Description
- **Paper:** [Remote sensing image scene classification: Benchmark and state of the art](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
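A minimal loading sketch is shown below; it assumes the dataset can be pulled from the Hub under the id in this card's title, relies on the single `train` split and the ClassLabel feature declared in the YAML header, and assumes images decode to PIL objects.
```python
from datasets import load_dataset

# Sketch only: load the scene-classification images and print one readable label.
# Split name and label feature follow this card's YAML metadata.
ds = load_dataset("jonathan-roberts1/NWPU-RESISC45", split="train")

label_feature = ds.features["label"]
example = ds[0]
print(example["image"].size, "->", label_feature.int2str(example["label"]))
```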
### Licensing Information
[CC-BY-SA]
## Citation Information
[Remote sensing image scene classification: Benchmark and state of the art](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
```
@article{cheng2017remote,
title = {Remote sensing image scene classification: Benchmark and state of the art},
author = {Cheng, Gong and Han, Junwei and Lu, Xiaoqiang},
year = 2017,
journal = {Proceedings of the IEEE},
publisher = {IEEE},
volume = 105,
number = 10,
pages = {1865--1883}
}
``` | 2,320 | [
[
-0.042266845703125,
0.00949859619140625,
-0.003879547119140625,
0.005222320556640625,
-0.028564453125,
-0.0148773193359375,
0.01873779296875,
-0.038482666015625,
-0.03228759765625,
0.01824951171875,
-0.03936767578125,
-0.046905517578125,
-0.0216522216796875,
... |
NeelNanda/pile-tokenized-10b | 2023-01-24T20:52:44.000Z | [
"region:us"
] | NeelNanda | null | null | 0 | 23 | 2023-01-24T17:14:07 | ---
dataset_info:
features:
- name: tokens
sequence: uint16
splits:
- name: train
num_bytes: 22153340700
num_examples: 10795975
download_size: 19746448291
dataset_size: 22153340700
---
# Dataset Card for "pile-tokenized-10b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 381 | [
[
-0.048187255859375,
-0.0275421142578125,
-0.000736236572265625,
0.03411865234375,
-0.0266571044921875,
0.0126190185546875,
0.029144287109375,
-0.01165008544921875,
0.07501220703125,
0.042510986328125,
-0.040313720703125,
-0.046783447265625,
-0.05718994140625,
... |
artem9k/ai-text-detection-pile | 2023-02-27T03:37:54.000Z | [
"license:mit",
"region:us"
] | artem9k | null | null | 2 | 23 | 2023-02-27T02:52:29 | ---
license: mit
---
# Dataset Card for AI Text Detection Pile
## Dataset Description
- **Point of Contact:** artem9k@gmail.com
### Dataset Summary
This is a large-scale dataset intended for AI text detection tasks, geared toward long-form text and essays. It contains samples of both human text and AI-generated text from GPT2, GPT3, ChatGPT, and GPTJ.
Here is the (tentative) breakdown:
#### Human Text
| Dataset | Num Samples | Link |
| ----------- | ----------- | ----------- |
| Reddit WritingPromps | 570k | [Link](https://www.kaggle.com/datasets/ratthachat/writing-prompts) |
| OpenAI Webtext | 260k | [Link](https://github.com/openai/gpt-2-output-dataset) |
| HC3 (Human Responses) | 58k | [Link](https://huggingface.co/datasets/Hello-SimpleAI/HC3) |
| ivypanda-essays | TODO | TODO |
| **Total** | **990k** | **-** |
#### AI-Generated Text
| Model | Dataset | Num Samples | Link |
| ----------- | ----------- | ----------- | ----------- |
| GPT2 | OpenAI gpt2-output-dataset | 260k | [Link](https://github.com/openai/gpt-2-output-dataset) |
| GPT3 | pairwise-davinci | 44k | TODO |
| GPT3 | synthetic-instruct-davinci-pairwise | 30k | [Link](https://huggingface.co/datasets/Dahoas/instruct-synthetic-prompt-responses) |
| GPTJ | synthetic-instruct-gptj-pairwise | 44k | [Link](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) |
| ChatGPT | Scraped from twitter | 5k | **-** |
| ChatGPT | HC3 (ChatGPT Responses) | 27k | [Link](https://huggingface.co/datasets/Hello-SimpleAI/HC3) |
| ChatGPT | ChatGPT Prompts/emergentmind | 500 | [Link](https://huggingface.co/datasets/MohamedRashad/ChatGPT-prompts/tree/main) |
| **Total** | **340k** | **-** | **-** |
### Supported Tasks and Leaderboards
Text Classification, AI Text Detection.
### Languages
English.
### Data Fields
TEXT: The text of the sample.
SOURCE: either "human" or "ai" | 1,872 | [
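A minimal loading sketch follows; the Hub id comes from this card, but the split name and the exact field casing (`TEXT`/`SOURCE` versus lower case) are assumptions to verify against the loaded dataset.
```python
from collections import Counter
from datasets import load_dataset

# Sketch only: count samples per source. The "train" split name and the exact
# column casing are assumptions; check ds.column_names on your copy first.
ds = load_dataset("artem9k/ai-text-detection-pile", split="train")
print(ds.column_names)

source_col = "SOURCE" if "SOURCE" in ds.column_names else "source"
print(Counter(ds[source_col]))
```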
[
-0.0309600830078125,
-0.04949951171875,
0.0177764892578125,
-0.00038933753967285156,
-0.01404571533203125,
0.007556915283203125,
-0.00620269775390625,
-0.04620361328125,
0.010650634765625,
0.038970947265625,
-0.043731689453125,
-0.061920166015625,
-0.05032348632... |
AyoubChLin/CNN_News_Articles_2011-2022 | 2023-04-10T15:29:24.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | AyoubChLin | null | null | 2 | 23 | 2023-03-19T11:01:10 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: CNN News Articles from 2011 to 2022
size_categories:
- n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': business
'1': entertainment
'2': health
'3': news
'4': politics
'5': sport
splits:
- name: train
num_examples: 32218
- name: test
num_examples: 5686
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
---
# CNN News Articles 2011-2022 Dataset
## Introduction
This dataset contains CNN News Articles from 2011 to 2022 after basic cleaning. The dataset includes the following information:
- Category
- Full text

The data was downloaded from Kaggle at this URL: https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning. The dataset was split into two sets:
- Train set with 32,218 examples
- Test set with 5,686 examples
## Usage
This dataset can be used for different natural language processing tasks such as text classification, text summarization, named entity recognition, and more. The dataset is available in Hugging Face Datasets with the ID AyoubChLin/CNN_News_Articles_2011-2022.
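A minimal loading sketch (no config name is assumed, and the `label` column is assumed to load as the ClassLabel declared in the YAML header above):
```python
from datasets import load_dataset

# Sketch only: load both splits and decode the class label of the first row.
ds = load_dataset("AyoubChLin/CNN_News_Articles_2011-2022")

label_names = ds["train"].features["label"].names  # business, entertainment, ...
row = ds["train"][0]
print(label_names[row["label"]], "|", row["text"][:80])
```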
## Acknowledgements
The data was collected by the Kaggle user [hadasu92](https://github.com/hadasu). The splitting of the dataset into train and test sets was performed by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/). | 1,774 | [
[
-0.034027099609375,
-0.05712890625,
0.00815582275390625,
0.0253143310546875,
-0.03436279296875,
-0.01102447509765625,
-0.0214385986328125,
-0.0321044921875,
0.01171112060546875,
0.0418701171875,
-0.04364013671875,
-0.0308685302734375,
-0.053009033203125,
0.0... |
paulofinardi/OIG_small_chip2_portuguese_brasil | 2023-03-19T23:16:11.000Z | [
"task_categories:conversational",
"task_categories:text2text-generation",
"language:pt",
"region:us"
] | paulofinardi | null | null | 8 | 23 | 2023-03-19T22:45:05 | ---
dataset_info:
features:
- name: user
dtype: string
- name: chip2
dtype: string
splits:
- name: train
num_examples: 210289
task_categories:
- conversational
- text2text-generation
language:
- pt
---
# Dataset Card for "OIG_small_chip2_portuguese_brasil"
This dataset was translated into Brazilian Portuguese from [here](https://huggingface.co/datasets/0-hero/OIG-small-chip2).
The data was translated with the *MarianMT* model and the weights [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE).
The full details to replicate the translation are here: [translation_notebook](https://github.com/finardi/tutos/blob/master/translate_Laion_OIG.ipynb)
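For a quick idea of the procedure, here is a heavily simplified sketch; the target-language prefix token (`>>pt_br<<`) and the single-sentence batching are assumptions for illustration, and the linked notebook remains the authoritative description of the actual pipeline.
```python
from transformers import MarianMTModel, MarianTokenizer

# Simplified sketch of the translation step described above. The ">>pt_br<<"
# prefix is an assumption about how this multilingual OPUS-MT checkpoint selects
# Brazilian Portuguese; see the linked notebook for the real preprocessing.
model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>pt_br<< How are you today?"], return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```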
---
license: apache-2.0
--- | 728 | [
[
0.0001728534698486328,
-0.037445068359375,
0.00835418701171875,
0.033294677734375,
-0.0445556640625,
-0.0175933837890625,
-0.0163421630859375,
-0.046661376953125,
0.03302001953125,
0.04376220703125,
-0.031707763671875,
-0.046173095703125,
-0.04730224609375,
... |
breadlicker45/musenet-encoders-12k | 2023-03-21T22:03:18.000Z | [
"region:us"
] | breadlicker45 | null | null | 1 | 23 | 2023-03-21T21:54:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
artemkramov/coreference-dataset-ua | 2023-04-02T11:54:35.000Z | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:uk",
"coreference-resolution",
"coreference",
"anaphora",
"region:us"
] | artemkramov | null | null | 4 | 23 | 2023-04-01T13:07:36 | ---
task_categories:
- token-classification
language:
- uk
pretty_name: 'Silver Ukrainian Coreference Dataset'
tags:
- coreference-resolution
- coreference
- anaphora
size_categories:
- 10K<n<100K
---
# Silver Ukrainian Coreference Dataset
## Dataset Description
### Dataset Summary
A silver coreference resolution dataset for the Ukrainian language. The dataset was generated automatically by applying a word-alignment method to the following English dataset: https://github.com/d5555/Coreference-dataset.
The word alignment method was implemented by Andrii Kursin (aqrsn@ukr.net).
### Languages
- Ukrainian
## Dataset Structure
### Data Fields
Each sample of the dataset consists of the following fields:
- **doc_key** - document identifier.
- **clusters** - list of clusters, where each cluster consists of a list of mentions. Each mention is represented as a list of two indices: the first index denotes the first word of the mention, and the second index denotes the last word of the mention (see the decoding sketch after this list).
- **sentences** - list of sentences where each sentence is represented as a list of words.
- **tokens** - list of words.
- **speakers** - list of speakers which is currently filled with dummy input.
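A minimal decoding sketch, using only the fields described above (the toy sample is invented, and mention indices are treated as inclusive word offsets, as stated in the `clusters` description):
```python
# Sketch only: turn cluster indices into the surface strings of their mentions.
def cluster_mention_texts(sample):
    tokens = sample["tokens"]
    decoded = []
    for cluster in sample["clusters"]:
        # each mention is [first_word_index, last_word_index], inclusive
        decoded.append([" ".join(tokens[start:end + 1]) for start, end in cluster])
    return decoded

sample = {
    "tokens": ["Артем", "сказав", ",", "що", "він", "прийде"],
    "clusters": [[[0, 0], [4, 4]]],
}
print(cluster_mention_texts(sample))  # [['Артем', 'він']]
```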
### Data Splits
The dataset is divided into two parts:
- training set;
- validation set.
A test set is absent because the dataset was generated automatically.
## Dataset Creation
### Source Data
The dataset was created from the following dataset: https://github.com/d5555/Coreference-dataset.
### Contributions
The code for the translation of samples with further alignment was created by Andrii Kursin (aqrsn@ukr.net). The dataset was generated by Artem Kramov (https://www.linkedin.com/in/artem-kramov-0b3731100/). | 1,739 | [
[
-0.0116729736328125,
-0.0020198822021484375,
0.0167999267578125,
-0.025726318359375,
-0.026397705078125,
0.0191802978515625,
-0.019134521484375,
-0.0159454345703125,
0.0167694091796875,
0.02093505859375,
-0.038421630859375,
-0.07177734375,
-0.0386962890625,
... |
liuyanchen1015/MULTI_VALUE_sst2_negative_concord | 2023-04-03T19:48:02.000Z | [
"region:us"
] | liuyanchen1015 | null | null | 0 | 23 | 2023-04-03T19:47:58 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 6956
num_examples: 48
- name: test
num_bytes: 12384
num_examples: 84
- name: train
num_bytes: 165604
num_examples: 1366
download_size: 95983
dataset_size: 184944
---
# Dataset Card for "MULTI_VALUE_sst2_negative_concord"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 584 | [
[
-0.0286712646484375,
-0.01107025146484375,
0.0220794677734375,
0.018890380859375,
-0.041534423828125,
0.003383636474609375,
0.017669677734375,
-0.00457763671875,
0.060760498046875,
0.0198974609375,
-0.048004150390625,
-0.05413818359375,
-0.04827880859375,
-0... |
Svetlana0303/1500_aug_ds | 2023-04-10T15:21:03.000Z | [
"region:us"
] | Svetlana0303 | null | null | 0 | 23 | 2023-04-10T15:18:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ruanchaves/rerelem | 2023-04-14T11:01:24.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|harem",
"language:pt",
"relation extraction,",
"region:us"
] | ruanchaves | 2 | 23 | 2023-04-11T07:18:00 | ---
annotations_creators:
- expert-generated
language:
- pt
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: ReRelEM
size_categories:
- 1K<n<10K
source_datasets:
- extended|harem
tags:
- relation extraction,
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for ReRelEM
## Dataset Description
- **Paper:** [Relation detection between named entities: report of a shared task](https://aclanthology.org/W09-2421.pdf)
- **Point of Contact:** [Hugo Gonçalo Oliveira](hroliv@dei.uc.pt)
### Dataset Summary
The ReRelEM dataset is designed for the detection and classification of relations between named entities in Portuguese text. It contains 2226 training, 701 validation, and 805 test instances. Each instance contains two sentences with two entities enclosed by the tags [E1] and [E2]. The dataset provides a fourfold relationship classification: identity, included-in, located-in, and other (which is detailed into twenty different relations).
It's important to note that, although we maintained more than 99% of the original instances, this is not a full representation of the original ReRelEM dataset.
The dataset was split into train, validation, and test sets, after which 21 instances with relation types not included in the training set were dropped from the test set. Furthermore, 7 instances from the original dataset that had formatting errors and could not be resolved into post-processed records were also dropped.
### Supported Tasks and Leaderboards
- Relation extraction: The primary task of this dataset is to classify relations between named entities.
### Languages
- Portuguese
## Dataset Structure
### Data Instances
An example data instance from the dataset:
```json
{
"docid": "cver",
"sentence1": "O PRESIDENTE Sarkozy abriu a Conferência de Dadores realizada em Paris com uma frase grandiloquente sobre a necessidade urgente de criar um Estado palestiniano no fim de 2008 . O Presidente ou é mentiroso ou finge-se ignorante, ou as duas coisas. Depois do falhanço esperado da cimeira de Annapolis , um modo de [E2]Condoleezza Rice[/E2] salvar a face e de a Administração | Administração americana e a Europa continuarem a fingir que estão interessadas em resolver o conflito israelo-palestiniano e de lavarem as mãos de tudo o resto, Sarkozy não pode ignorar que o momento para pronunciamentos débeis é o menos adequado. Tony Blair , depois de ter minado todo o processo de paz do Médio Oriente ao ordenar a invasão do Iraque de braço dado com [E1]Bush[/E1] , continua a emitir piedades deste género, e diz que está na altura de resolver o problema e que ele pode ser resolvido. Blair não sabe o que diz.",
"sentence2": "nan",
"label": "relacao_profissional",
"same_text": true
}
```
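To make the span markup concrete, here is a minimal extraction sketch; it assumes each tag pair occurs exactly once per instance and that `sentence2` holds the string `"nan"` (as in the example above) or is empty when both entities share a sentence.
```python
import re

# Sketch only: pull the [E1]...[/E1] and [E2]...[/E2] spans out of an instance.
# Assumes one occurrence of each tag pair, as in the example above.
def extract_entities(instance):
    text = instance["sentence1"]
    if instance.get("sentence2") and instance["sentence2"] != "nan":
        text += " " + instance["sentence2"]
    e1 = re.search(r"\[E1\](.*?)\[/E1\]", text, flags=re.S).group(1)
    e2 = re.search(r"\[E2\](.*?)\[/E2\]", text, flags=re.S).group(1)
    return e1, e2

instance = {
    "sentence1": "O [E1]Bush[/E1] reuniu-se com [E2]Condoleezza Rice[/E2] em Paris.",
    "sentence2": "nan",
    "label": "relacao_profissional",
}
print(extract_entities(instance))  # ('Bush', 'Condoleezza Rice')
```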
### Data Fields
- `docid`: Document ID of both sentences (sentence1 and sentence2)
- `sentence1`: The first sentence with an entity span enclosed by the tags [E1] and [/E1]
- `sentence2`: The second sentence with an entity span enclosed by the tags [E2] and [/E2]
- `label`: The type of relation between the entities
- `same_text`: True if both entity spans appear in the same sentence. If True, `sentence2` will be empty.
### Data Splits
| | train | validation | test |
|--------|-------|------------|------|
| Instances | 2226 | 701 | 805 |
The dataset was divided in a manner that ensured sentences from the same document did not appear in more than one split.
### Citation Information
```bibtex
@inproceedings{freitas2009relation,
title={Relation detection between named entities: report of a shared task},
author={Freitas, Cl{\'a}udia and Santos, Diana and Mota, Cristina and Oliveira, Hugo Gon{\c{c}}alo and Carvalho, Paula},
booktitle={Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009)},
pages={129--137},
year={2009}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | 4,008 | [
[
-0.0277557373046875,
-0.047393798828125,
0.03924560546875,
0.035980224609375,
-0.0180511474609375,
-0.00408172607421875,
-0.01074981689453125,
-0.0394287109375,
0.03912353515625,
0.046600341796875,
-0.046600341796875,
-0.066162109375,
-0.0615234375,
0.024871... | ||
climatebert/climate_sentiment | 2023-04-18T14:37:00.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | climatebert | null | null | 1 | 23 | 2023-04-11T13:11:01 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ClimateSentiment
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': risk
'1': neutral
'2': opportunity
splits:
- name: train
num_bytes: 492077
num_examples: 1000
- name: test
num_bytes: 174265
num_examples: 320
download_size: 373638
dataset_size: 666342
---
# Dataset Card for climate_sentiment
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for classifying climate-related sentiment of climate-related paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a ternary sentiment classification task of whether a given climate-related paragraph has sentiment opportunity, neutral, or risk.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a climate-related paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> risk, 1 -> neutral, 2 -> opportunity)
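Since `label` is stored as a `ClassLabel` feature, the integer values can be mapped back to their names when inspecting examples. A minimal loading sketch, using the dataset id given above:

```python
from datasets import load_dataset

ds = load_dataset("climatebert/climate_sentiment", split="train")
label_names = ds.features["label"].names  # ['risk', 'neutral', 'opportunity']

example = ds[0]
print(example["text"][:80], "->", label_names[example["label"]])
```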
### Data Splits
The dataset is split into:
- train: 1,000
- test: 320
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. | 4,481 | [
[
-0.022216796875,
-0.02264404296875,
0.01427459716796875,
0.01142120361328125,
-0.029815673828125,
0.002918243408203125,
-0.022369384765625,
-0.040374755859375,
0.02178955078125,
0.0266265869140625,
-0.039459228515625,
-0.06317138671875,
-0.039642333984375,
-... |
tiansz/ChineseSTS | 2023-04-20T07:19:37.000Z | [
"task_categories:sentence-similarity",
"size_categories:1M<n<10M",
"language:zh",
"license:apache-2.0",
"STS",
"region:us"
] | tiansz | null | null | 6 | 23 | 2023-04-20T06:40:04 | ---
license: apache-2.0
task_categories:
- sentence-similarity
language:
- zh
tags:
- STS
size_categories:
- 1M<n<10M
---
This is a Chinese text similarity dataset; similarity is labeled as 0 or 1.
This [notebook](https://www.kaggle.com/code/tiansztianszs/chinese-sentence-similarity) documents my complete workflow with this dataset. You can also download the dataset on [github](https://github.com/tiansztiansz/Chinese-Text-Similarity).
[
0.0002932548522949219,
-0.06951904296875,
0.02374267578125,
0.05194091796875,
-0.04534912109375,
-0.00896453857421875,
0.0003161430358886719,
-0.02728271484375,
0.044830322265625,
0.0290985107421875,
-0.0117340087890625,
-0.0478515625,
-0.0281219482421875,
0... |
BrunoHays/ESLO | 2023-10-03T09:22:11.000Z | [
"task_categories:automatic-speech-recognition",
"language:fr",
"license:cc-by-nc-4.0",
"region:us"
] | BrunoHays | ESLO dataset, each utterance are taken out individually | @misc{11403/eslo/v1,
title = {ESLO},
author = {LLL},
url = {https://hdl.handle.net/11403/eslo/v1},
note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr},
copyright = {Licence Creative Commons Attribution - Pas d'Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International},
year = {2023}
} | 0 | 23 | 2023-04-21T10:30:18 | ---
task_categories:
- automatic-speech-recognition
language:
- fr
license: cc-by-nc-4.0
---
ESLO audio dataset
configs:
- max30s
- max10s
- single_samples (default)
This script relies on the raw data transcript files and audio files
Licence Creative Commons Attribution - Pas d'Utilisation Commerciale - Partage dans les Mêmes Conditions 4.0 International
Dependencies:
- ffmpeg: `sudo apt-get install ffmpeg`
- ffmpeg-python: `pip install ffmpeg-python`
```
{'audio': {'array': array([-0.00250244, 0.00039673, 0.00326538, ..., 0.01953125,
0.02206421, 0.02304077]),
'path': None,
'sampling_rate': 16000},
'end_timestamp': 8.939,
'file': 'ESLO1_INTPERS_437',
'overlap': False,
'sentence': "eh bien je voudrais vous demander d'abord en quoi consiste votre "
'entreprise ici ? exactement',
'speaker': 'spk1',
'start_timestamp': 0.954}
```
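A minimal loading sketch; the repository id and the configuration names come from this card, while the split name is an assumption:

```python
from datasets import load_dataset

# "single_samples" is listed above as the default configuration.
ds = load_dataset("BrunoHays/ESLO", "single_samples", split="train")

sample = ds[0]
print(sample["file"], sample["speaker"], sample["sentence"])
print(sample["audio"]["sampling_rate"], sample["start_timestamp"], sample["end_timestamp"])
```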
Eshkol-Taravella I., Baude O., Maurel D., Hriba L., Dugua C., Tellier I., (2012), Un grand corpus oral « disponible » : le corpus d’Orléans 1968-2012., in Ressources linguistiques libres, TAL. Volume 52 – n° 3/2011, 17-46
Laboratoire Ligérien de Linguistique - UMR 7270 (LLL) (2023). ESLO [Corpus]. ORTOLANG (Open Resources and TOols for LANGuage) - www.ortolang.fr, v1, https://hdl.handle.net/11403/eslo/v1. | 1,304 | [
[
-0.03375244140625,
-0.050933837890625,
0.03515625,
0.0229034423828125,
0.007099151611328125,
0.0008921623229980469,
-0.014892578125,
-0.0032482147216796875,
0.03472900390625,
0.045135498046875,
-0.057281494140625,
-0.0704345703125,
-0.03839111328125,
0.02372... |
heegyu/open-korean-instructions | 2023-05-06T09:18:37.000Z | [
"license:mit",
"region:us"
] | heegyu | null | null | 11 | 23 | 2023-04-22T02:10:17 | ---
license: mit
---
This repository combines four Korean chatbot training datasets. Among them, the ShareGPT data is multi-turn.
For the code used to generate and merge the data, see https://github.com/HeegyuKim/open-korean-instructions
| Name | # | Type |
|---|---|---|
| [KoAlpaca v1.0](https://huggingface.co/datasets/Bingsu/ko_alpaca_data) | 52K | single-turn |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 21K | single-turn |
| [ShareGPT DeepL translation](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) | 620K (single-turn), 84K (multi-turn) | multi-turn, single-turn |
| [OIG-small-chip2-ko](https://huggingface.co/datasets/heegyu/OIG-small-chip2-ko) | 210K | single-turn |
| [Korquad-Chat](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
All data share a unified format: speakers are separated by the three tokens `<sys>`, `<usr>`, `<bot>` and line breaks.
In the korquad-chat data, the user and the bot refer to each other as `<|bot|>` and `<|user|>`.
```
{"source": "koalpaca-v1.0", "text": "<usr> 홀수 중 하나를 밝히세요.\n<sys> 트위터, 인스타그램, 텔레그램\n<bot> 텔레그램입니다."}
{"source": "korquad-chat", "text": "
<sys>1984년 9월부터 덴노지 역 ~ 히네노 역간에 운전을 개시한 '홈 라이너 이즈미'가 전신이다. 특급 구로시오의 381계 전동차의 히네노 전차구 입선 회송 열차를 좌석 정리권을 필요로 하는 정원제 열차로서 운행해온 것으로 간사이 지방 최초의 홈 라이너였다. 1986년 11월 개정에 따라 운전 구간이 와카야마 역까지 연장되어 한와 라이너로 개칭되어 현재에 이르게 되었다. 이전에는 평일에 상행 3편, 하행 6편이 운전되었지만 일부가 신오사카 역이나 교토 역 방면 발착 특급 구로시오로 변경되어 2002년 3월 이후에는 상행 2편, 하행 4편으로 축소되었다. 2009년 3월의 시간표 개정부터는 휴일 시간대의 운전이 중단되었으며 같은 해 6월 1일부터는 전면 금연화되었다.\n
<usr> 안녕하세요, <|bot|>. \"한와_선\"에 대해 알고 계시나요?\n
<bot> 네, 전체적으로 알고 있습니다. 1984년에 운전을 개시한 \"홈 라이너 이즈미\"가 전신이며, 간사이 지방 최초의 홈 라이너였습니다. 1986년에는 와카야마 역까지 연장된 후 \"한와 라이너\"로 개칭되었습니다.\n
<usr> 그렇군요. 이전에는 평일에 몇 편이 운행되었나요?\n
<bot> 평일에 상행 3편, 하행 6편이 운전되었습니다. 하지만 2002년 이후로는 상행 2편, 하행 4편으로 축소되었습니다.\n
<usr> 그렇군요. 혹시 특급 구로시오와 연관이 있을까요?\n
<bot> 네, \"한와 라이너\"는 특급 구로시오의 381계 전동차의 히네노 전차구 입선 회송 열차를 좌석 정리권을 필요로 하는 정원제 열차로서 운행되었습니다."
}
```
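Since each example is a single `text` string in which the speaker tokens mark the turns, a small parser can split it back into (speaker, utterance) pairs. This is an illustrative sketch only; the helper name is ours:

```python
import re

def split_turns(text):
    """Split a formatted example into (speaker, utterance) pairs."""
    # Each turn starts with <sys>, <usr> or <bot>; capture the token and
    # everything up to the next speaker token (or the end of the string).
    pattern = r"<(sys|usr|bot)>\s*(.*?)(?=<(?:sys|usr|bot)>|$)"
    return [(speaker, utterance.strip())
            for speaker, utterance in re.findall(pattern, text, flags=re.S)]

example = "<usr> 홀수 중 하나를 밝히세요.\n<sys> 트위터, 인스타그램, 텔레그램\n<bot> 텔레그램입니다."
print(split_turns(example))
# [('usr', '홀수 중 하나를 밝히세요.'), ('sys', '트위터, 인스타그램, 텔레그램'), ('bot', '텔레그램입니다.')]
```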
| 1,835 | [
[
-0.045745849609375,
-0.055267333984375,
0.020660400390625,
0.03839111328125,
-0.03631591796875,
-0.00051116943359375,
0.014007568359375,
-0.02630615234375,
0.051513671875,
0.0264129638671875,
-0.0377197265625,
-0.041900634765625,
-0.05230712890625,
-0.005413... |
Hyeon2/riffusion-musiccaps-dataset | 2023-07-15T15:43:17.000Z | [
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"music",
"region:us"
] | Hyeon2 | null | null | 2 | 23 | 2023-04-25T13:02:53 | ---
language: en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- text-to-image
pretty_name: riffusion manipulated google/musiccap
viewer: true
tags:
- music
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2521001438.24
num_examples: 20588
download_size: 2509138106
dataset_size: 2521001438.24
---
riffusion manipulated google/MusicCaps | 449 | [
[
-0.0268402099609375,
-0.049224853515625,
0.027191162109375,
0.019989013671875,
-0.0091400146484375,
0.0185546875,
-0.022796630859375,
-0.02197265625,
0.06982421875,
0.0650634765625,
-0.08233642578125,
-0.0166015625,
-0.026763916015625,
-0.0024890899658203125... |
Harsit/xnli2.0_assamese | 2023-04-26T19:01:07.000Z | [
"region:us"
] | Harsit | null | null | 0 | 23 | 2023-04-26T08:38:00 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
philschmid/sql-create-context-copy | 2023-05-01T10:37:47.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:table-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"SQL",
"code",
"NLP",
"text-to-sql",
"context-sql",
"spider",
"wikisql",
"sqlglot",
"region:us"
] | philschmid | null | null | 2 | 23 | 2023-05-01T10:37:03 | ---
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
- table-question-answering
language:
- en
tags:
- SQL
- code
- NLP
- text-to-sql
- context-sql
- spider
- wikisql
- sqlglot
pretty_name: sql-create-context
size_categories:
- 10K<n<100K
duplicated_from: b-mc2/sql-create-context
---
# Fork of [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context)
#### Overview
This dataset builds from [WikiSQL](https://huggingface.co/datasets/wikisql) and [Spider](https://huggingface.co/datasets/spider).
There are 78,577 examples of natural language queries, SQL CREATE TABLE statements, and SQL queries answering the question using the CREATE statement as context. This dataset was built with text-to-SQL LLMs in mind, intending to prevent the hallucination of column and table names often seen when models are trained on text-to-SQL datasets. The CREATE TABLE statement can often be copied and pasted from different DBMSs and provides table names, column names and their data types. By providing just the CREATE TABLE statement as context, we can hopefully provide better grounding for models without having to provide actual rows of data, limiting token usage and exposure to private, sensitive, or proprietary data.
#### Cleansing and Augmentation
Cleansing and data augmentation have been done on the combined WikiSQL and Spider data. I used [SQLGlot](https://github.com/tobymao/sqlglot) on queries from Spider and WikiSQL and parsed them into different tables and columns. I then inferred column data types based on the usage of `>` `<` operators as well as the use of `MIN()` `MAX()` `AVG()` `SUM()` on columns. While this isn't perfect, it increases the likelihood of inferring the correct datatype for a column; columns otherwise default to the VARCHAR type. These tables and columns are then used to generate CREATE TABLE statements using the inferred types. SQLGlot is used again to ensure both the SQL queries and CREATE TABLE statements parse without errors.
Some queries that do not have column names, e.g. SELECT * FROM table, have a default Id column added to the CREATE TABLE statement. Some other queries which use the generic `table` as the FROM table have instead been changed to a variation of `table_name_1` or some other number which is also reflected in the CREATE TABLE statement.
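The table- and column-extraction step described above can be reproduced with SQLGlot's expression tree. A rough sketch (not the author's exact pipeline), using one of the sample queries shown further down:

```python
import sqlglot
from sqlglot import exp

sql = (
    "SELECT T2.Theme FROM city AS T1 "
    "JOIN farm_competition AS T2 ON T1.City_ID = T2.Host_city_ID "
    "WHERE T1.Population > 1000"
)
parsed = sqlglot.parse_one(sql)

# Collect the table and column identifiers referenced by the query.
tables = {t.name for t in parsed.find_all(exp.Table)}
columns = {c.name for c in parsed.find_all(exp.Column)}
print(tables)   # {'city', 'farm_competition'}
print(columns)  # e.g. {'Theme', 'City_ID', 'Host_city_ID', 'Population'}
```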
#### TODO
- Further augment the data by converting queries and CREATE TABLE statements into different SQL dialects, this can be done with SQLGlot. Reference to the dialect might also be added to the question.
- Support other informative contexts beyond CREATE TABLE
Random sample:
```json
{
"question": "Please show the themes of competitions with host cities having populations larger than 1000.",
"context": "CREATE TABLE city (City_ID VARCHAR, Population INTEGER); CREATE TABLE farm_competition (Theme VARCHAR, Host_city_ID VARCHAR)",
"answer": "SELECT T2.Theme FROM city AS T1 JOIN farm_competition AS T2 ON T1.City_ID = T2.Host_city_ID WHERE T1.Population > 1000"
},
{
"question": "Please show the different statuses of cities and the average population of cities with each status.",
"context": "CREATE TABLE city (Status VARCHAR, Population INTEGER)",
"answer": "SELECT Status, AVG(Population) FROM city GROUP BY Status"
},
``` | 3,302 | [
[
-0.03814697265625,
-0.070068359375,
0.029449462890625,
0.0016841888427734375,
-0.0290985107421875,
-0.006282806396484375,
0.0072479248046875,
-0.0156097412109375,
0.040771484375,
0.06658935546875,
-0.036956787109375,
-0.05023193359375,
-0.0073394775390625,
0... |
HAERAE-HUB/KoInstruct-QA | 2023-05-05T13:28:25.000Z | [
"region:us"
] | HAERAE-HUB | null | null | 0 | 23 | 2023-05-05T11:28:02 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: type
dtype: string
- name: template
dtype: string
splits:
- name: train
num_bytes: 237493038
num_examples: 50276
download_size: 113325801
dataset_size: 237493038
---
# Dataset Card for "ko_instruct_ki_v0.1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 514 | [
[
-0.03546142578125,
-0.00872802734375,
0.013458251953125,
0.0180206298828125,
-0.020721435546875,
-0.01346588134765625,
0.0272979736328125,
-0.007434844970703125,
0.06005859375,
0.044525146484375,
-0.064208984375,
-0.05657958984375,
-0.036773681640625,
-0.027... |
wangrongsheng/icliniq-10k-en | 2023-05-07T07:35:44.000Z | [
"region:us"
] | wangrongsheng | null | null | 2 | 23 | 2023-05-07T07:34:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
skeskinen/TinyStories-Instruct-hf | 2023-05-17T18:36:50.000Z | [
"arxiv:2305.07759",
"region:us"
] | skeskinen | null | null | 3 | 23 | 2023-05-17T17:17:07 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2648754575
num_examples: 2476533
- name: validation
num_bytes: 26745785
num_examples: 25028
download_size: 1325495040
dataset_size: 2675500360
---
A description of this dataset can be found at https://arxiv.org/abs/2305.07759
Copied from roneneldan/TinyStoriesInstruct
Modified with:
```
# Importing ftfy.bad_codecs registers the "sloppy-" codecs used below.
import ftfy.bad_codecs
from datasets import Dataset, DatasetDict

# Read the raw text dumps and split them into individual stories on the <|endoftext|> marker.
train = open('./TinyStories-Instruct-train.txt', 'r', encoding='sloppy-windows-1252').read()
train = train.split('<|endoftext|>')
train = [l.strip() for l in train]
valid = open('./TinyStories-Instruct-valid.txt', 'r', encoding='sloppy-windows-1252').read()
valid = valid.split('<|endoftext|>')
valid = [l.strip() for l in valid]

# Wrap the story lists in a DatasetDict and save it in Arrow format.
dataset = DatasetDict({
    'train': Dataset.from_dict({'text': train }),
    'validation': Dataset.from_dict({'text': valid}),
})
dataset.save_to_disk('./TinyStories-Instruct')
``` | 991 | [
[
-0.01045989990234375,
-0.0222320556640625,
0.011962890625,
-0.019287109375,
-0.005096435546875,
-0.026580810546875,
-0.03485107421875,
-0.005817413330078125,
0.00337982177734375,
0.026123046875,
-0.04669189453125,
-0.037567138671875,
-0.01532745361328125,
0.... |
almanach/hc3_french_ood | 2023-06-05T10:19:19.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:fr",
"license:cc-by-sa-4.0",
"ChatGPT",
"Bing",
"LM Detection",
"Detection",
... | almanach | Human ChatGPT Comparison Corpus (HC3) Translated To French.
The translation is done by Google Translate API.
We also add the native french QA pairs from ChatGPT, BingGPT and FAQ pages.
This dataset was used in our TALN 2023 paper.
Towards a Robust Detection of Language Model-Generated Text: Is ChatGPT that easy to detect? | # TODO: Add BibTeX citation for our TALN 2023 paper:
Towards a Robust Detection of Language Model-Generated Text: Is ChatGPT that easy to detect?
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
journal={arXiv preprint arxiv:2301.07597}
year = "2023",
} | 1 | 23 | 2023-05-30T14:16:14 | ---
task_categories:
- text-classification
- question-answering
- sentence-similarity
- zero-shot-classification
language:
- en
- fr
size_categories:
- 10K<n<100K
tags:
- ChatGPT
- Bing
- LM Detection
- Detection
- OOD
license: cc-by-sa-4.0
---
Dataset card for the dataset used in :
## Towards a Robust Detection of Language Model-Generated Text: Is ChatGPT that easy to detect?
Paper: https://gitlab.inria.fr/wantoun/robust-chatgpt-detection/-/raw/main/towards_chatgpt_detection.pdf
Source Code: https://gitlab.inria.fr/wantoun/robust-chatgpt-detection
## Dataset Summary
#### Overview:
This dataset is made of two parts:
- First, an extension of the [Human ChatGPT Comparison Corpus (HC3) dataset](https://huggingface.co/datasets/Hello-SimpleAI/HC3) with French data automatically translated from the English source.
- Second, out-of-domain and adversarial French data have been gathered (human adversarial, BingGPT, and native French ChatGPT responses).
#### Details:
- We first format the data into three subsets: `sentence`, `question` and `full` following the original paper.
- We then extend the data by translating the English questions and answers to French.
- We provide native French ChatGPT responses to a sample of the translated questions.
- We added a subset with QA pairs from BingGPT.
- We included an adversarial subset with human-written answers in the style of conversational LLMs like Bing/ChatGPT.
## Available Subsets
### Out-of-domain:
- `hc3_fr_qa_chatgpt`: Translated French questions and native French ChatGPT answers pairs from HC3. This is the `ChatGPT-Native` subset from the paper.
- Features: `id`, `question`, `answer`, `chatgpt_answer`, `label`, `source`
- Size:
- test: `113` examples, `25592` words
- `qa_fr_binggpt`: French questions and BingGPT answers pairs. This is the `BingGPT` subset from the paper.
- Features: `id`, `question`, `answer`, `label`, `deleted_clues`, `deleted_sources`, `remarks`
- Size:
- test: `106` examples, `26291` words
- `qa_fr_binglikehuman`: French questions and human written BingGPT-like answers pairs. This is the `Adversarial` subset from the paper.
- Features: `id`, `question`, `answer`, `label`, `source`
- Size:
- test: `61` examples, `17328` words
- `faq_fr_gouv`: French FAQ questions and answers pairs from domain ending with `.gouv` from the MQA dataset (subset 'fr-faq-page'). https://huggingface.co/datasets/clips/mqa. This is the `FAQ-Gouv` subset from the paper.
- Features: `id`, `page_id`, `question_id`, `answer_id`, `bucket`, `domain`, `question`, `answer`, `label`
- Size:
- test: `235` examples, `22336` words
- `faq_fr_random`: French FAQ questions and answers pairs from random domain from the MQA dataset (subset 'fr-faq-page'). https://huggingface.co/datasets/clips/mqa. This is the `FAQ-Rand` subset from the paper.
- Features: `id`, `page_id`, `question_id`, `answer_id`, `bucket`, `domain`, `question`, `answer`, `label`
- Size:
- test: `4454` examples, `271823` words
### In-domain:
- `hc3_en_qa`: English questions and answers pairs from HC3.
- Features: `id`, `question`, `answer`, `label`, `source`
- Size:
- train: `68335` examples, `12306363` words
- validation: `17114` examples, `3089634` words
- test: `710` examples, `117001` words
- `hc3_en_sentence`: English answers split into sentences from HC3.
- Features: `id`, `text`, `label`, `source`
- Size:
- train: `455320` examples, `9983784` words
- validation: `113830` examples, `2510290` words
- test: `4366` examples, `99965` words
- `hc3_en_full`: English questions and answers pairs concatenated from HC3.
- Features: `id`, `text`, `label`, `source`
- Size:
- train: `68335` examples, `9982863` words
- validation: `17114` examples, `2510058` words
- test: `710` examples, `99926` words
- `hc3_fr_qa`: Translated French questions and answers pairs from HC3.
- Features: `id`, `question`, `answer`, `label`, `source`
- Size:
- train: `68283` examples, `12660717` words
- validation: `17107` examples, `3179128` words
- test: `710` examples, `127193` words
- `hc3_fr_sentence`: Translated French answers split into sentences from HC3.
- Features: `id`, `text`, `label`, `source`
- Size:
- train: `464885` examples, `10189606` words
- validation: `116524` examples, `2563258` words
- test: `4366` examples, `108374` words
- `hc3_fr_full`: Translated French questions and answers pairs concatenated from HC3.
- Features: `id`, `text`, `label`, `source`
- Size:
- train: `68283` examples, `10188669` words
- validation: `17107` examples, `2563037` words
- test: `710` examples, `108352` words
## How to load
```python
from datasets import load_dataset
dataset = load_dataset("almanach/hc3_multi", "hc3_fr_qa")
```
## Dataset Copyright
If the source datasets used in this corpus has a specific license which is stricter than CC-BY-SA, our products follow the same.
If not, they follow CC-BY-SA license.
| English Split | Source | Source License | Note |
|----------|-------------|--------|-------------|
| reddit_eli5 | [ELI5](https://github.com/facebookresearch/ELI5) | BSD License | |
| open_qa | [WikiQA](https://www.microsoft.com/en-us/download/details.aspx?id=52419) | [PWC Custom](https://paperswithcode.com/datasets/license) | |
| wiki_csai | Wikipedia | CC-BY-SA | [Wiki FAQ](https://en.wikipedia.org/wiki/Wikipedia:FAQ/Copyright) |
| medicine | [Medical Dialog](https://github.com/UCSD-AI4H/Medical-Dialogue-System) | Unknown| [Asking](https://github.com/UCSD-AI4H/Medical-Dialogue-System/issues/10)|
| finance | [FiQA](https://paperswithcode.com/dataset/fiqa-1) | Unknown | Asking by 📧 |
| FAQ | [MQA]( https://huggingface.co/datasets/clips/mqa) | CC0 1.0| |
| ChatGPT/BingGPT | | Unknown | This is ChatGPT/BingGPT generated data. |
| Human | | CC-BY-SA | |
## Citation
```bibtex
@proceedings{towards-a-robust-2023-antoun,
title = "Towards a Robust Detection of Language Model-Generated Text: Is ChatGPT that easy to detect?",
editor = "Antoun, Wissam and
Mouilleron, Virginie and
Sagot, Benoit and
Seddah, Djam{\'e}",
month = "6",
year = "2023",
address = "Paris, France",
publisher = "ATALA",
url = "https://gitlab.inria.fr/wantoun/robust-chatgpt-detection/-/raw/main/towards_chatgpt_detection.pdf",
}
```
```bibtex
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
journal={arXiv preprint arxiv:2301.07597}
year = "2023",
url ="https://arxiv.org/abs/2301.07597"
}
``` | 7,025 | [
[
-0.03277587890625,
-0.073974609375,
0.02020263671875,
0.002918243408203125,
-0.004070281982421875,
-0.0131988525390625,
-0.0157928466796875,
-0.0171356201171875,
0.006336212158203125,
0.039337158203125,
-0.03558349609375,
-0.04364013671875,
-0.042205810546875,
... |
Amirkid/MedQuad-dataset | 2023-06-06T15:08:50.000Z | [
"region:us"
] | Amirkid | null | null | 1 | 23 | 2023-06-06T15:08:42 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 21658852
num_examples: 32800
download_size: 8756796
dataset_size: 21658852
---
# Dataset Card for "MedQuad-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 360 | [
[
-0.041046142578125,
-0.018768310546875,
0.01800537109375,
0.003528594970703125,
-0.0200347900390625,
0.00400543212890625,
0.02642822265625,
-0.003810882568359375,
0.062469482421875,
0.03656005859375,
-0.054840087890625,
-0.055389404296875,
-0.037506103515625,
... |
Nadav/pixel_glue_mnli | 2023-06-13T02:17:07.000Z | [
"region:us"
] | Nadav | null | null | 0 | 23 | 2023-06-13T02:11:30 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
splits:
- name: train
num_bytes: 5503541554.25
num_examples: 392702
- name: validation
num_bytes: 278770933.125
num_examples: 19647
download_size: 5641852302
dataset_size: 5782312487.375
---
# Dataset Card for "pixel_glue_mnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 567 | [
[
-0.035919189453125,
-0.032623291015625,
0.010986328125,
0.01546478271484375,
-0.004314422607421875,
-0.0007615089416503906,
0.0231475830078125,
0.0032749176025390625,
0.06781005859375,
0.020721435546875,
-0.06072998046875,
-0.05413818359375,
-0.033294677734375,
... |
renumics/mnist-outlier | 2023-06-30T20:08:34.000Z | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-nist",
"language:en",
"license:mit",
"region:us"
] | renumics | null | null | 0 | 23 | 2023-06-14T07:28:06 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-nist
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: mnist
pretty_name: MNIST
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
- name: embedding_foundation
sequence: float32
- name: embedding_ft
sequence: float32
- name: outlier_score_ft
dtype: float64
- name: outlier_score_foundation
dtype: float64
- name: nn_image
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
splits:
- name: train
num_bytes: 404136444.0
num_examples: 60000
download_size: 472581433
dataset_size: 404136444.0
---
# Dataset Card for "mnist-outlier"
📚 This dataset is an enriched version of the [MNIST Dataset](http://yann.lecun.com/exdb/mnist/).
The workflow is described in the medium article: [Changes of Embeddings during Fine-Tuning of Transformers](https://medium.com/@markus.stoll/changes-of-embeddings-during-fine-tuning-c22aa1615921).
## Explore the Dataset
The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) allows you to explore this dataset. You can find a Hugging Face Space running Spotlight with this dataset here: <https://huggingface.co/spaces/renumics/mnist-outlier>.

Or you can explore it locally:
```python
!pip install renumics-spotlight datasets
from renumics import spotlight
import datasets
ds = datasets.load_dataset("renumics/mnist-outlier", split="train")
df = ds.rename_columns({"label":"labels"}).to_pandas()
df["label_str"] = df["labels"].apply(lambda x: ds.features["label"].int2str(x))
dtypes = {
"nn_image": spotlight.Image,
"image": spotlight.Image,
"embedding_ft": spotlight.Embedding,
"embedding_foundation": spotlight.Embedding,
}
spotlight.show(
df,
dtype=dtypes,
layout="https://spotlight.renumics.com/resources/layout_pre_post_ft.json",
)
``` | 2,420 | [
[
-0.035919189453125,
-0.03997802734375,
0.01102447509765625,
0.0185089111328125,
-0.0217742919921875,
-0.0012493133544921875,
-0.0188446044921875,
-0.0008301734924316406,
0.05975341796875,
0.05267333984375,
-0.05523681640625,
-0.044921875,
-0.04534912109375,
... |
santoshtyss/us-court-cases | 2023-07-03T14:57:31.000Z | [
"region:us"
] | santoshtyss | null | null | 0 | 23 | 2023-07-03T13:33:16 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 68561500135
num_examples: 4430756
- name: validation
num_bytes: 369842972
num_examples: 100000
download_size: 15853634750
dataset_size: 68931343107
---
# Dataset Card for "us-court-cases"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 442 | [
[
-0.0224456787109375,
-0.01898193359375,
0.03594970703125,
0.0112762451171875,
-0.040985107421875,
-0.0030307769775390625,
0.0242156982421875,
0.004512786865234375,
0.047271728515625,
0.038116455078125,
-0.0390625,
-0.058837890625,
-0.03369140625,
-0.03121948... |
renumics/f1_demo_dataset | 2023-07-19T10:05:28.000Z | [
"region:us"
] | renumics | null | null | 0 | 23 | 2023-07-06T07:07:39 | ---
dataset_info:
features:
- name: Time
dtype: duration[ns]
- name: Driver
dtype: string
- name: DriverNumber
dtype: string
- name: LapTime
dtype: float64
- name: LapNumber
dtype: float64
- name: Stint
dtype: float64
- name: PitOutTime
dtype: duration[ns]
- name: PitInTime
dtype: duration[ns]
- name: Sector1Time
dtype: float64
- name: Sector2Time
dtype: float64
- name: Sector3Time
dtype: float64
- name: Sector1SessionTime
dtype: duration[ns]
- name: Sector2SessionTime
dtype: duration[ns]
- name: Sector3SessionTime
dtype: duration[ns]
- name: SpeedI1
dtype: float64
- name: SpeedI2
dtype: float64
- name: SpeedFL
dtype: float64
- name: SpeedST
dtype: float64
- name: IsPersonalBest
dtype: bool
- name: Compound
dtype: string
- name: TyreLife
dtype: float64
- name: FreshTyre
dtype: bool
- name: Team
dtype: string
- name: LapStartTime
dtype: duration[ns]
- name: LapStartDate
dtype: timestamp[ns]
- name: TrackStatus
dtype: string
- name: Position
dtype: float64
- name: Deleted
dtype: bool
- name: DeletedReason
dtype: string
- name: FastF1Generated
dtype: bool
- name: IsAccurate
dtype: bool
- name: speed
sequence:
sequence: float64
- name: throttle
sequence:
sequence: float64
- name: drs
sequence:
sequence: float64
- name: nGear
sequence:
sequence: float64
- name: brake
sequence:
sequence: float64
- name: x
sequence:
sequence: float64
- name: y
sequence:
sequence: float64
- name: z
sequence:
sequence: float64
- name: distance_driver
sequence:
sequence: float64
- name: speed_emb
sequence: float64
- name: brake_emb
sequence: float64
- name: throttle_emb
sequence: float64
- name: x_emb
dtype: float64
- name: y_emb
dtype: float64
- name: z_emb
dtype: float64
- name: gear_vis
dtype: string
- name: speed_vis
dtype: string
- name: portrait
dtype: string
- name: brake_emb_reduced
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 22426400
num_examples: 201
download_size: 15371945
dataset_size: 22426400
---
# Dataset Card for "f1_demo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,510 | [
[
-0.0411376953125,
-0.0216522216796875,
0.0095367431640625,
0.023773193359375,
-0.01605224609375,
0.003978729248046875,
0.012603759765625,
-0.00007462501525878906,
0.05035400390625,
0.01250457763671875,
-0.072998046875,
-0.055389404296875,
-0.0259552001953125,
... |
chromadb/state_of_the_union | 2023-07-07T18:13:04.000Z | [
"region:us"
] | chromadb | null | null | 0 | 23 | 2023-07-07T18:12:59 | ---
dataset_info:
features:
- name: id
dtype: string
- name: embedding
sequence: float64
- name: metadata
struct:
- name: source
dtype: string
- name: document
dtype: string
splits:
- name: data
num_bytes: 556545
num_examples: 42
download_size: 519613
dataset_size: 556545
---
# Dataset Card for "state_of_the_union"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 501 | [
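The features above (id, embedding, metadata, document) match the arguments of Chroma's `collection.add`, so the rows can be loaded back into a local collection. A minimal sketch; the split name `data` comes from the metadata above, and the collection name is our own choice:

```python
import chromadb
from datasets import load_dataset

rows = load_dataset("chromadb/state_of_the_union", split="data")

client = chromadb.Client()
collection = client.create_collection("state_of_the_union")
collection.add(
    ids=rows["id"],
    embeddings=rows["embedding"],
    metadatas=rows["metadata"],
    documents=rows["document"],
)
print(collection.count())  # 42, per the split size above
```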
[
-0.033355712890625,
-0.015625,
0.0262451171875,
0.0172119140625,
-0.0234222412109375,
0.01087188720703125,
0.0261077880859375,
0.0128173828125,
0.0723876953125,
0.0230560302734375,
-0.0382080078125,
-0.042449951171875,
-0.031982421875,
-0.01210784912109375,
... |
zxbsmk/webnovel_cn | 2023-08-09T09:39:49.000Z | [
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:zh",
"license:mit",
"doi:10.57967/hf/0877",
"region:us"
] | zxbsmk | null | null | 41 | 23 | 2023-07-09T14:33:25 | ---
license: mit
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 10M<n<100M
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
---
## Contents
Contains about **21.7M** Chinese instruction examples for training novel generation, extracted from 12,560 web novels (novel_json_tokens512.zip). ~~Download link: https://pan.baidu.com/s/1TorBMbrqxrn6odRF0PJBVw
Extraction code: jlh3~~
Also included is a **50k**-example subset extracted from it (novel_cn_token512_50k.json), in which both inputs and outputs are no longer than 512 tokens.
## Examples
Data are generated from the original novel text according to the five instruction types below.
The text consists of consecutive sentences randomly sampled from the novels.
1. Given the title, generate the synopsis directly.
2. Given the title and synopsis, generate the opening.
3. Given the synopsis and a passage of text, generate the continuation.
4. Given the title and a passage of text, generate the continuation.
5. Given a passage of text, generate the continuation.
```
{
"instruction": "小说名:无限恐怖\n节选正文:\n“不行,中校,我们必须把这里的情况和这些虫族的拍摄画面传回地球去,是的,我们人类已经到了最危险的关头,地球上所有的工业重工业完全应该按照战时情况进行任务布置,我们人类的工业力根本不是虫族能够想象的,一年,不,半年我们就能有一万艘宇宙战舰升空,如果全球一共进行建造的话,我们一定……”一名少校参谋长看着那密密麻麻的宇宙虫海,他神色激动的对楚轩说道。\n楚轩默默咬了一口苹果,他说道:“问题一……一万艘宇宙战舰需要多少熟练的战舰船员?需要多少士兵在战舰上服役,半年里我们可以聚集齐全吗?问题二……虫族会给地球联邦政府留下半年的时间吗?一旦我们被消灭干净,则联邦政府将会随即遭受到这些宇宙虫族的太空袭击,直到我们的太空优势完全丧失之后,虫族就能放心移动陨石进行攻击,甚至是将大量陆地虫子带上地球,那时我们人类能够凭肉体与虫子对抗吗?问题三……我们现在该如何把信息传给地球?”\n这名参谋长顿时就不说话了,而旁边一名上尉忽然说道:“在探戈幽灵星上的七座要塞中,每一座要塞都有大功率粒子喷射联络器,这种联络器连我们自己的科技都无法屏蔽,我想虫子也应该不可能继续屏蔽吧,毕竟那只是粒子而已,不是电流信号这些……”续篇:",
"input": "",
"output": "楚轩咬了一口苹果说道:“没错,所以我们才要登陆到探戈幽灵星,这是我们的目的,同时为了保存宇宙舰队的战斗力,以最大的平原军事基地,风暴要塞为中心,所有舰队都要降落到地面上,然后层层布防,在援军到来前,在地球联邦政府做好了战争准备前,我们必须像颗钉子一样钉死在这里,让虫族无法忽视我们而去进攻地球……怎么样?我们很可能会死在这里,面对数十万,数百万,甚至数千万的虫子前扑后续的攻击,但是我们的名字一定会载入史册……把刚才记录下来的我的话发向所有舰队,然后告诉他们,作为逃命被自己人杀掉,或者是成为英雄回到地球,让他们明白我们的处境,然后自己去选择吧。”\n周围的参谋和军官们同时向楚轩四人敬了一礼,接着那参谋长激动的向某处平台走了过去。\n郑吒默默走到了楚轩身边,他小声的说道:“楚轩……你什么时候变得了那么会煽情了啊?”\n楚轩却是理所当然的说道:“将自己归于多数人的一边,以前的你不是这样评价我的吗?没错,将自己归于多数人的一边,这是做任何大事都要先完成的第一步……已经让他们知道命运和我们连接在了一起,接着就只需要好好的安排下局面与等待‘主神’的任务就可以了,时间还有三天……”\n时间还有三天,在当天中午的时候,舰队群的预警舰果然发现了在探戈幽灵星后方徘徊着另一颗巨大圆球,它仿佛卫星一样座落在探戈幽灵星的近地轨道上,而随着联合舰队的到来,这只巨大圆球上果然也飞出了数以万计的宇宙虫子,这下联合舰队果然却如楚轩的预言那般了,前有埋伏,后有追兵,唯一的一条路就只剩下降落到探戈幽灵星上了。"
},
{
"instruction": "给定小说简介和节选,续写小说",
"input": "小说简介:主人公郑吒自从失去了自己最亲密的青梅竹马后,对这种反复而又无聊的现代生活已经感到十分的厌倦。正在这时,他发现电脑屏幕上弹出了一段信息:“想明白生命的意义吗?想真正的……活着吗?”在按下YES后,一切都改变了。他进入了一个恐怖片的轮回世界——主神空间……在主神空间里,只有不停地变强、不停地进化,才能闯过那一关关的恐怖片,才能活下去。郑吒,怎样才能活下去?是杀死与自己对抗的所有人,走向孤独的王者之道?还是和自己的伙伴奋战到死,以仁义之道来度过劫难?其实,所有的一切也只是为了活下去。直到他们发现了主神空间的秘密……究竟……谁才是暗中真正的威胁?一切尽在无限恐怖!\n\n\n\n上半部:初始\n节选篇章:“什么叫作事情已经发展到了这一步!”郑吒大声的喧哗吼叫着,他冲着楚轩吼道:“你这个白痴!我一直都在重复着同样一句话,我们是同伴啊,为什么不可以事先把你的猜测或者布局告诉我们呢?每次都把我当成白痴一样,每次都在算计我之后轻描淡写的说一句我骗你的,你究竟把我当成什么人?”\n“呃……大概是可以欺骗的凡人智慧者吧。”楚轩想了一下直接说道。\n“……不要拦着我!王侠,伊莫顿,你们放开我!我要干掉这个白痴!”\n喧哗又一次发生了,片刻之后再次安静下来时,楚轩本已略微肿胀的脸终于是彻底肿胀了起来,而看他的样子依然是一丁点疼痛都感觉不到,依然悠闲的吃着黄瓜。\n“……总之,事情已经发展到了这一步……”郑吒摸着额头,很痛苦的说道:“那么我们必须先要得到王女鄂亚的关押位置所在,当然了,直接打进首都也其实也不是不可以,但是从原剧情来看,似乎只有那个名叫邪影者的魔法师与国王歌巴塔尔知道她的关押位置,为了不打草惊蛇,还是花些时间慢慢渗透进去查找线索吧。”\n“赵樱空,你负责潜行进王城内,能不杀人尽量不杀人,若是遇到危险就转换人格,或者联络我们……伊莫顿,你负责到王城外的村庄去打探消息,尽量不要引人注意,若是可以,使用你的法术控制些人也行,照刚才我说的,遇到危险可以杀人,也可以联络我们,楚轩,王侠,你们两人就在附近待机,一旦发现有任何情况都要提供支援。”",
"output": "众人都点着头,王侠忽然好奇的问道:“你呢?你打算干什么去?”\n“我会把整个王城的情况给摸清楚,关键是王城里可能隐藏着的力量,我很在意啊,伊莫顿的感觉从某些方面来讲比我们都更出色,毕竟他是死亡的生物……或许,这场恐怖片世界并不像我们所想的那样轻松呢。”郑吒说到这里时叹了口气,他接着又看向了那首都位置。\n“时间还有两天多,我们尽量在三天内完成这部恐怖片世界吧……希望另一边的幼龙能够赶快成长。”郑吒边说话,边驾驶绿魔滑板就向地面飞去,渐渐的,他离众人已经越来越远了。\n此刻,在离王城首都极遥远外的小村处,主角伊拉贡正极其狼狈的奔跑在树丛中,跟随在他身边的还有他的舅舅……非常不幸的,逃跑没多久,他的表哥就失散在了这片森林中,或者说是被那些士兵们给抓住了也说不定。\n更加不幸的是,那名中年武士明显已经落败,不然不会多出那么多士兵紧紧追着他们,比起在村庄的时候,士兵的数量又更加的多了,至少有十多名士兵在他们不远处紧紧追赶。\n“你到底偷了什么东西啊!为什么会有这么多士兵来追赶你呢?”伊拉贡的舅舅气喘吁吁的问道,他已经跑得没什么精力去发怒了。\n“……一个龙蛋,不是偷的,这是我从森林里拣来的!”伊拉贡虽然也是跑得筋疲力尽,但他还在坚持着最后的底线,依然不停辩解着。\n“龙蛋?那可是国王的东西啊!而且还是孵化出幼龙的龙蛋!你这个白痴,你这样会害死大家的!”伊拉贡的舅舅一听此话就气急败坏的叫道,但他还是不停向前跑去,不敢有丁点停顿,因为在他们背后不停的追赶着十多名士兵。\n“在那里!看到他们了!他们在那里!”"
}
```
## Fields:
```
instruction: the instruction
input: the input
output: the output
```
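A minimal sketch for reading the 50k subset once `novel_cn_token512_50k.json` has been downloaded; the file is assumed to be a JSON list of records with the three fields above:

```python
import json

with open("novel_cn_token512_50k.json", encoding="utf-8") as f:
    records = json.load(f)

print(len(records))
sample = records[0]
print(sample["instruction"][:60])
print(sample["output"][:60])
```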
## Usage Restrictions
This dataset and any derivatives generated from it may only be used for research purposes; commercial use and any other use that may cause harm to society are prohibited.
This dataset does not represent the position, interests, or views of any party and is unrelated to any kind of claim by any group. This project assumes no responsibility for any damage or dispute arising from the use of this dataset.
Join group via https://t.me/+JbovpBG6-gBiNDI1 | 3,694 | [
[
-0.0635986328125,
-0.034454345703125,
0.0301666259765625,
0.017425537109375,
-0.0360107421875,
-0.018890380859375,
-0.0125274658203125,
-0.0254364013671875,
0.040008544921875,
0.03570556640625,
-0.03253173828125,
-0.037322998046875,
-0.044158935546875,
0.006... |
ds4sd/PubTabNet_OTSL | 2023-08-31T15:57:31.000Z | [
"task_categories:object-detection",
"task_categories:table-to-text",
"size_categories:10K<n<100K",
"license:other",
"table-structure-recognition",
"table-understanding",
"PDF",
"arxiv:2305.03393",
"region:us"
] | ds4sd | null | null | 1 | 23 | 2023-08-10T07:36:03 | ---
license: other
pretty_name: PubTabNet-OTSL
size_categories:
- 10K<n<100K
tags:
- table-structure-recognition
- table-understanding
- PDF
task_categories:
- object-detection
- table-to-text
---
# Dataset Card for PubTabNet_OTSL
## Dataset Description
- **Homepage:** https://ds4sd.github.io
- **Paper:** https://arxiv.org/pdf/2305.03393
### Dataset Summary
This dataset is a conversion of the original [PubTabNet](https://developer.ibm.com/exchanges/data/all/pubtabnet/) into the OTSL format presented in our paper "Optimized Table Tokenization for Table Structure Recognition". The dataset includes the original annotations alongside new additions.
### Dataset Structure
* cells: original dataset cell groundtruth (content).
* otsl: new reduced table structure token format
* html: original dataset groundtruth HTML (structure).
* html_restored: generated HTML from OTSL.
* cols: grid column length.
* rows: grid row length.
* image: PIL image
### OTSL Vocabulary:
**OTSL**: new reduced table structure token format
More information on the OTSL table structure format and its concepts can be found in our paper.
The format of this dataset extends the work presented in that paper and introduces slight modifications:
* "fcel" - cell that has content in it
* "ecel" - cell that is empty
* "lcel" - left-looking cell (to handle horizontally merged cells)
* "ucel" - up-looking cell (to handle vertically merged cells)
* "xcel" - 2d span cells, in this dataset - covers entire area of a merged cell
* "nl" - new line token
### Data Splits
The dataset provides two splits:
- `train`
- `val`
## Additional Information
### Dataset Curators
The dataset is converted by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Maksym Lysak, [@maxmnemonic](https://github.com/maxmnemonic)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Nikos Livathinos, [@nikos-livathinos](https://github.com/nikos-livathinos)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Citation Information
```bib
@misc{lysak2023optimized,
title={Optimized Table Tokenization for Table Structure Recognition},
author={Maksym Lysak and Ahmed Nassar and Nikolaos Livathinos and Christoph Auer and Peter Staar},
year={2023},
eprint={2305.03393},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| 2,528 | [
[
-0.017120361328125,
-0.0267791748046875,
0.032012939453125,
-0.0015954971313476562,
-0.044342041015625,
-0.008544921875,
0.0008549690246582031,
-0.02301025390625,
0.035400390625,
0.0193328857421875,
-0.0200347900390625,
-0.06884765625,
-0.013641357421875,
0.... |
percins/IN-ABS | 2023-08-11T12:53:05.000Z | [
"region:us"
] | percins | null | null | 0 | 23 | 2023-08-11T12:51:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 160084476
num_examples: 5346
- name: validation
num_bytes: 22684426
num_examples: 712
- name: test
num_bytes: 30578218
num_examples: 1070
download_size: 103908520
dataset_size: 213347120
---
# Dataset Card for "IN-ABS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 726 | [
[
-0.049224853515625,
-0.00286102294921875,
0.0115203857421875,
0.01030731201171875,
-0.02044677734375,
0.0128631591796875,
0.03179931640625,
-0.0289154052734375,
0.0599365234375,
0.0301055908203125,
-0.05426025390625,
-0.04791259765625,
-0.0252227783203125,
-... |
tyzhu/v1.1_id0.2_context_instruction_tuning | 2023-08-16T11:38:01.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 23 | 2023-08-16T08:54:00 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
- name: context
dtype: string
- name: template_used
dtype: string
splits:
- name: train
num_bytes: 1154915040.1878934
num_examples: 437288
- name: eval_context
num_bytes: 38006832.85245361
num_examples: 13944
- name: eval_id_context
num_bytes: 10843981
num_examples: 5976
download_size: 237906027
dataset_size: 1203765854.040347
---
# Dataset Card for "v1.1_id0.2_context_instruction_tuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 793 | [
[
-0.04071044921875,
-0.03399658203125,
0.003498077392578125,
0.0286102294921875,
-0.018280029296875,
-0.03302001953125,
0.0093231201171875,
-0.004505157470703125,
0.040313720703125,
0.0343017578125,
-0.08453369140625,
-0.053253173828125,
-0.0282745361328125,
... |
loremipsum3658/sick-br | 2023-08-21T13:46:32.000Z | [
"region:us"
] | loremipsum3658 | null | null | 0 | 23 | 2023-08-21T13:46:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: pair_ID
dtype: int64
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: entailment_label
dtype: string
- name: relatedness_score
dtype: float64
- name: entailment_AB
dtype: string
- name: entailment_BA
dtype: string
- name: sentence_A_original
dtype: string
- name: sentence_B_original
dtype: string
- name: sentence_A_dataset
dtype: string
- name: sentence_B_dataset
dtype: string
- name: SemEval_set
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2196243
num_examples: 6887
- name: test
num_bytes: 470001
num_examples: 1477
- name: validation
num_bytes: 470022
num_examples: 1476
download_size: 1217241
dataset_size: 3136266
---
# Dataset Card for "sick-br"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,180 | [
[
-0.0265655517578125,
-0.0222320556640625,
0.00818634033203125,
0.02655029296875,
-0.0252685546875,
0.002471923828125,
0.0257568359375,
-0.0239410400390625,
0.0732421875,
0.023590087890625,
-0.051177978515625,
-0.046630859375,
-0.035888671875,
-0.001225471496... |
argilla/cloud_assistant_questions | 2023-08-30T11:46:23.000Z | [
"region:us"
] | argilla | null | null | 0 | 23 | 2023-08-25T09:48:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 16707.87786259542
num_examples: 196
- name: test
num_bytes: 5626.12213740458
num_examples: 66
download_size: 12576
dataset_size: 22334.0
---
# Dataset Card for "cloud_assistant_questions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 570 | [
[
-0.055877685546875,
-0.04888916015625,
0.0284423828125,
0.0149078369140625,
-0.004241943359375,
-0.0189056396484375,
0.02618408203125,
-0.0106201171875,
0.05694580078125,
0.04595947265625,
-0.07135009765625,
-0.0285186767578125,
-0.028076171875,
-0.014411926... |
StudentLLM/Open-Wyvern-74k | 2023-09-06T00:24:42.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | StudentLLM | null | null | 2 | 23 | 2023-08-31T11:41:09 | ---
task_categories:
- text-classification
- question-answering
- summarization
- conversational
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
<p align="center"><img src="https://cdn-uploads.huggingface.co/production/uploads/63e087b6a98d931aa90c1b9c/jm4fCY9DMGDxDRyhIeDZh.jpeg"></p>
# The Wyvern 🐉 Dataset
Let's introduce the **Wyvern 🐉** dataset, the new combination of datasets([Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca),
[Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), [airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1),
[Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k))!
We have integrated high-quality datasets, following the claim that quality matters more than quantity.
In addition, we have removed duplicate entries across the datasets to improve overall quality, since each source dataset contains some data contamination.
Please see below for more details about the dataset!
# Dataset Details
The **Wyvern 🐉** dataset is a mixture of several datasets (Open-Orca, Open-Platypus, airoboros, Dolly), as mentioned above.
The specific configuration of the dataset is as follows.
(Open-Orca GPT-4 answered dataset was sampled using stratified sampling)
- **Open-Platypus(100%) + airoboros(100%) + Open-Orca(GPT-4)(5%)(stratified sampled) + Dolly-15k(100%)**
|Dataset Name|Sampled Size(ratio)|Deduped Size|License Type|
|---|---|---|---|
|[Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)|24.9k(100%)|16.8k|None|
|[airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1)|36.3k(100%)|11k|apache-2.0|
|[Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca)|999.9k → 49.7k(5%)|35.6k|MIT|
|[Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)|15k(100%)|11k|cc-by-sa-3.0|
After the deduplication process, the size of the combined dataset is reduced from 125k to 74k! (125k → 74k)
# Data Deduplication
We referred to Open-Platypus's [data similarity check code](https://github.com/arielnlee/Platypus/blob/main/data_pipeline/data_similarity.ipynb) to remove duplicated data.
The specific deduplication code will be uploaded soon; a rough sketch of the general idea is shown below.
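The sketch compares sentence-embedding similarities to flag near-duplicates (not the authors' exact procedure), assuming `sentence-transformers` is installed:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder would do

instructions = [
    "What is the capital of France?",
    "What's France's capital city?",
    "Name three primary colors.",
]
emb = model.encode(instructions, convert_to_tensor=True, normalize_embeddings=True)

# Keep an item only if it is not too similar to an item we already kept.
threshold = 0.9
kept = []
for i in range(len(instructions)):
    if not any(float(util.cos_sim(emb[i], emb[j])) > threshold for j in kept):
        kept.append(i)
deduped = [instructions[i] for i in kept]
print(deduped)
```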
# Citations
```
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://https://huggingface.co/Open-Orca/OpenOrca},
}
```
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
``` | 3,334 | [
[
-0.036834716796875,
-0.04547119140625,
-0.006504058837890625,
-0.0014677047729492188,
-0.0235748291015625,
-0.009429931640625,
0.003627777099609375,
-0.033966064453125,
0.0411376953125,
0.04510498046875,
-0.032989501953125,
-0.041778564453125,
-0.030746459960937... |
Kriyans/ner | 2023-10-09T12:44:11.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | Kriyans | null | null | 0 | 23 | 2023-08-31T12:37:31 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: wnut-2017-emerging-and-rare-entity
pretty_name: WNUT 17
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-corporation
'2': I-corporation
'3': B-creative-work
'4': I-creative-work
'5': B-group
'6': I-group
'7': B-location
'8': I-location
'9': B-person
'10': I-person
'11': B-product
'12': I-product
config_name: wnut_17
splits:
- name: train
num_bytes: 1078379
num_examples: 3394
- name: validation
num_bytes: 259383
num_examples: 1009
- name: test
num_bytes: 405536
num_examples: 1287
download_size: 800955
dataset_size: 1743298
---
# Dataset Card for "wnut_17"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://noisy-text.github.io/2017/emerging-rare-entities.html](http://noisy-text.github.io/2017/emerging-rare-entities.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.80 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 2.55 MB
### Dataset Summary
WNUT 17: Emerging and Rare entity recognition
This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.
Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation),
but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms.
Take for example the tweet “so.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve.
This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text.
The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 0.80 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 2.55 MB
An example of 'train' looks as follows.
```
{
"id": "0",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
"tokens": ["@paulwalk", "It", "'s", "the", "view", "from", "where", "I", "'m", "living", "for", "two", "weeks", ".", "Empire", "State", "Building", "=", "ESB", ".", "Pretty", "bad", "storm", "here", "last", "evening", "."]
}
```
### Data Fields
The data fields are the same among all splits:
- `id` (`string`): ID of the example.
- `tokens` (`list` of `string`): Tokens of the example text.
- `ner_tags` (`list` of class labels): NER tags of the tokens (using IOB2 format), with possible values:
- 0: `O`
- 1: `B-corporation`
- 2: `I-corporation`
- 3: `B-creative-work`
- 4: `I-creative-work`
- 5: `B-group`
- 6: `I-group`
- 7: `B-location`
- 8: `I-location`
- 9: `B-person`
- 10: `I-person`
- 11: `B-product`
- 12: `I-product`
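The integer tags can be converted back to their string names through the `ClassLabel` feature. A small sketch, using the original `wnut_17` dataset id (this mirror's own repository id may differ):

```python
from datasets import load_dataset

ds = load_dataset("wnut_17", split="train")
tag_names = ds.features["ner_tags"].feature.names

example = ds[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    if tag != 0:  # skip the "O" tag
        print(token, "->", tag_names[tag])
# e.g. Empire -> B-location, State -> I-location, Building -> I-location, ESB -> B-location
```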
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 3394| 1009|1287|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{derczynski-etal-2017-results,
title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition",
author = "Derczynski, Leon and
Nichols, Eric and
van Erp, Marieke and
Limsopatham, Nut",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W17-4418",
doi = "10.18653/v1/W17-4418",
pages = "140--147",
abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.
Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization),
but recall on them is a real problem in noisy text - even among annotators.
This drop tends to be due to novel entities and surface forms.
Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'}
hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities,
and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the
ability of participating entries to detect and classify novel and emerging named entities in noisy text.",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@stefan-it](https://github.com/stefan-it), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu) for adding this dataset. | 9,044 | [
[
-0.0511474609375,
-0.050506591796875,
0.0137786865234375,
0.00962066650390625,
-0.0213165283203125,
0.013763427734375,
-0.03662109375,
-0.057159423828125,
0.050994873046875,
0.026611328125,
-0.04864501953125,
-0.0631103515625,
-0.043670654296875,
0.010993957... |
sdadas/gpt-exams | 2023-09-09T12:06:12.000Z | [
"task_categories:question-answering",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:pl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | sdadas | null | null | 0 | 23 | 2023-09-09T11:25:39 | ---
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- question-answering
pretty_name: GPT-exams
dataset_info:
features:
- name: _id
dtype: int32
- name: question
dtype: string
- name: answer
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 17237681
num_examples: 8131
---
# GPT-exams
### Dataset summary
The dataset contains 8131 multi-domain question-answer pairs. It was created semi-automatically using the `gpt-3.5-turbo-0613` model available in the OpenAI API. The process of building the dataset was as follows:
1. We manually prepared a list of 409 university-level courses from various fields. For each course, we instructed the model with the prompt: "Wygeneruj 20 przykładowych pytań na egzamin z [nazwa przedmiotu]" (Generate 20 sample questions for the [course name] exam).
2. We then parsed the outputs of the model to extract individual questions and performed their deduplication.
3. In the next step, we requested the model to generate the answer to each of the collected questions. We used the following prompt: "Odpowiedz na następujące pytanie z dziedziny [nazwa przedmiotu]: [treść pytania]" (Answer the following question from [course name]: [question content]). Along with the prompt, we also sent the following system message: "Jesteś ekspertem w dziedzinie [nazwa przedmiotu]. Udzielasz specjalistycznych i wyczerpujących odpowiedzi na pytania." (You are an expert in [course name]. You provide knowledgeable and comprehensive answers to questions).
4. In the last step, we manually removed from the dataset the cases in which the model refused to answer the question. We searched for occurrences of phrases such as "model języka" (language model), "nie jestem" (I'm not), or "nie mogę" (I can't).
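A minimal sketch of step 3, assuming the current `openai` Python client (the original generation used whichever client was available in 2023); the model name and the Polish prompt templates are taken from the description above:
```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_question(domain: str, question: str) -> str:
    # System message and user prompt follow the Polish templates quoted above.
    system_msg = (
        f"Jesteś ekspertem w dziedzinie {domain}. "
        "Udzielasz specjalistycznych i wyczerpujących odpowiedzi na pytania."
    )
    user_msg = f"Odpowiedz na następujące pytanie z dziedziny {domain}: {question}"
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    )
    return response.choices[0].message.content
```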
### Data Instances
Example instance:
```
{
"_id": 2338,
"domain": "wzorców projektowych w oprogramowaniu",
"question": "Co to jest dependency injection i jak może być wykorzystane w kontekście wzorców projektowych?",
"answer": "Dependency injection (DI) to technika wstrzykiwania zależności, która polega na dostarczaniu obiektowi (...)"
}
```
### Data Fields
- _id: record id
- question: question text
- answer: answer text
- domain: name of the course / field / domain
| 2,368 | [
[
-0.052703857421875,
-0.08056640625,
0.0419921875,
-0.01398468017578125,
-0.00055694580078125,
-0.01537322998046875,
-0.00489044189453125,
-0.00528717041015625,
-0.006267547607421875,
0.045654296875,
-0.0513916015625,
-0.038909912109375,
-0.0233306884765625,
... |
shnl/qg_vicoqa | 2023-09-13T04:14:23.000Z | [
"region:us"
] | shnl | null | null | 0 | 23 | 2023-09-13T03:58:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Binaryy/cars-for-sale | 2023-09-16T13:43:03.000Z | [
"region:us"
] | Binaryy | null | null | 1 | 23 | 2023-09-16T13:42:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: 'Unnamed: 0'
dtype: int64
- name: Car Name
dtype: string
- name: Region
dtype: string
- name: Price
dtype: string
- name: Status
dtype: string
- name: Mileage
dtype: string
- name: Car Name.1
dtype: string
- name: Image URL
dtype: string
splits:
- name: train
num_bytes: 8301111.18
num_examples: 1332
download_size: 8084700
dataset_size: 8301111.18
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cars-for-sale"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 741 | [
[
-0.042266845703125,
-0.01593017578125,
0.0286102294921875,
0.01520538330078125,
-0.0244140625,
0.00004547834396362305,
0.00524139404296875,
-0.0161590576171875,
0.030975341796875,
0.0186920166015625,
-0.0526123046875,
-0.053375244140625,
-0.005466461181640625,
... |
usvsnsp/pile-semantic-memorization-filter-results | 2023-09-19T18:56:42.000Z | [
"region:us"
] | usvsnsp | null | null | 1 | 23 | 2023-09-19T18:16:50 | ---
dataset_info:
features:
- name: sequence_id
dtype: int64
- name: text
dtype: string
- name: sequence_duplicates
dtype: int64
- name: max_frequency
dtype: int64
- name: avg_frequency
dtype: float64
- name: min_frequency
dtype: int64
- name: median_frequency
dtype: float64
- name: p25_frequency
dtype: int64
- name: p75_frequency
dtype: int64
- name: frequencies
sequence: int64
- name: is_incrementing
dtype: bool
- name: tokens
sequence: int64
- name: repeating_offset
dtype: int32
- name: num_repeating
dtype: int32
- name: smallest_repeating_chunk
sequence: int64
- name: memorization_score
dtype: float64
- name: templating_frequency_0.9
dtype: int64
- name: templating_frequency_0.8
dtype: int64
- name: prompt_perplexity
dtype: float32
- name: generation_perplexity
dtype: float32
- name: sequence_perplexity
dtype: float32
splits:
- name: pile.duped.70m
num_bytes: 7003348430
num_examples: 5000000
- name: pile.duped.160m
num_bytes: 7003348430
num_examples: 5000000
- name: pile.duped.410m
num_bytes: 7003348430
num_examples: 5000000
- name: pile.duped.1b
num_bytes: 7003348430
num_examples: 5000000
- name: pile.duped.1.4b
num_bytes: 7003348430
num_examples: 5000000
- name: pile.duped.2.8b
num_bytes: 7003348430
num_examples: 5000000
- name: pile.duped.6.9b
num_bytes: 7003348430
num_examples: 5000000
- name: pile.duped.12b
num_bytes: 7003348430
num_examples: 5000000
- name: pile.deduped.70m
num_bytes: 7013409756
num_examples: 5000000
- name: pile.deduped.160m
num_bytes: 7013409756
num_examples: 5000000
- name: pile.deduped.410m
num_bytes: 7013409756
num_examples: 5000000
- name: pile.deduped.1b
num_bytes: 7013409756
num_examples: 5000000
- name: pile.deduped.1.4b
num_bytes: 7013409756
num_examples: 5000000
- name: pile.deduped.2.8b
num_bytes: 7013409756
num_examples: 5000000
- name: pile.deduped.6.9b
num_bytes: 7013409756
num_examples: 5000000
- name: pile.deduped.12b
num_bytes: 7013409756
num_examples: 5000000
download_size: 48107269588
dataset_size: 112134065488
---
# Dataset Card for "pile-semantic-memorization-filter-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,485 | [
[
-0.04669189453125,
-0.0295867919921875,
0.0172271728515625,
0.00542449951171875,
-0.02947998046875,
0.0007028579711914062,
0.01233673095703125,
-0.00814056396484375,
0.05267333984375,
0.0567626953125,
-0.04962158203125,
-0.081298828125,
-0.06378173828125,
-0... |
linhtran92/infer_fix | 2023-09-22T09:56:56.000Z | [
"region:us"
] | linhtran92 | null | null | 0 | 23 | 2023-09-22T09:56:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
TrainingDataPro/generated-e-mail-spam | 2023-09-28T15:29:45.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] | TrainingDataPro | The dataset consists of a **CSV file** containing 300 generated email spam messages.
Each row in the file represents a separate email message, with its *title and text.*
The dataset aims to facilitate the analysis and detection of spam emails.
The dataset can be used for various purposes, such as *training machine learning
algorithms to classify and filter spam emails, studying spam email patterns,
or analyzing text-based features of spam messages*. | @InProceedings{huggingface:dataset,
title = {generated-e-mail-spam},
author = {TrainingDataPro},
year = {2023}
} | 1 | 23 | 2023-09-28T14:36:07 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- text-generation
- text-classification
tags:
- code
- finance
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: large_string
splits:
- name: train
num_bytes: 233533
num_examples: 300
download_size: 230500
dataset_size: 233533
---
# Generated E-mail Spam
The dataset consists of a **CSV file** containing 300 generated email spam messages. Each row in the file represents a separate email message, with its *title and text.* The dataset aims to facilitate the analysis and detection of spam emails.
The dataset can be used for various purposes, such as *training machine learning algorithms to classify and filter spam emails, studying spam email patterns, or analyzing text-based features of spam messages*.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=generated-e-mail-spam) to discuss your requirements, learn about the price and buy the dataset.
# Content
### File with the extension .csv (utf-8)
includes the following information:
- **title**: title of the email,
- **text**: text of the email
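A minimal loading sketch (assuming the repo id from this card can be read directly with `datasets` and that the columns match the fields above):
```python
from datasets import load_dataset

# Repo id assumed from this card; the train split exposes the `title` and `text` columns.
ds = load_dataset("TrainingDataPro/generated-e-mail-spam", split="train")

sample = ds[0]
print(sample["title"])
print(sample["text"][:200])
```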
# Email spam might be generated in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=generated-e-mail-spam)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 1,966 | [
[
-0.018280029296875,
-0.068603515625,
-0.004871368408203125,
0.0265655517578125,
-0.005645751953125,
0.008514404296875,
-0.00963592529296875,
-0.0038700103759765625,
0.0193023681640625,
0.08099365234375,
-0.061431884765625,
-0.06353759765625,
-0.05743408203125,
... |
reza-alipour/Yelp_Sentiment | 2023-10-01T09:28:56.000Z | [
"region:us"
] | reza-alipour | null | null | 0 | 23 | 2023-10-01T09:28:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: IsPositive
dtype: int64
splits:
- name: train
num_bytes: 24204778
num_examples: 444101
- name: validation
num_bytes: 3466415
num_examples: 63483
- name: test
num_bytes: 6861944
num_examples: 126670
download_size: 17440510
dataset_size: 34533137
---
# Dataset Card for "Yelp_Sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 704 | [
[
-0.0297393798828125,
-0.01444244384765625,
0.023529052734375,
0.01263427734375,
-0.007724761962890625,
-0.01007843017578125,
0.0170135498046875,
-0.01026153564453125,
0.06451416015625,
0.0264129638671875,
-0.07379150390625,
-0.05078125,
-0.0251312255859375,
... |
BirdL/DONOTUSEDATA-SideA | 2023-10-07T21:59:31.000Z | [
"not-for-all-audiences",
"region:us"
] | BirdL | null | null | 0 | 23 | 2023-10-06T05:56:53 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sexual
dtype: float64
- name: hate
dtype: float64
- name: violence
dtype: float64
- name: self-harm
dtype: float64
- name: sexual/minors
dtype: float64
- name: hate/threatening
dtype: float64
- name: violence/graphic
dtype: float64
splits:
- name: train
num_bytes: 8256999
num_examples: 30002
download_size: 6382984
dataset_size: 8256999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- not-for-all-audiences
---
# Dataset Card for "DONOTUSEDATA"
Studying the effects of harmful data on LLMs. Side A.
Filtered Subset of [kjj0/4chanpol-openai](https://huggingface.co/datasets/kjj0/4chanpol-openaimod)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 907 | [
[
-0.0294647216796875,
-0.042266845703125,
0.01297760009765625,
0.004978179931640625,
-0.0362548828125,
-0.01123809814453125,
0.017974853515625,
-0.0186309814453125,
0.059600830078125,
0.06689453125,
-0.059600830078125,
-0.0413818359375,
-0.037628173828125,
-0... |
Sharka/CIVQA_easyocr_encode_valid | 2023-10-06T19:19:19.000Z | [
"region:us"
] | Sharka | null | null | 0 | 23 | 2023-10-06T19:16:48 | ---
dataset_info:
features:
- name: input_ids
sequence: int64
- name: bbox
dtype:
array2_d:
shape:
- 512
- 4
dtype: int64
- name: attention_mask
sequence: int64
- name: image
dtype:
array3_d:
shape:
- 3
- 224
- 224
dtype: int64
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
- name: questions
dtype: string
- name: answers
dtype: string
splits:
- name: validation
num_bytes: 21069068623
num_examples: 17079
download_size: 707118847
dataset_size: 21069068623
---
# Dataset Card for "CIVQA_easyocr_encode_valid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 817 | [
[
-0.0290374755859375,
-0.00707244873046875,
0.0170440673828125,
0.017333984375,
-0.0096435546875,
-0.0036468505859375,
0.005519866943359375,
0.01248931884765625,
0.028717041015625,
0.031524658203125,
-0.03192138671875,
-0.05743408203125,
-0.03497314453125,
-0... |
ContextualAI/tiny-boolq | 2023-10-09T19:41:14.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 23 | 2023-10-08T22:37:09 | ---
dataset_info:
features:
- name: passage
dtype: string
- name: query
dtype: string
- name: gold_generation
dtype: string
- name: choices
sequence: string
splits:
- name: dev
num_bytes: 63014
num_examples: 100
download_size: 44185
dataset_size: 63014
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
# Dataset Card for "tiny-boolq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.039276123046875,
-0.0232391357421875,
0.0191192626953125,
-0.00479888916015625,
-0.0146331787109375,
-0.0009264945983886719,
0.0206298828125,
-0.00974273681640625,
0.052642822265625,
0.033172607421875,
-0.05743408203125,
-0.04095458984375,
-0.0178985595703125... |
salsarra/SQAC-Corrected | 2023-10-09T14:56:46.000Z | [
"region:us"
] | salsarra | null | null | 0 | 23 | 2023-10-09T14:33:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
johannes-garstenauer/embeddings_from_distilbert_masking_heaps_and_eval_part1 | 2023-10-09T23:36:10.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 23 | 2023-10-09T23:34:11 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
- name: pred
dtype: int64
- name: cls_layer_6
sequence: float32
- name: cls_layer_5
sequence: float32
- name: cls_layer_4
sequence: float32
splits:
- name: train
num_bytes: 1281395185
num_examples: 134495
download_size: 1491732485
dataset_size: 1281395185
---
# Dataset Card for "embeddings_from_distilbert_masking_heaps_and_eval_part1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 607 | [
[
-0.03472900390625,
-0.040435791015625,
0.01727294921875,
0.030853271484375,
-0.0187530517578125,
0.01125335693359375,
0.0355224609375,
0.0114288330078125,
0.06292724609375,
0.023773193359375,
-0.043701171875,
-0.061737060546875,
-0.058013916015625,
-0.024154... |
Waterfront/social-media-captions | 2023-10-11T14:25:30.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"license:mit",
"social media",
"region:us"
] | Waterfront | null | null | 0 | 23 | 2023-10-10T14:09:17 | ---
license: mit
task_categories:
- conversational
tags:
- social media
size_categories:
- 10K<n<100K
---
# Social Media Captions
Based on the [Instagram Influencer Dataset from Seungbae Kim, Jyun-Yu Jiang, and Wei Wang](https://sites.google.com/site/sbkimcv/dataset/instagram-influencer-dataset)
Extended with photo descriptions generated by the [ydshieh/vit-gpt2-coco-en](https://huggingface.co/ydshieh/vit-gpt2-coco-en) model to create a dataset that can be used to fine-tune Llama-2.
* 20k smaller subset: [Waterfront/social-media-captions-20k](https://huggingface.co/datasets/Waterfront/social-media-captions-20k)
* 10k smaller subset: [Waterfront/social-media-captions-10k](https://huggingface.co/datasets/Waterfront/social-media-captions-10k) | 737 | [
[
-0.0265350341796875,
-0.03607177734375,
0.038726806640625,
0.060516357421875,
-0.052520751953125,
0.0276336669921875,
-0.0025539398193359375,
-0.040557861328125,
0.055938720703125,
0.04534912109375,
-0.0521240234375,
-0.0288238525390625,
-0.06024169921875,
0... |
benedettoCesium/hackathon2 | 2023-10-11T17:41:07.000Z | [
"region:us"
] | benedettoCesium | null | null | 0 | 23 | 2023-10-11T16:34:12 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
pphuc25/uit_data_train | 2023-10-15T08:49:19.000Z | [
"region:us"
] | pphuc25 | null | null | 1 | 23 | 2023-10-12T08:06:36 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: claim
dtype: string
- name: label
dtype: int64
- name: evidence
dtype: string
- name: domain
dtype: string
splits:
- name: train
num_bytes: 109989912
num_examples: 26967
download_size: 21040532
dataset_size: 109989912
---
# Dataset Card for "uit_data_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 532 | [
[
-0.0355224609375,
-0.007686614990234375,
0.006717681884765625,
0.014801025390625,
-0.011138916015625,
-0.0007262229919433594,
0.0266571044921875,
0.001087188720703125,
0.04498291015625,
0.029571533203125,
-0.05078125,
-0.03179931640625,
-0.029693603515625,
-... |
berkouille/Baize_Alpaca_Golf | 2023-10-15T17:46:18.000Z | [
"region:us"
] | berkouille | null | null | 0 | 23 | 2023-10-15T17:45:55 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
zelros/pj-ca | 2023-10-17T19:41:56.000Z | [
"region:us"
] | zelros | null | null | 0 | 23 | 2023-10-17T19:41:34 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
haseong8012/child-20k_for-test | 2023-10-21T18:54:58.000Z | [
"region:us"
] | haseong8012 | null | null | 0 | 23 | 2023-10-18T05:00:48 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: audio
sequence: float32
splits:
- name: test
num_bytes: 3598304981
num_examples: 20000
download_size: 3170949439
dataset_size: 3598304981
---
# Dataset Card for "child-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 482 | [
[
-0.04571533203125,
0.0029087066650390625,
-0.004779815673828125,
0.0273590087890625,
-0.0197906494140625,
0.01219940185546875,
0.0308990478515625,
-0.0259246826171875,
0.03448486328125,
0.0322265625,
-0.08038330078125,
-0.042205810546875,
-0.05010986328125,
... |
KaiLv/UDR_Go | 2023-10-19T11:42:50.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 23 | 2023-10-19T11:41:48 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 89583705
num_examples: 167137
- name: validation
num_bytes: 3547138
num_examples: 7320
- name: test
num_bytes: 4244257
num_examples: 8115
- name: debug
num_bytes: 53690904
num_examples: 100000
download_size: 66725224
dataset_size: 151066004
---
# Dataset Card for "UDR_Go_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 699 | [
[
-0.037353515625,
-0.0246124267578125,
0.00739288330078125,
-0.00919342041015625,
-0.0169219970703125,
0.01021575927734375,
0.026824951171875,
-0.004482269287109375,
0.05377197265625,
0.0384521484375,
-0.05218505859375,
-0.050872802734375,
-0.032318115234375,
... |
Isaak-Carter/JOSIE_v928.16 | 2023-10-19T15:43:46.000Z | [
"region:us"
] | Isaak-Carter | null | null | 0 | 23 | 2023-10-19T15:43:42 | ---
dataset_info:
features:
- name: sample
dtype: string
splits:
- name: train
num_bytes: 6499831
num_examples: 2348
download_size: 3066207
dataset_size: 6499831
---
# Dataset Card for "JOSIE_v928.16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 357 | [
[
-0.03173828125,
-0.00760650634765625,
0.00006717443466186523,
0.015838623046875,
-0.01078033447265625,
-0.023773193359375,
0.0299530029296875,
-0.0157318115234375,
0.076171875,
0.06402587890625,
-0.0635986328125,
-0.045318603515625,
-0.04351806640625,
-0.022... |
damand2061/id_cannot_12K | 2023-10-23T15:31:38.000Z | [
"task_categories:text-classification",
"language:id",
"license:cc-by-sa-4.0",
"region:us"
] | damand2061 | null | null | 0 | 23 | 2023-10-23T14:55:11 | ---
license: cc-by-sa-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1487375
num_examples: 9600
- name: validation
num_bytes: 372708
num_examples: 2400
download_size: 1214303
dataset_size: 1860083
task_categories:
- text-classification
language:
- id
---
This is the Indonesian-translated version of the top 12K rows of the [cannot](https://huggingface.co/datasets/tum-nlp/cannot-dataset) dataset. It was translated using Google Translate and rechecked (then modified where necessary) manually.
[
-0.0194854736328125,
-0.037445068359375,
-0.0155029296875,
0.037689208984375,
-0.0361328125,
0.0021495819091796875,
-0.002040863037109375,
-0.046844482421875,
0.05426025390625,
0.07958984375,
-0.061248779296875,
-0.03192138671875,
-0.0654296875,
0.0580749511... |
MattBastar/Medicine_Details | 2023-10-25T00:04:39.000Z | [
"region:us"
] | MattBastar | null | null | 0 | 23 | 2023-10-24T22:48:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jasonshen8848/dup_data | 2023-10-26T01:59:53.000Z | [
"region:us"
] | jasonshen8848 | null | null | 0 | 23 | 2023-10-25T09:12:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
AlFrauch/im2latex | 2023-10-25T16:21:16.000Z | [
"task_categories:image-to-text",
"size_categories:1M<n<10M",
"code",
"region:us"
] | AlFrauch | null | null | 1 | 23 | 2023-10-25T14:53:53 | ---
task_categories:
- image-to-text
tags:
- code
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a set of pairs: an image and the corresponding LaTeX code for the expression it shows. The pairs were generated by analyzing more than 100,000 articles in the natural sciences and mathematics and generating a corresponding set of LaTeX expressions. The set has been cleared of duplicates and contains about 1,500,000 images.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Latex
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
```python
Dataset({
features: ['image', 'text'],
num_rows: 1586584
})
```
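A minimal access sketch (assuming the repo id from this card is loadable with `datasets` and that the `image` column decodes to a PIL image):
```python
from datasets import load_dataset

# Streaming avoids materialising all ~1.5M pairs at once; repo id assumed from this card.
ds = load_dataset("AlFrauch/im2latex", split="train", streaming=True)

example = next(iter(ds))
example["image"].save("formula.png")  # the rendered formula image (PIL)
print(example["text"])                # the corresponding LaTeX source
```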
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@misc{alexfrauch_VSU_2023,
title = {Recognition of mathematical formulas in the Latex: Image-Text Pair Dataset},
author = {Aleksandr Frauch (Proshunin)},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/AlFrauch/im2latex}},
}
### Contributions
[More Information Needed] | 2,018 | [
[
-0.01508331298828125,
-0.04052734375,
0.0036773681640625,
0.026397705078125,
-0.01346588134765625,
-0.0002570152282714844,
-0.01904296875,
-0.0238037109375,
0.0232086181640625,
0.0247039794921875,
-0.032501220703125,
-0.055694580078125,
-0.052520751953125,
0... |
emi429/humansleepproject-rr-small-individuals | 2023-10-26T18:41:16.000Z | [
"region:us"
] | emi429 | null | null | 0 | 23 | 2023-10-26T18:41:07 | ---
dataset_info:
features:
- name: rr_intervals
sequence: float64
- name: sleep_stage
dtype: string
- name: patient_id
dtype: string
splits:
- name: test
num_bytes: 1631857
num_examples: 504
- name: train
num_bytes: 5747903
num_examples: 2070
download_size: 1335531
dataset_size: 7379760
---
# Dataset Card for "humansleepproject-rr-small-individuals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 531 | [
[
-0.03277587890625,
-0.00534820556640625,
0.01226043701171875,
0.0199127197265625,
-0.007396697998046875,
0.00699615478515625,
0.0096435546875,
-0.0211181640625,
0.0709228515625,
0.0251312255859375,
-0.06085205078125,
-0.040679931640625,
-0.028045654296875,
-... |
linhtran92/soict_private_test_fix | 2023-10-28T03:21:14.000Z | [
"region:us"
] | linhtran92 | null | null | 0 | 23 | 2023-10-28T03:20:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
splits:
- name: train
num_bytes: 378888808.625
num_examples: 2139
download_size: 351233206
dataset_size: 378888808.625
---
# Dataset Card for "soict_private_test_fix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 531 | [
[
-0.027191162109375,
-0.0225677490234375,
0.008575439453125,
0.02349853515625,
-0.006542205810546875,
-0.007755279541015625,
0.016082763671875,
0.0005221366882324219,
0.0452880859375,
0.040191650390625,
-0.061126708984375,
-0.048828125,
-0.03155517578125,
-0.... |
AnanyaAJ/dolly-llama2-1k | 2023-10-28T20:50:33.000Z | [
"region:us"
] | AnanyaAJ | null | null | 0 | 23 | 2023-10-28T20:50:32 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1734805
num_examples: 1000
download_size: 1056790
dataset_size: 1734805
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dolly-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 593 | [
[
-0.0206451416015625,
-0.0135955810546875,
-0.002109527587890625,
0.036346435546875,
-0.036773681640625,
-0.0042266845703125,
0.0496826171875,
-0.01369476318359375,
0.06884765625,
0.04486083984375,
-0.0625,
-0.0504150390625,
-0.056060791015625,
-0.00813293457... |
itopcu/hate-speech-target | 2023-10-30T20:55:39.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:tr",
"code",
"region:us"
] | itopcu | null | null | 0 | 23 | 2023-10-30T20:47:22 | ---
task_categories:
- text-classification
language:
- tr
tags:
- code
pretty_name: hate speech target detection dataset
size_categories:
- 10K<n<100K
---
https://coltekin.github.io/offensive-turkish/guidelines-tr.html | 218 | [
[
-0.0281219482421875,
-0.044158935546875,
0.005252838134765625,
0.0188446044921875,
-0.06060791015625,
-0.04315185546875,
-0.0108489990234375,
-0.03302001953125,
0.023681640625,
0.0531005859375,
-0.03778076171875,
-0.07586669921875,
-0.019989013671875,
0.0146... |
Yeshwanth-03-06-2004/twitter_bios | 2023-11-02T10:59:59.000Z | [
"region:us"
] | Yeshwanth-03-06-2004 | null | null | 0 | 23 | 2023-11-01T07:01:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jinmang2/common-sense-mrc | 2021-12-12T07:56:31.000Z | [
"region:us"
] | jinmang2 | null | null | 0 | 22 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
julien-c/reactiongif | 2022-09-20T12:10:26.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2105.09967",
"regio... | julien-c | null | null | 1 | 22 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: reactiongif
---
## ReactionGIF
> From https://github.com/bshmueli/ReactionGIF

___
## Excerpt from original repo readme
ReactionGIF is a unique, first-of-its-kind dataset of 30K sarcastic tweets and their GIF reactions.
To find out more about ReactionGIF,
check out our ACL 2021 paper:
* Shmueli, Ray and Ku, [Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter](https://arxiv.org/abs/2105.09967)
## Citation
If you use our dataset, kindly cite the paper using the following BibTex entry:
```bibtex
@misc{shmueli2021happy,
title={Happy Dance, Slow Clap: Using Reaction {GIFs} to Predict Induced Affect on {Twitter}},
author={Boaz Shmueli and Soumya Ray and Lun-Wei Ku},
year={2021},
eprint={2105.09967},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 1,211 | [
[
-0.0160369873046875,
-0.052642822265625,
0.019775390625,
0.0396728515625,
-0.0236358642578125,
-0.003459930419921875,
-0.013092041015625,
-0.02972412109375,
0.05322265625,
0.00601959228515625,
-0.04962158203125,
-0.02947998046875,
-0.04736328125,
-0.00402069... |
wardenga/lsoie | 2022-10-21T05:51:54.000Z | [
"task_categories:text-retrieval",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|qa_srl",
"language:en",
"license:mit",
"Open Information Extraction",
"arxiv:2101.11177",
"region:us"
] | wardenga | The Large Scale Open Information Extraction Dataset (LSOIE) is a dataset 20
times larger than the next largest human-annotated Open Information Extraction
(OIE) dataset. LSOIE is built upon the QA-SRL 2.0 dataset. | @article{lsoie-2021,
title={{LSOIE}: A Large-Scale Dataset for Supervised Open Information Extraction},
author={{Solawetz}, Jacob and {Larson}, Stefan},
journal={arXiv preprint arXiv:2101.11177},
year={2019},
url="https://arxiv.org/pdf/2101.11177.pdf"
} | 0 | 22 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- extended|qa_srl
task_categories:
- text-retrieval
task_ids: []
pretty_name: LSOIE
tags:
- Open Information Extraction
---
# Dataset Card for LSOIE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/Jacobsolawetz/large-scale-oie
- **Repository:** https://github.com/Jacobsolawetz/large-scale-oie
- **Paper:** https://arxiv.org/abs/2101.11177
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The Large Scale Open Information Extraction Dataset (LSOIE) is a dataset 20 times larger than the next largest human-annotated Open Information Extraction (OIE) dataset. LSOIE is built upon the QA-SRL 2.0 dataset by transforming the list of questions and answers for each predicate into a tuple representing a fact.
### Supported Tasks and Leaderboards
Open Information Extraction
### Languages
The text in this dataset is English.
## Dataset Structure
### Data Instances
A datapoint comprises one fact together with the sentence it was extracted from. There can be multiple facts for each sentence. Each fact is represented by a tuple $(a_0, p, a_1, \dots, a_n)$, where $a_0$ is the head entity, $p$ is the predicate, and $a_1, \dots, a_n$ represent the tail.
### Data Fields
- word_ids : sequence of indices (int) representing tokens in a sentence,
- words : a sequence of strings, the tokens in the sentence,
- pred : the predicate of the fact,
- pred_ids : ids of the tokens in the predicate,
- head_pred_id : id of the head token in the predicate,
- sent_id : sentence id,
- run_id : ,
- label : Sequence of tags (BIO) representing the fact, e.g. if the fact is given by $(a_0, p, a_1, \dots, a_n) $
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 3,706 | [
[
-0.029693603515625,
-0.050994873046875,
0.032379150390625,
0.0019207000732421875,
-0.0160064697265625,
-0.0029125213623046875,
-0.0150604248046875,
-0.03839111328125,
0.044891357421875,
0.04296875,
-0.0517578125,
-0.06292724609375,
-0.04620361328125,
0.01736... |
openclimatefix/mrms | 2022-06-22T13:39:35.000Z | [
"doi:10.57967/hf/0885",
"region:us"
] | openclimatefix | This dataset consists of MRMS precipitation radar data for the continental United States,
sampled at 1 km x 1 km spatial resolution and 2-minute temporal resolution. | @InProceedings{ocf:mrms,
title = {MRMS Archival Precipitation Rate Radar Dataset},
author={Jacob Bieker
},
year={2022}
} | 7 | 22 | 2022-03-22T15:39:47 | annotations_creators:
- machine-generated
language_creators:
- machine-generated
languages: []
licenses:
- mit
multilinguality: []
pretty_name: Multi-Radar/Multi-System Precipitation Radar
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- time-series-forecasting
- image-classification
- image-segmentation
- other
task_ids:
- univariate-time-series-forecasting
- multi-label-image-classification
- semantic-segmentation
# Dataset Card for MRMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mrms.nssl.noaa.gov/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org)
### Dataset Summary
Multi-Radar/Multi-System Precipitation Rate Radar data for 2016-2022. This data contains precipitation rate values for the continental United States.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This dataset was constructed to help recreate the original datasets used in the MetNet/MetNet-2 and Deep Generative Model of Radar papers. Those datasets were not publicly released, but this dataset should cover the time period they used, and more.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
US Government License, no restrictions
### Citation Information
@article{ocf:mrms,
  author = {Jacob Bieker},
  title = {MRMS Precipitation Rate Dataset},
  year = {2022}
}
[
-0.048309326171875,
-0.0222015380859375,
0.031585693359375,
0.0260467529296875,
-0.0301666259765625,
0.005641937255859375,
-0.0141754150390625,
-0.0298919677734375,
0.0044708251953125,
0.03924560546875,
-0.045623779296875,
-0.05767822265625,
-0.060028076171875,
... |
hackathon-pln-es/es_tweets_laboral | 2022-10-25T10:03:39.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:unknown",
"region:us"
] | hackathon-pln-es | null | null | 1 | 22 | 2022-04-01T13:20:33 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- unknown
multilinguality:
- monolingual
pretty_name: "Tweets en espa\xF1ol denuncia laboral"
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
---
# Dataset Card for [es_tweets_laboral]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
Dataset created by @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
Labeled by @DanielaGarciaQuezada
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,905 | [
[
-0.018402099609375,
-0.033447265625,
0.01107025146484375,
0.029541015625,
-0.0273895263671875,
0.0291748046875,
-0.0321044921875,
-0.0238037109375,
0.050628662109375,
0.040496826171875,
-0.064697265625,
-0.08447265625,
-0.058441162109375,
0.006591796875,
... |
mteb/quora-retrieval | 2022-04-12T17:15:57.000Z | [
"region:us"
] | mteb | null | null | 0 | 22 | 2022-04-12T17:06:39 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pietrolesci/recast_white | 2022-04-22T15:34:14.000Z | [
"region:us"
] | pietrolesci | null | null | 0 | 22 | 2022-04-22T15:27:37 | ## Overview
This dataset has been introduced by "Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework", Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme. IJCNLP, 2017. Original data available [here](https://github.com/decompositional-semantics-initiative/DNC/raw/master/inference_is_everything.zip).
## Dataset curation
The following processing is applied
- `hypothesis_grammatical` and `judgement_valid` columns are filled with `""` when empty
- all columns are stripped
- the `entailed` column is renamed `label`
- `label` column is encoded with the following mapping `{"not-entailed": 0, "entailed": 1}`
- columns `rating` and `good_word` are dropped from `fnplus` dataset
## Code to generate the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict
ds = {}
for name in ("fnplus", "sprl", "dpr"):
# read data
with open(f"<path to files>/{name}_data.txt", "r") as f:
data = f.read()
data = data.split("\n\n")
data = [lines.split("\n") for lines in data]
data = [dict([col.split(":", maxsplit=1) for col in line if len(col) > 0]) for line in data]
df = pd.DataFrame(data)
# fill empty hypothesis_grammatical and judgement_valid
df["hypothesis_grammatical"] = df["hypothesis_grammatical"].fillna("")
df["judgement_valid"] = df["judgement_valid"].fillna("")
# fix dtype
df["index"] = df["index"].astype(int)
# strip
for col in df.select_dtypes(object).columns:
df[col] = df[col].str.strip()
# rename columns
df = df.rename(columns={"entailed": "label"})
# encode labels
df["label"] = df["label"].map({"not-entailed": 0, "entailed": 1})
# cast to dataset
features = Features({
"provenance": Value(dtype="string", id=None),
"index": Value(dtype="int64", id=None),
"text": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"partof": Value(dtype="string", id=None),
"hypothesis_grammatical": Value(dtype="string", id=None),
"judgement_valid": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]),
})
# select common columns
df = df.loc[:, list(features.keys())]
ds[name] = Dataset.from_pandas(df, features=features)
ds = DatasetDict(ds)
ds.push_to_hub("recast_white", token="<token>")
``` | 2,464 | [
[
-0.017974853515625,
-0.061767578125,
0.034027099609375,
0.01534271240234375,
-0.016387939453125,
-0.021942138671875,
-0.0233612060546875,
0.00307464599609375,
0.032928466796875,
0.052734375,
-0.025665283203125,
-0.07733154296875,
-0.04364013671875,
0.0178985... |
joelniklaus/brazilian_court_decisions | 2022-09-22T13:43:42.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:other",
"arxiv:1905.10348",
"region:us"
] | joelniklaus | null | null | 8 | 22 | 2022-06-24T13:50:02 | ---
annotations_creators:
- found
language_creators:
- found
language:
- pt
license:
- 'other'
multilinguality:
- monolingual
pretty_name: predicting-brazilian-court-decisions
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for predicting-brazilian-court-decisions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/lagefreitas/predicting-brazilian-court-decisions
- **Paper:** Lage-Freitas, A., Allende-Cid, H., Santana, O., & Oliveira-Lage, L. (2022). Predicting Brazilian Court
Decisions. PeerJ. Computer Science, 8, e904–e904. https://doi.org/10.7717/peerj-cs.904
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
The dataset is a collection of 4043 *Ementa* (summary) court decisions and their metadata from
the *Tribunal de Justiça de Alagoas* (TJAL), the State Supreme Court of Alagoas (Brazil). The court decisions are labeled
according to 7 categories and whether the decisions were unanimous on the part of the judges or not. The dataset
supports the task of Legal Judgment Prediction.
### Supported Tasks and Leaderboards
Legal Judgment Prediction
### Languages
Brazilian Portuguese
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present (train, validation and test) for each configuration.
### Data Fields
The dataset contains the following fields:
- `process_number`: A number assigned to the decision by the court
- `orgao_julgador`: Judging Body: one of '1ª Câmara Cível', '2ª Câmara Cível', '3ª Câmara Cível', 'Câmara Criminal', '
Tribunal Pleno', 'Seção Especializada Cível'
- `publish_date`: The date when the decision was published (14/12/2018 - 03/04/2019). At that time (in 2018-2019),
the scraping script was limited and not configurable to get data based on date range. Therefore, only the data from
the last months has been scraped.
- `judge_relator`: Judicial panel
- `ementa_text`: Summary of the court decision
- `decision_description`: **Suggested input**. Corresponds to ementa_text - judgment_text - unanimity_text. Basic
statistics (number of words): mean: 119, median: 88, min: 12, max: 1400
- `judgment_text`: The text used for determining the judgment label
- `judgment_label`: **Primary suggested label**. Labels that can be used to train a model for judgment prediction:
- `no`: The appeal was denied
- `partial`: For partially favourable decisions
- `yes`: For fully favourable decisions
- removed labels (present in the original dataset):
- `conflito-competencia`: Meta-decision. For example, a decision just to tell that Court A should rule this case
and not Court B.
- `not-cognized`: The appeal was not accepted to be judged by the court
- `prejudicada`: The case could not be judged for any impediment such as the appealer died or gave up on the
case for instance.
- `unanimity_text`: Portuguese text to describe whether the decision was unanimous or not.
- `unanimity_label`: **Secondary suggested label**. Unified labels to describe whether the decision was unanimous or
not (in some cases contains ```not_determined```); they can be used for model training as well (Lage-Freitas et al.,
2019).
### Data Splits
The data has been split randomly into 80% train (3234), 10% validation (404), 10% test (405).
There are two tasks possible for this dataset.
#### Judgment
Label Distribution
| judgment | train | validation | test |
|:----------|---------:|-----------:|--------:|
| no | 1960 | 221 | 234 |
| partial | 677 | 96 | 93 |
| yes | 597 | 87 | 78 |
| **total** | **3234** | **404** | **405** |
#### Unanimity
In this configuration, all cases that have `not_determined` as `unanimity_label` can be removed.
Label Distribution
| unanimity_label | train | validation | test |
|:-----------------|----------:|---------------:|---------:|
| not_determined | 1519 | 193 | 201 |
| unanimity | 1681 | 205 | 200 |
| not-unanimity | 34 | 6 | 4 |
| **total** | **3234** | **404** | **405** |
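A minimal sketch of that filtering step (assuming the repo id from this card and the field names listed above):
```python
from datasets import load_dataset

# Repo id, split and field names assumed from this card.
ds = load_dataset("joelniklaus/brazilian_court_decisions", split="train")

# For the unanimity task, drop the cases where unanimity could not be determined.
unanimity_ds = ds.filter(lambda ex: ex["unanimity_label"] != "not_determined")
print(len(ds), "->", len(unanimity_ds))
```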
## Dataset Creation
### Curation Rationale
This dataset was created to further the research on developing models for predicting Brazilian court decisions that are
also able to predict whether the decision will be unanimous.
### Source Data
The data was scraped from the *Tribunal de Justiça de Alagoas* (TJAL), the State Supreme Court of Alagoas (Brazil).
#### Initial Data Collection and Normalization
*“We developed a Web scraper for collecting data from Brazilian courts. The scraper first searched for the URL that
contains the list of court cases […]. Then, the scraper extracted from these HTML files the specific case URLs and
downloaded their data […]. Next, it extracted the metadata and the contents of legal cases and stored them in a CSV file
format […].”* (Lage-Freitas et al., 2022)
#### Who are the source language producers?
The source language producers are presumably attorneys, judges, and other legal professionals.
### Annotations
#### Annotation process
The dataset was not annotated.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The court decisions might contain sensitive information about individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton
Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset
consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the
dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition,
differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to
have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the
original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to
the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.
## Additional Information
Lage-Freitas, A., Allende-Cid, H., Santana Jr, O., & Oliveira-Lage, L. (2019). Predicting Brazilian court decisions:
- "In Brazil [...] lower court judges decisions might be appealed to Brazilian courts (*Tribiunais de Justiça*) to be
reviewed by second instance court judges. In an appellate court, judges decide together upon a case and their
decisions are compiled in Agreement reports named *Acóordãos*."
### Dataset Curators
The names of the original dataset curators and creators can be found in references given below, in the section *Citation
Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch)
; [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch)
; [Github](https://github.com/kapllan)).
### Licensing Information
No licensing information was provided for this dataset. However, please make sure that you use the dataset according to
Brazilian law.
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.1905.10348,
author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and de Oliveira-Lage, L{\'{i}}via},
doi = {10.48550/ARXIV.1905.10348},
keywords = {Computation and Language (cs.CL),FOS: Computer and information sciences,Social and Information Networks (cs.SI)},
publisher = {arXiv},
title = {{Predicting Brazilian court decisions}},
url = {https://arxiv.org/abs/1905.10348},
year = {2019}
}
```
```
@article{Lage-Freitas2022,
author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and Oliveira-Lage, L{\'{i}}via},
doi = {10.7717/peerj-cs.904},
issn = {2376-5992},
journal = {PeerJ. Computer science},
keywords = {Artificial intelligence,Jurimetrics,Law,Legal,Legal NLP,Legal informatics,Legal outcome forecast,Litigation prediction,Machine learning,NLP,Portuguese,Predictive algorithms,judgement prediction},
language = {eng},
month = {mar},
pages = {e904--e904},
publisher = {PeerJ Inc.},
title = {{Predicting Brazilian Court Decisions}},
url = {https://pubmed.ncbi.nlm.nih.gov/35494851 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/},
volume = {8},
year = {2022}
}
```
### Contributions
Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this
dataset.
| 10,174 | [embeddings vector omitted] |
embedding-data/flickr30k_captions_quintets | 2022-08-02T01:59:48.000Z | [
"language:en",
"license:mit",
"region:us"
] | embedding-data | null | null | 0 | 22 | 2022-07-07T23:09:35 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/flickr30k-captions
pretty_name: flickr30k-captions
---
# Dataset Card for "flickr30k-captions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Usage Example](#usage-example)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://shannon.cs.illinois.edu/DenotationGraph/](https://shannon.cs.illinois.edu/DenotationGraph/)
- **Repository:** [More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
- **Paper:** [https://transacl.org/ojs/index.php/tacl/article/view/229/33](https://transacl.org/ojs/index.php/tacl/article/view/229/33)
- **Point of Contact:** [Peter Young](pyoung2@illinois.edu), [Alice Lai](aylai2@illinois.edu), [Micah Hodosh](mhodosh2@illinois.edu), [Julia Hockenmaier](juliahmr@illinois.edu)
### Dataset Summary
We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.
Disclaimer: The team releasing Flickr30k did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains a quintet of similar sentences and is formatted as a dictionary with the key `"set"` whose value is the list of five sentences:
```
{"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]}
{"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]}
...
{"set": [sentence_1, sentence_2, sentence3, sentence4, sentence5]}
```
This dataset is useful for training Sentence Transformers models; a sketch for turning the caption sets into training pairs follows the usage example below.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/flickr30k-captions")
```
The dataset is loaded as a `DatasetDict` and has the following format:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: 31783
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
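For example, the caption quintets can be expanded into positive sentence pairs for a contrastive objective such as `MultipleNegativesRankingLoss`; the sketch below assumes the `sentence-transformers` `InputExample` interface.
```python
from itertools import combinations

from datasets import load_dataset
from sentence_transformers import InputExample

dataset = load_dataset("embedding-data/flickr30k-captions", split="train")

# Build positive pairs from every caption quintet (adapt to your own training setup).
train_examples = []
for example in dataset:
    for a, b in combinations(example["set"], 2):
        train_examples.append(InputExample(texts=[a, b]))

print(len(train_examples))
```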
### Curation Rationale
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
#### Who are the source language producers?
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
### Annotations
#### Annotation process
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
#### Who are the annotators?
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
### Personal and Sensitive Information
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
### Discussion of Biases
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
### Other Known Limitations
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
## Additional Information
### Dataset Curators
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
### Licensing Information
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
### Citation Information
[More Information Needed](https://shannon.cs.illinois.edu/DenotationGraph/)
### Contributions
Thanks to [Peter Young](pyoung2@illinois.edu), [Alice Lai](aylai2@illinois.edu), [Micah Hodosh](mhodosh2@illinois.edu), [Julia Hockenmaier](juliahmr@illinois.edu) for adding this dataset.
| 5,285 | [embeddings vector omitted] |
ttxy/weibo_4_moods | 2022-07-25T09:55:43.000Z | [
"region:us"
] | ttxy | null | null | 0 | 22 | 2022-07-25T09:55:07 | Entry not found | 15 | [embeddings vector omitted] |
Toygar/turkish-offensive-language-detection | 2023-10-31T21:57:24.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-2.0",
"offensive-language-classification",
"region:us"
] | Toygar | null | null | 4 | 22 | 2022-07-28T11:45:25 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
language:
- tr
license:
- cc-by-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-classification
task_ids: []
pretty_name: Turkish Offensive Language Detection Dataset
tags:
- offensive-language-classification
---
# Dataset Summary
This dataset is an enhanced version of existing offensive-language studies. Existing datasets are highly imbalanced, and fixing this through additional manual annotation is costly. To address the problem, we proposed a contextual data-mining method for dataset augmentation. Instead of retrieving random tweets and labelling each one individually, the method retrieves tweets that are almost certainly hate-related and labels them directly, without further human interaction, thereby mitigating the label-imbalance problem.
In addition, existing studies *(listed in the References section)* were merged to create an even more comprehensive and robust dataset for the Turkish offensive language detection task.
The files train.csv, test.csv, and valid.csv contain 42,398, 8,851, and 1,756 annotated tweets, respectively.
# Dataset Structure
A binary dataset with (0) Not Offensive and (1) Offensive tweets.
### Task and Labels
Offensive language identification:
- (0) Not Offensive - Tweet does not contain offense or profanity.
- (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense
### Data Splits
| | train | test | dev |
|------:|:------|:-----|:-----|
| 0 (Not Offensive) | 22,589 | 4,436 | 1,402 |
| 1 (Offensive) | 19,809 | 4,415 | 354 |
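A minimal loading sketch is given below; the column names are not documented above, so inspect them after loading.
```python
from datasets import load_dataset

# Loads the train/test/valid CSV files described above directly from the Hub.
dataset = load_dataset("Toygar/turkish-offensive-language-detection")
print(dataset)
print(dataset["train"].column_names)  # check the actual column headers before training
```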
### Citation Information
```
T. Tanyel, B. Alkurdi and S. Ayvaz, "Linguistic-based Data Augmentation Approach for Offensive Language Detection," 2022 7th International Conference on Computer Science and Engineering (UBMK), 2022, pp. 1-6, doi: 10.1109/UBMK55850.2022.9919562.
```
### Paper codes
https://github.com/tanyelai/lingda
# References
Before applying our method, we merged the following open-source Turkish offensive-language datasets to further increase the contextual coverage of the existing data.
- https://huggingface.co/datasets/offenseval2020_tr
- https://github.com/imayda/turkish-hate-speech-dataset-2
- https://www.kaggle.com/datasets/kbulutozler/5k-turkish-tweets-with-incivil-content
| 2,329 | [embeddings vector omitted] |
graphs-datasets/IMDB-BINARY | 2023-02-07T16:39:00.000Z | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | graphs-datasets | null | null | 1 | 22 | 2022-08-01T16:17:25 | ---
license: unknown
task_categories:
- graph-ml
---
# Dataset Card for IMDB-BINARY (IMDb-B)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://dl.acm.org/doi/10.1145/2783258.2783417)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/IMDB-BINARY.zip)**
- **Paper:** Deep Graph Kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-imdb-b)
### Dataset Summary
The `IMDb-B` dataset is "a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres".
### Supported Tasks and Leaderboards
`IMDb-B` should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/IMDB-BINARY")
# For the train set (replace by valid or test as needed); build one PyG Data object per graph
dataset_pg_list = [
    Data(edge_index=torch.tensor(g["edge_index"], dtype=torch.long),
         y=torch.tensor(g["y"]), num_nodes=g["num_nodes"])
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1000 |
| average #nodes | 19.79 |
| average #edges | 193.25 |
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: 1 x #labels): the graph-level label to predict (here a single value, equal to 0 or 1)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset. It can be retrieved with:
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="IMDB-BINARY")
```
## Additional Information
### Licensing Information
The dataset has been released under unknown license, please open an issue if you have this information.
### Citation Information
```
@inproceedings{10.1145/2783258.2783417,
author = {Yanardag, Pinar and Vishwanathan, S.V.N.},
title = {Deep Graph Kernels},
year = {2015},
isbn = {9781450336642},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2783258.2783417},
doi = {10.1145/2783258.2783417},
abstract = {In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.},
booktitle = {Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
pages = {1365–1374},
numpages = {10},
keywords = {collaboration networks, bioinformatics, r-convolution kernels, graph kernels, structured data, deep learning, social networks, string kernels},
location = {Sydney, NSW, Australia},
series = {KDD '15}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | 4,489 | [embeddings vector omitted] |
scikit-learn/churn-prediction | 2022-08-08T17:56:29.000Z | [
"license:cc-by-4.0",
"region:us"
] | scikit-learn | null | null | 5 | 22 | 2022-08-08T17:42:17 | ---
license: cc-by-4.0
---
Customer churn prediction dataset of a fictional telecommunications company, provided by IBM Sample Datasets.
## Context
Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs.
## Content
Each row represents a customer; each column contains a customer attribute, as described in the column metadata.
The data set includes information about:
- Customers who left within the last month: the column is called Churn
- Services that each customer has signed up for: phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
- Customer account information: how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges
- Demographic info about customers: gender, age range, and if they have partners and dependents
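A quick-look sketch follows; only the `Churn` column name is taken from the description above — check the remaining columns against the actual file, and note that the single `train` split is an assumption of the automatic CSV loader.
```python
from datasets import load_dataset

# Load the CSV from the Hub and inspect the churn rate.
dataset = load_dataset("scikit-learn/churn-prediction", split="train")
df = dataset.to_pandas()

print(df.shape)
print(df["Churn"].value_counts(normalize=True))  # share of churned vs. retained customers
```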
Credits for the dataset and the card:
- [Kaggle](https://www.kaggle.com/datasets/blastchar/telco-customer-churn)
- [Latest version of the dataset by IBM Samples team](https://community.ibm.com/community/user/businessanalytics/blogs/steven-macko/2019/07/11/telco-customer-churn-1113)
| 1,203 | [embeddings vector omitted] |
climatebert/environmental_claims | 2023-05-23T08:53:10.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2209.00507",
"region:us"
] | climatebert | null | null | 9 | 22 | 2022-09-01T14:19:17 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: EnvironmentalClaims
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
splits:
- name: train
num_bytes: 346686
num_examples: 2117
- name: validation
num_bytes: 43018
num_examples: 265
- name: test
num_bytes: 42810
num_examples: 265
download_size: 272422
dataset_size: 432514
---
# Dataset Card for environmental_claims
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [arxiv.org/abs/2209.00507](https://arxiv.org/abs/2209.00507)
- **Leaderboard:**
- **Point of Contact:** [Dominik Stammbach](mailto:dominsta@ethz.ch)
### Dataset Summary
We introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given sentence is an environmental claim or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
"text": "It will enable E.ON to acquire and leverage a comprehensive understanding of the transfor- mation of the energy system and the interplay between the individual submarkets in regional and local energy supply sys- tems.",
"label": 0
}
```
### Data Fields
- text: a sentence extracted from corporate annual reports, sustainability reports and earning calls transcripts
- label: the label (0 -> no environmental claim, 1 -> environmental claim)
### Data Splits
The dataset is split into:
- train: 2,400
- validation: 300
- test: 300
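A minimal usage sketch with the fields listed above:
```python
from datasets import load_dataset

dataset = load_dataset("climatebert/environmental_claims")

example = dataset["train"][0]
print(example["text"])
print(example["label"])  # 0 = no environmental claim, 1 = environmental claim
```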
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earnings call transcripts.
For more information regarding our sample selection, please refer to Appendix B of our paper (see [Citation Information](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper (see [Citation Information](#citation-information)).
#### Who are the annotators?
The authors and students at University of Zurich with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Dominik Stammbach
- Nicolas Webersinke
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@misc{stammbach2022environmentalclaims,
title = {A Dataset for Detecting Real-World Environmental Claims},
author = {Stammbach, Dominik and Webersinke, Nicolas and Bingler, Julia Anna and Kraus, Mathias and Leippold, Markus},
year = {2022},
doi = {10.48550/ARXIV.2209.00507},
url = {https://arxiv.org/abs/2209.00507},
publisher = {arXiv},
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. | 4,248 | [embeddings vector omitted] |
allenai/wcep_sparse_oracle | 2022-11-24T15:58:43.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | 0 | 22 | 2022-09-14T20:37:12 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except that the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
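For reference, a minimal sketch of a comparable BM25 setup with PyTerrier is shown below; the toy corpus and index path are placeholders, not the actual WCEP corpus.
```python
import pyterrier as pt

if not pt.started():
    pt.init()

# Toy corpus standing in for the union of all WCEP source documents
# ("docno"/"text" are the fields IterDictIndexer expects by default).
corpus = [
    {"docno": "d1", "text": "Wildfires spread across the region over the weekend."},
    {"docno": "d2", "text": "The central bank raised interest rates by 25 basis points."},
]

index_ref = pt.IterDictIndexer("./wcep_bm25_index").index(iter(corpus))
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")  # default settings, as in the card

# Query with a summary; under the "oracle" strategy, keep as many hits as the
# example originally had source documents.
k = 2
hits = bm25.search("wildfires spread across the region")
print(hits.head(k)["docno"].tolist())
```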
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.6443 | 0.6443 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.6280 | 0.6280 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6658 | 0.6658 | | 1,722 | [embeddings vector omitted] |
tomekkorbak/detoxify-pile-chunk3-1500000-1550000 | 2022-10-04T23:53:18.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 22 | 2022-10-04T23:53:11 | Entry not found | 15 | [embeddings vector omitted] |
tomekkorbak/detoxify-pile-chunk3-1450000-1500000 | 2022-10-04T23:56:05.000Z | [
"region:us"
] | tomekkorbak | null | null | 0 | 22 | 2022-10-04T23:55:57 | Entry not found | 15 | [embeddings vector omitted] |
jpwahle/dblp-discovery-dataset | 2022-11-28T13:18:13.000Z | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|s2orc",
"language:en",
"license:cc-by-4.0",
"dblp",
"s2",
"scientometrics",
"computer science",
"papers",
"arxiv",
"regio... | jpwahle | This repository provides metadata to papers from DBLP. | @inproceedings{wahle-etal-2022-d3,
title = "D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of Computer Science Research",
author = "Wahle, Jan Philip and
Ruas, Terry and
Mohammad, Saif and
Gipp, Bela",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.283",
pages = "2642--2651",
abstract = "DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15{\%} annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers{'} abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.",
} | 1 | 22 | 2022-11-06T09:42:13 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: DBLP Discovery Dataset (D3)
size_categories:
- 1M<n<10M
source_datasets:
- extended|s2orc
tags:
- dblp
- s2
- scientometrics
- computer science
- papers
- arxiv
task_categories:
- other
task_ids: []
paperswithcode_id: d3
dataset_info:
- config_name: papers
download_size: 15876152
dataset_size: 15876152
- config_name: authors
download_size: 1177888
dataset_size: 1177888
---
# Dataset Card for DBLP Discovery Dataset (D3)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/jpwahle/lrec22-d3-dataset
- **Paper:** https://aclanthology.org/2022.lrec-1.283/
- **Total size:** 8.71 GB
### Dataset Summary
DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15% annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers’ abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
Total size: 8.71 GB
Papers size: 8.13 GB
Authors size: 0.58 GB
### Data Fields
#### Papers
| Feature | Description |
| --- | --- |
| `corpusid` | The unique identifier of the paper. |
| `externalids` | The same paper in other repositories (e.g., DOI, ACL). |
| `title` | The title of the paper. |
| `authors` | The authors of the paper with their `authorid` and `name`. |
| `venue` | The venue of the paper. |
| `year` | The year of the paper publication. |
| `publicationdate` | A more precise publication date of the paper. |
| `abstract` | The abstract of the paper. |
| `outgoingcitations` | The number of references of the paper. |
| `ingoingcitations` | The number of citations of the paper. |
| `isopenaccess` | Whether the paper is open access. |
| `influentialcitationcount` | The number of influential citations of the paper according to SemanticScholar. |
| `s2fieldsofstudy` | The fields of study of the paper according to SemanticScholar. |
| `publicationtypes` | The publication types of the paper. |
| `journal` | The journal of the paper. |
| `updated` | The last time the paper was updated. |
| `url` | A url to the paper in SemanticScholar. |
#### Authors
| Feature | Description |
| --- | --- |
| `authorid` | The unique identifier of the author. |
| `externalids` | The same author in other repositories (e.g., ACL, PubMed). This can include `ORCID` |
| `name` | The name of the author. |
| `affiliations` | The affiliations of the author. |
| `homepage` | The homepage of the author. |
| `papercount` | The number of papers the author has written. |
| `citationcount` | The number of citations the author has received. |
| `hindex` | The h-index of the author. |
| `updated` | The last time the author was updated. |
| `email` | The email of the author. |
| `s2url` | A url to the author in SemanticScholar. |
### Data Splits
- `papers`
- `authors`
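A minimal loading sketch, assuming the configuration names above map directly to `load_dataset` configs and that each config exposes a single `train` split:
```python
from datasets import load_dataset

papers = load_dataset("jpwahle/dblp-discovery-dataset", "papers", split="train")
authors = load_dataset("jpwahle/dblp-discovery-dataset", "authors", split="train")

print(papers[0]["title"], papers[0]["year"])
print(authors[0]["name"], authors[0]["hindex"])
```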
## Dataset Creation
### Curation Rationale
Providing a resource to analyze the state of computer science research statistically and semantically.
### Source Data
#### Initial Data Collection and Normalization
DBLP and, starting with v2.0, SemanticScholar.
## Additional Information
### Dataset Curators
[Jan Philip Wahle](https://jpwahle.com/)
### Licensing Information
The DBLP Discovery Dataset is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.
### Citation Information
If you use the dataset in any way, please cite:
```bib
@inproceedings{Wahle2022c,
title = {D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of Computer Science Research},
author = {Wahle, Jan Philip and Ruas, Terry and Mohammad, Saif M. and Gipp, Bela},
year = {2022},
month = {July},
booktitle = {Proceedings of The 13th Language Resources and Evaluation Conference},
publisher = {European Language Resources Association},
address = {Marseille, France},
doi = {},
}
```
Also make sure to cite the following papers if you use SemanticScholar data:
```bib
@inproceedings{ammar-etal-2018-construction,
title = "Construction of the Literature Graph in Semantic Scholar",
author = "Ammar, Waleed and
Groeneveld, Dirk and
Bhagavatula, Chandra and
Beltagy, Iz",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)",
month = jun,
year = "2018",
address = "New Orleans - Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-3011",
doi = "10.18653/v1/N18-3011",
pages = "84--91",
}
```
```bib
@inproceedings{lo-wang-2020-s2orc,
title = "{S}2{ORC}: The Semantic Scholar Open Research Corpus",
author = "Lo, Kyle and Wang, Lucy Lu and Neumann, Mark and Kinney, Rodney and Weld, Daniel",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.447",
doi = "10.18653/v1/2020.acl-main.447",
pages = "4969--4983"
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset.
| 7,588 | [embeddings vector omitted] |
bigbio/bc7_litcovid | 2022-12-22T15:43:23.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | The training and development datasets contain the publicly-available text of over 30 thousand COVID-19-related articles and their metadata (e.g., title, abstract, journal). Articles in both datasets have been manually reviewed and articles annotated by in-house models. | @inproceedings{chen2021overview,
title = {
Overview of the BioCreative VII LitCovid Track: multi-label topic
classification for COVID-19 literature annotation
},
author = {
Chen, Qingyu and Allot, Alexis and Leaman, Robert and Do{\\u{g}}an, Rezarta
Islamaj and Lu, Zhiyong
},
year = 2021,
booktitle = {Proceedings of the seventh BioCreative challenge evaluation workshop}
} | 0 | 22 | 2022-11-13T22:06:17 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: BC7-LitCovid
homepage: https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-5/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for BC7-LitCovid
## Dataset Description
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-5/
- **Pubmed:** True
- **Public:** True
- **Tasks:** TXTCLASS
The training and development datasets contain the publicly-available text of over 30 thousand COVID-19-related articles and their metadata (e.g., title, abstract, journal). Articles in both datasets have been manually reviewed and articles annotated by in-house models.
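A hedged loading sketch; the configuration names below follow the usual BigBIO `*_source` / `*_bigbio_text` convention and are assumptions — check the loader's available configurations.
```python
from datasets import load_dataset

# Assumed config names following the BigBIO naming convention.
source = load_dataset("bigbio/bc7_litcovid", name="bc7_litcovid_source")
simplified = load_dataset("bigbio/bc7_litcovid", name="bc7_litcovid_bigbio_text")

print(simplified["train"][0])
```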
## Citation Information
```
@inproceedings{chen2021overview,
title = {
Overview of the BioCreative VII LitCovid Track: multi-label topic
classification for COVID-19 literature annotation
},
author = {
Chen, Qingyu and Allot, Alexis and Leaman, Robert and Do{\u{g}}an, Rezarta
Islamaj and Lu, Zhiyong
},
year = 2021,
booktitle = {Proceedings of the seventh BioCreative challenge evaluation workshop}
}
```
| 1,266 | [embeddings vector omitted] |
bigbio/msh_wsd | 2022-12-22T15:45:41.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | Evaluation of Word Sense Disambiguation methods (WSD) in the biomedical domain is difficult because the available
resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We have
developed a method that can be used to automatically develop a WSD test collection using the Unified Medical Language
System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. The resulting dataset is called MSH WSD and
consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203
ambiguous words. Each instance containing the ambiguous word was assigned a CUI from the 2009AB version of the UMLS.
For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from
MEDLINE; totaling 37,888 ambiguity cases in 37,090 MEDLINE citations. | @article{jimeno2011exploiting,
title={Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation},
author={Jimeno-Yepes, Antonio J and McInnes, Bridget T and Aronson, Alan R},
journal={BMC bioinformatics},
volume={12},
number={1},
pages={1--14},
year={2011},
publisher={BioMed Central}
} | 1 | 22 | 2022-11-13T22:10:11 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: UMLS_LICENSE
pretty_name: MSH WSD
homepage: https://lhncbc.nlm.nih.gov/ii/areas/WSD/collaboration.html
bigbio_pubmed: True
bigbio_public: False
bigbio_tasks:
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for MSH WSD
## Dataset Description
- **Homepage:** https://lhncbc.nlm.nih.gov/ii/areas/WSD/collaboration.html
- **Pubmed:** True
- **Public:** False
- **Tasks:** NED
Evaluation of Word Sense Disambiguation methods (WSD) in the biomedical domain is difficult because the available
resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We have
developed a method that can be used to automatically develop a WSD test collection using the Unified Medical Language
System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. The resulting dataset is called MSH WSD and
consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203
ambiguous words. Each instance containing the ambiguous word was assigned a CUI from the 2009AB version of the UMLS.
For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from
MEDLINE; totaling 37,888 ambiguity cases in 37,090 MEDLINE citations.
## Citation Information
```
@article{jimeno2011exploiting,
title={Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation},
author={Jimeno-Yepes, Antonio J and McInnes, Bridget T and Aronson, Alan R},
journal={BMC bioinformatics},
volume={12},
number={1},
pages={1--14},
year={2011},
publisher={BioMed Central}
}
```
| 1,741 | [embeddings vector omitted] |
WillHeld/mtop | 2022-12-10T17:50:10.000Z | [
"region:us"
] | WillHeld | null | null | 0 | 22 | 2022-11-17T21:54:47 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: ' intent'
dtype: string
- name: ' slot'
dtype: string
- name: ' utterance'
dtype: string
- name: ' domain'
dtype: string
- name: ' locale'
dtype: string
- name: ' dcp_form'
dtype: string
- name: ' tokens'
dtype: string
- name: intent
dtype: string
- name: slot
dtype: string
- name: utterance
dtype: string
- name: domain
dtype: string
- name: locale
dtype: string
- name: dcp_form
dtype: string
- name: tokens
dtype: string
splits:
- name: eval_en
num_bytes: 2077234
num_examples: 2235
- name: test_en
num_bytes: 4090856
num_examples: 4386
- name: train_en
num_bytes: 14501480
num_examples: 15667
- name: eval_de
num_bytes: 1764320
num_examples: 1815
- name: test_de
num_bytes: 3439946
num_examples: 3549
- name: train_de
num_bytes: 13122042
num_examples: 13424
- name: eval_es
num_bytes: 1594238
num_examples: 1527
- name: test_es
num_bytes: 3089782
num_examples: 2998
- name: train_es
num_bytes: 11277514
num_examples: 10934
- name: eval_fr
num_bytes: 1607082
num_examples: 1577
- name: test_fr
num_bytes: 3289276
num_examples: 3193
- name: train_fr
num_bytes: 12147836
num_examples: 11814
- name: eval_hi
num_bytes: 2618172
num_examples: 2012
- name: test_hi
num_bytes: 3491690
num_examples: 2789
- name: train_hi
num_bytes: 14225324
num_examples: 11330
- name: eval_th
num_bytes: 2251378
num_examples: 1671
- name: test_th
num_bytes: 3654864
num_examples: 2765
- name: train_th
num_bytes: 14277512
num_examples: 10759
download_size: 16165451
dataset_size: 112520546
---
# Dataset Card for "mtop"
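A minimal loading sketch; the split names (`train_en`, `eval_en`, `test_en`, and their counterparts for de/es/fr/hi/th) come from the metadata above.
```python
from datasets import load_dataset

mtop_en = load_dataset("WillHeld/mtop", split="train_en")
print(mtop_en[0])
```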
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,971 | [embeddings vector omitted] |
pszemraj/booksum-short | 2023-02-27T08:45:01.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"source_datasets:kmfoda/booksum",
"language:en",
"license:bsd-3-clause",
"booksum",
"long-document",
"region:us"
] | pszemraj | null | null | 1 | 22 | 2022-11-23T16:40:45 | ---
source_datasets: kmfoda/booksum
license:
- bsd-3-clause
train-eval-index:
- config: pszemraj--booksum_short
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
chapter: text
summary_text: target
task_categories:
- summarization
- text2text-generation
language:
- en
tags:
- booksum
- long-document
size_categories:
- 10K<n<100K
---
# booksum short
`BookSum`, but with all examples whose summaries are longer than 512 `long-t5` tokens filtered out.
The columns `chapter_length` and `summary_length` **in this dataset** have been updated to reflect the total number of Long-T5 tokens in the respective text.
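A rough sketch of how the filter could be reproduced from the source dataset; the exact Long-T5 checkpoint used for token counting is an assumption.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed Long-T5 checkpoint for token counting.
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
booksum = load_dataset("kmfoda/booksum")

def summary_len(example):
    return len(tokenizer(example["summary_text"]).input_ids)

booksum_short = booksum.filter(lambda ex: summary_len(ex) <= 512)
print({split: ds.num_rows for split, ds in booksum_short.items()})
```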
## Token Length Distribution for inputs
 | 750 | [
[
-0.0316162109375,
-0.0092620849609375,
0.02642822265625,
0.004589080810546875,
-0.0772705078125,
0.0176239013671875,
-0.014251708984375,
-0.03662109375,
0.0399169921875,
0.059814453125,
-0.043975830078125,
-0.07049560546875,
-0.0595703125,
0.0333251953125,
... |