id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
lhoestq/custom_squad | 2022-10-25T09:50:53.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
... | lhoestq | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | 0 | 43 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "squad"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
### Dataset Summary
This dataset is a custom copy of the original SQuAD dataset, used to showcase dataset repositories. The data are identical to the original dataset.
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
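Since `answer_start` is a character offset into `context`, the answer span can be recovered by slicing. A minimal sketch (using a made-up record for illustration, since the dummy test strings in the example above are not internally consistent):

```python
# Recover an answer span from a SQuAD-style record.
# This record is a hypothetical illustration, not taken from the dataset.
record = {
    "context": "SQuAD was released by Stanford in 2016.",
    "question": "When was SQuAD released?",
    "answers": {"text": ["2016"], "answer_start": [34]},
}

start = record["answers"]["answer_start"][0]
text = record["answers"]["text"][0]

# The stored offset points at the answer inside the context.
span = record["context"][start : start + len(text)]
assert span == text
```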
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
### Data Splits Sample Size
| name |train|validation|
|----------|----:|---------:|
|plain_text|87599| 10570|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
### Annotations
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. | 5,103 | [
[
-0.04364013671875,
-0.04815673828125,
0.006000518798828125,
0.018310546875,
-0.00728607177734375,
0.01490020751953125,
-0.007503509521484375,
-0.0194244384765625,
0.034515380859375,
0.025421142578125,
-0.083740234375,
-0.05474853515625,
-0.0273590087890625,
... |
eugenetanjc/speech_accent_general | 2022-06-24T03:36:14.000Z | [
"region:us"
] | eugenetanjc | null | null | 0 | 43 | 2022-06-24T03:35:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
LanceaKing/asvspoof2019 | 2022-11-11T08:41:54.000Z | [
"task_categories:audio-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|vctk",
"language:en",
"license:odc-by",
"voice-anti-spoofing",
"arxiv:1911.01601",
"region:us"
] | LanceaKing | This is a database used for the Third Automatic Speaker Verification Spoofing
and Countermeasures Challenge, for short, ASVspoof 2019 (http://www.asvspoof.org)
organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor
Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,
and Andreas Nautsch in 2019. | @InProceedings{Todisco2019,
Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection},
Author = {Todisco, Massimiliano and
Wang, Xin and
Sahidullah, Md and
             Delgado, H{\'e}ctor and
Nautsch, Andreas and
Yamagishi, Junichi and
Evans, Nicholas and
Kinnunen, Tomi and
Lee, Kong Aik},
booktitle = {Proc. of Interspeech 2019},
Year = {2019}
} | 0 | 43 | 2022-07-20T08:29:40 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- odc-by
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|vctk
task_categories:
- audio-classification
task_ids: []
pretty_name: asvspoof2019
tags:
- voice-anti-spoofing
---
# Dataset Card for asvspoof2019
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://datashare.ed.ac.uk/handle/10283/3336
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.01601
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a database used for the Third Automatic Speaker Verification Spoofing
and Countermeasures Challenge, for short, ASVspoof 2019 (http://www.asvspoof.org)
organized by Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, Héctor
Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman,
and Andreas Nautsch in 2019.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
```
{'speaker_id': 'LA_0091',
'audio_file_name': 'LA_T_8529430',
'audio': {'path': 'D:/Users/80304531/.cache/huggingface/datasets/downloads/extracted/8cabb6d5c283b0ed94b2219a8d459fea8e972ce098ef14d8e5a97b181f850502/LA/ASVspoof2019_LA_train/flac/LA_T_8529430.flac',
'array': array([-0.00201416, -0.00234985, -0.0022583 , ..., 0.01309204,
0.01339722, 0.01461792], dtype=float32),
'sampling_rate': 16000},
'system_id': 'A01',
'key': 1}
```
### Data Fields
Logical access (LA):
- `speaker_id`: `LA_****`, a 4-digit speaker ID
- `audio_file_name`: name of the audio file
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `system_id`: ID of the speech spoofing system (A01 - A19); for bonafide speech, `system_id` is left blank ('-')
- `key`: 'bonafide' for genuine speech, or 'spoof' for spoofed speech
Physical access (PA):
- `speaker_id`: `PA_****`, a 4-digit speaker ID
- `audio_file_name`: name of the audio file
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `environment_id`: a triplet (S, R, D_s), each element of which takes one letter in the set {a,b,c} as a categorical value, defined as
| | a | b | c |
| -------------------------------- | ------ | ------- | -------- |
| S: Room size (square meters) | 2-5 | 5-10 | 10-20 |
| R: T60 (ms) | 50-200 | 200-600 | 600-1000 |
| D_s: Talker-to-ASV distance (cm) | 10-50 | 50-100 | 100-150 |
- `attack_id`: a pair (D_a, Q), each element of which takes one letter in the set {A,B,C} as a categorical value, defined as
| | A | B | C |
| ----------------------------------- | ------- | ------ | ----- |
| D_a: Attacker-to-talker distance (cm) | 10-50 | 50-100 | > 100 |
| Q: Replay device quality | perfect | high | low |
For bonafide speech, `attack_id` is left blank ('-').
- `key`: 'bonafide' for genuine speech, or 'spoof' for spoofed speech
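The categorical letters in `environment_id` and `attack_id` map to the ranges given in the tables above. A small decoding sketch (the helper names and return keys here are my own, not part of the dataset or any library API):

```python
# Decode ASVspoof 2019 PA condition codes into the ranges listed in the
# tables above. Helper names are illustrative, not dataset API.
ENV_RANGES = {
    "S": {"a": "2-5 m^2", "b": "5-10 m^2", "c": "10-20 m^2"},
    "R": {"a": "50-200 ms", "b": "200-600 ms", "c": "600-1000 ms"},
    "D_s": {"a": "10-50 cm", "b": "50-100 cm", "c": "100-150 cm"},
}
ATTACK_RANGES = {
    "D_a": {"A": "10-50 cm", "B": "50-100 cm", "C": "> 100 cm"},
    "Q": {"A": "perfect", "B": "high", "C": "low"},
}

def decode_environment(env_id):
    """Map a 3-letter environment_id like 'bbc' to named ranges."""
    s, r, d_s = env_id
    return {"room_size": ENV_RANGES["S"][s],
            "t60": ENV_RANGES["R"][r],
            "talker_to_asv": ENV_RANGES["D_s"][d_s]}

def decode_attack(attack_id):
    """Map a 2-letter attack_id like 'BA' to named values ('-' = bonafide)."""
    if attack_id == "-":
        return None
    d_a, q = attack_id
    return {"attacker_to_talker": ATTACK_RANGES["D_a"][d_a],
            "replay_quality": ATTACK_RANGES["Q"][q]}

print(decode_environment("bbc"))
print(decode_attack("BA"))
```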
### Data Splits
| | Training set | Development set | Evaluation set |
| -------- | ------------ | --------------- | -------------- |
| Bonafide | 2580 | 2548 | 7355 |
| Spoof | 22800 | 22296 | 63882 |
| Total | 25380 | 24844 | 71237 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
This ASVspoof 2019 dataset is made available under the Open Data Commons Attribution License: http://opendatacommons.org/licenses/by/1.0/
### Citation Information
```
@InProceedings{Todisco2019,
Title = {{ASV}spoof 2019: {F}uture {H}orizons in {S}poofed and {F}ake {A}udio {D}etection},
Author = {Todisco, Massimiliano and
Wang, Xin and
Sahidullah, Md and
             Delgado, H{\'e}ctor and
Nautsch, Andreas and
Yamagishi, Junichi and
Evans, Nicholas and
Kinnunen, Tomi and
Lee, Kong Aik},
booktitle = {Proc. of Interspeech 2019},
Year = {2019}
}
```
| 6,819 | [
[
-0.038299560546875,
-0.048828125,
-0.00722503662109375,
0.0294036865234375,
-0.0177001953125,
-0.0013284683227539062,
-0.0258026123046875,
-0.022552490234375,
0.047576904296875,
0.0301361083984375,
-0.05194091796875,
-0.062225341796875,
-0.047515869140625,
0... |
biglam/nls_chapbook_illustrations | 2023-02-15T16:11:54.000Z | [
"task_categories:object-detection",
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"license:other",
"lam",
"historic",
"arxiv:1405.0312",
"region:us"
] | biglam | null | @inproceedings{10.1145/3476887.3476893,
author = {Dutta, Abhishek and Bergel, Giles and Zisserman, Andrew},
title = {Visual Analysis of Chapbooks Printed in Scotland},
year = {2021},
isbn = {9781450386906},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3476887.3476893},
doi = {10.1145/3476887.3476893},
abstract = {Chapbooks were short, cheap printed booklets produced in large quantities in Scotland, England, Ireland, North America and much of Europe between roughly the seventeenth and nineteenth centuries. A form of popular literature containing songs, stories, poems, games, riddles, religious writings and other content designed to appeal to a wide readership, they were frequently illustrated, particularly on their title-pages. This paper describes the visual analysis of such chapbook illustrations. We automatically extract all the illustrations contained in the National Library of Scotland Chapbooks Printed in Scotland dataset, and create a visual search engine to search this dataset using full or part-illustrations as queries. We also cluster these illustrations based on their visual content, and provide keyword-based search of the metadata associated with each publication. The visual search; clustering of illustrations based on visual content; and metadata search features enable researchers to forensically analyse the chapbooks dataset and to discover unnoticed relationships between its elements. We release all annotations and software tools described in this paper to enable reproduction of the results presented and to allow extension of the methodology described to datasets of a similar nature.},
booktitle = {The 6th International Workshop on Historical Document Imaging and Processing},
pages = {67–72},
numpages = {6},
keywords = {illustration detection, chapbooks, image search, visual grouping, printing, digital scholarship, illustration dataset},
location = {Lausanne, Switzerland},
series = {HIP '21}
} | 7 | 43 | 2022-07-23T21:05:40 | ---
annotations_creators:
- expert-generated
language_creators: []
license:
- other
multilinguality: []
pretty_name: National Library of Scotland Chapbook Illustrations
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- lam
- historic
task_categories:
- object-detection
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for National Library of Scotland Chapbook Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/
- **Repository:** https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/
- **Paper:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf
- **Leaderboard:**
- **Point of Contact:** giles.bergel@eng.ox.ac.uk
### Dataset Summary
This dataset comprises images from chapbooks held by the [National Library of Scotland](https://www.nls.uk/) and digitised and published as its [Chapbooks Printed in Scotland](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/) dataset.
> "Chapbooks were staple everyday reading material from the end of the 17th to the later 19th century. They were usually printed on a single sheet and then folded into books of 8, 12, 16 and 24 pages, and they were often illustrated with crude woodcuts. Their subjects range from news courtship, humour, occupations, fairy tales, apparitions, war, politics, crime, executions, historical figures, transvestites [*sic*] and freemasonry to religion and, of course, poetry. It has been estimated that around two thirds of chapbooks contain songs and poems, often under the title garlands." -[Source](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/)
Chapbooks were frequently illustrated, particularly on their title pages to attract customers, usually with a woodblock-printed illustration, or occasionally with a stereotyped woodcut or cast metal ornament. Apart from their artistic interest, these illustrations can also provide historical evidence such as the date, place or persons behind the publication of an item.
This dataset contains annotations for a subset of these chapbooks, created by Giles Bergel and Abhishek Dutta, based in the [Visual Geometry Group](https://www.robots.ox.ac.uk/~vgg/) in the University of Oxford. They were created under a National Librarian of Scotland's Fellowship in Digital Scholarship [awarded](https://data.nls.uk/projects/the-national-librarians-research-fellowship-in-digital-scholarship/) to Giles Bergel in 2020. These annotations provide bounding boxes around illustrations printed on a subset of the chapbook pages, created using a combination of manual annotation and machine classification, described in [this paper](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf).
The dataset also includes computationally inferred 'visual groupings' to which illustrated chapbook pages may belong. These groupings are based on the recurrence of illustrations on chapbook pages, as determined through the use of the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/).
### Supported Tasks and Leaderboards
- `object-detection`: the dataset contains bounding boxes for images contained in the Chapbooks
- `image-classification`: a configuration for this dataset provides a classification label indicating if a page contains an illustration or not.
- `image-matching`: a configuration for this dataset contains the annotations sorted into clusters or 'visual groupings' of illustrations that contain visually-matching content as determined by using the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/).
The performance on the `object-detection` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
| IOU threshold | Precision | Recall |
|---------------|-----------|--------|
| 0.50 | 0.993 | 0.911 |
| 0.75 | 0.987 | 0.905 |
| 0.95 | 0.973 | 0.892 |
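The precision and recall at each IOU threshold can be combined into an F1 score. F1 is not reported in the paper; the figures below are computed here purely as an illustration:

```python
# Combine the precision/recall figures from the table above into F1 scores.
# F1 is not reported in the paper; this is purely illustrative.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

reported = {0.50: (0.993, 0.911), 0.75: (0.987, 0.905), 0.95: (0.973, 0.892)}
for iou, (p, r) in reported.items():
    print(f"IOU {iou:.2f}: F1 = {f1(p, r):.3f}")
```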
The performance on the `image-classification` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
- Images in the original dataset: 47329
- Images in which at least one illustration was detected: 3629
Note that these figures do not account for images that contained multiple detections.
See the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for examples of false-positive detections.
The performance on the 'image-matching' task is undergoing evaluation.
### Languages
Text accompanying the illustrations is in English, Scots or Scottish Gaelic.
## Dataset Structure
### Data Instances
An example instance from the `illustration-detection` split:
```python
{'image_id': 4,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'width': 600,
'height': 1080,
'objects': [{'category_id': 0,
'image_id': '4',
'id': 1,
'area': 110901,
'bbox': [34.529998779296875,
556.8300170898438,
401.44000244140625,
276.260009765625],
'segmentation': [[34.529998779296875,
556.8300170898438,
435.9700012207031,
556.8300170898438,
435.9700012207031,
833.0900268554688,
34.529998779296875,
833.0900268554688]],
'iscrowd': False}]}
```
An example instance from the `image-classification` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'label': 1}
```
An example from the `image-matching` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'group-label': 231}
```
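The `bbox` field is in COCO `[x, y, width, height]` format, and the `segmentation` polygon in the detection example above is the same rectangle expressed as corner points. A quick sketch of the conversion, checked against that example:

```python
# Convert a COCO [x, y, w, h] bbox to a rectangular polygon and check it
# against the illustration-detection example record above.
bbox = [34.529998779296875, 556.8300170898438,
        401.44000244140625, 276.260009765625]

x, y, w, h = bbox
polygon = [x, y, x + w, y, x + w, y + h, x, y + h]

expected = [34.529998779296875, 556.8300170898438,
            435.9700012207031, 556.8300170898438,
            435.9700012207031, 833.0900268554688,
            34.529998779296875, 833.0900268554688]
assert all(abs(a - b) < 1e-3 for a, b in zip(polygon, expected))

# The COCO 'area' field is width * height (~110901 in the example).
assert abs(w * h - 110901) < 2
```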
### Data Fields
The fields for the `illustration-detection` config:
- image_id: id for the image
- height: height of the image
- width: width of the image
- image: image of the chapbook page
- objects: annotations in COCO format, consisting of a list containing dictionaries with the following keys:
- bbox: bounding boxes for the images
- category_id: a label for the image
- image_id: id for the image
- iscrowd: COCO is a crowd flag
- segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
The fields for the `image-classification` config:
- image: image
- label: a label indicating if the page contains an illustration or not
The fields for the `image-matching` config:
- image: image of the chapbook page
- group-label: an id for a particular instance of an image, i.e. the same images will share the same id.
### Data Splits
There is a single split `train` for all configs. K-fold validation was used in the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) describing this dataset, so no existing splits were defined.
## Dataset Creation
### Curation Rationale
The dataset was created to facilitate research into Scottish chapbook illustration and publishing. Detected illustrations can be browsed under publication metadata: together with the use of [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/), this allows researchers to identify matching imagery and to infer the source of a chapbook from partial evidence. This browse and search functionality is available in this [public demo](http://meru.robots.ox.ac.uk/nls_chapbooks/filelist) documented [here](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/).
### Source Data
#### Initial Data Collection and Normalization
The initial data was taken from the [National Library of Scotland's Chapbooks Printed in Scotland dataset](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/). No normalisation was performed; only the images and a subset of the metadata were used. OCR text was not used.
#### Who are the source language producers?
The initial dataset was created by the National Library of Scotland from scans and in-house curated catalogue descriptions for the NLS [Data Foundry](https://data.nls.uk) under the direction of Dr. Sarah Ames.
This subset of the data was created by Dr. Giles Bergel and Dr. Abhishek Dutta using a combination of manual annotation and machine classification, described below.
### Annotations
#### Annotation process
Annotation was initially performed on a subset of 337 of the 47329 images, using the [VGG List Annotator (LISA)](https://gitlab.com/vgg/lisa) software. Detected illustrations, displayed as annotations in LISA, were reviewed and refined in a number of passes (see [this paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for more details). Initial detections were performed with an [EfficientDet](https://ai.googleblog.com/2020/04/efficientdet-towards-scalable-and.html) object detector trained on [COCO](https://cocodataset.org/#home), the annotation of which is described in [this paper](https://arxiv.org/abs/1405.0312).
#### Who are the annotators?
Abhishek Dutta created the initial 337 annotations for retraining the EfficientDet model. Detections were reviewed and in some cases revised by Giles Bergel.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
We believe this dataset will assist in the training and benchmarking of illustration detectors. It is hoped that by automating a task that would otherwise require manual annotation it will save researchers time and labour in preparing data for both machine and human analysis. The dataset in question is based on a category of popular literature that reflected the learning, tastes and cultural faculties of both its large audiences and its largely-unknown creators - we hope that its use, reuse and adaptation will highlight the importance of cheap chapbooks in the spread of literature, knowledge and entertainment in both urban and rural regions of Scotland and the United Kingdom during this period.
### Discussion of Biases
While the original Chapbooks Printed in Scotland dataset is the largest single collection of digitised chapbooks, it is as yet unknown whether it is fully representative of all chapbooks printed in Scotland, or of cheap printed literature in general. It is known that a small number of chapbooks (less than 0.1%) within the original collection were not printed in Scotland, but this is not expected to have a significant impact on the profile of the collection as a representation of the population of chapbooks as a whole.
The definition of an illustration as opposed to an ornament or other non-textual printed feature is somewhat arbitrary: edge-cases were evaluated by conformance with features that are most characteristic of the chapbook genre as a whole in terms of content, style or placement on the page.
As there is no consensus definition of the chapbook even among domain specialists, the composition of the original dataset is based on the judgement of those who assembled and curated the original collection.
### Other Known Limitations
Within this dataset, illustrations are repeatedly reused to an unusually high degree compared to other printed forms. The positioning of illustrations on the page and the size and format of chapbooks as a whole is also characteristic of the chapbook format in particular. The extent to which these annotations may be generalised to other printed works is under evaluation: initial results have been promising for other letterpress illustrations surrounded by texts.
## Additional Information
### Dataset Curators
- Giles Bergel
- Abhishek Dutta
### Licensing Information
In accordance with the [original data](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/), this dataset is in the public domain.
### Citation Information
```bibtex
@inproceedings{10.1145/3476887.3476893,
author = {Dutta, Abhishek and Bergel, Giles and Zisserman, Andrew},
title = {Visual Analysis of Chapbooks Printed in Scotland},
year = {2021},
isbn = {9781450386906},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3476887.3476893},
doi = {10.1145/3476887.3476893},
abstract = {Chapbooks were short, cheap printed booklets produced in large quantities in Scotland, England, Ireland, North America and much of Europe between roughly the seventeenth and nineteenth centuries. A form of popular literature containing songs, stories, poems, games, riddles, religious writings and other content designed to appeal to a wide readership, they were frequently illustrated, particularly on their title-pages. This paper describes the visual analysis of such chapbook illustrations. We automatically extract all the illustrations contained in the National Library of Scotland Chapbooks Printed in Scotland dataset, and create a visual search engine to search this dataset using full or part-illustrations as queries. We also cluster these illustrations based on their visual content, and provide keyword-based search of the metadata associated with each publication. The visual search; clustering of illustrations based on visual content; and metadata search features enable researchers to forensically analyse the chapbooks dataset and to discover unnoticed relationships between its elements. We release all annotations and software tools described in this paper to enable reproduction of the results presented and to allow extension of the methodology described to datasets of a similar nature.},
booktitle = {The 6th International Workshop on Historical Document Imaging and Processing},
pages = {67–72},
numpages = {6},
keywords = {illustration detection, chapbooks, image search, visual grouping, printing, digital scholarship, illustration dataset},
location = {Lausanne, Switzerland},
series = {HIP '21}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) and Giles Bergel for adding this dataset.
tner/wikineural | 2022-09-27T19:46:37.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"region:us"
] | tner | [wikineural](https://aclanthology.org/2021.findings-emnlp.215/) | @inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
author = "Tedeschi, Simone and
Maiorca, Valentino and
Campolungo, Niccol{\`o} and
Cecconi, Francesco and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.215",
doi = "10.18653/v1/2021.findings-emnlp.215",
pages = "2521--2533",
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
} | 4 | 43 | 2022-09-27T17:56:40 | ---
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: WikiNeural
---
# Dataset Card for "tner/wikineural"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/2021.findings-emnlp.215/](https://aclanthology.org/2021.findings-emnlp.215/)
- **Dataset:** WikiNeural
- **Domain:** Wikipedia
- **Number of Entity Types:** 16
### Dataset Summary
WikiNeural NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`
## Dataset Structure
### Data Instances
An example from the `de` portion of the `train` split looks as follows.
```
{
'tokens': [ "Dieses", "wiederum", "basierte", "auf", "dem", "gleichnamigen", "Roman", "von", "Noël", "Calef", "." ],
'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0 ]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/wikineural/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-LOC": 3,
"I-LOC": 4,
"B-ORG": 5,
"I-ORG": 6,
"B-ANIM": 7,
"I-ANIM": 8,
"B-BIO": 9,
"I-BIO": 10,
"B-CEL": 11,
"I-CEL": 12,
"B-DIS": 13,
"I-DIS": 14,
"B-EVE": 15,
"I-EVE": 16,
"B-FOOD": 17,
"I-FOOD": 18,
"B-INST": 19,
"I-INST": 20,
"B-MEDIA": 21,
"I-MEDIA": 22,
"B-PLANT": 23,
"I-PLANT": 24,
"B-MYTH": 25,
"I-MYTH": 26,
"B-TIME": 27,
"I-TIME": 28,
"B-VEHI": 29,
"I-VEHI": 30,
"B-MISC": 31,
"I-MISC": 32
}
```
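As a quick sanity check, the mapping can be inverted to decode the tags of the `de` example above; a minimal sketch in plain Python, using only the subset of labels that example needs:

```python
# Minimal sketch: invert the label2id mapping above and decode the integer
# tags of the German `train` example back into IOB2 labels.
# Only the label ids used by that example are included here.
label2id = {"O": 0, "B-PER": 1, "I-PER": 2}
id2label = {v: k for k, v in label2id.items()}

tokens = ["Dieses", "wiederum", "basierte", "auf", "dem", "gleichnamigen",
          "Roman", "von", "Noël", "Calef", "."]
tags = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0]

labels = [id2label[t] for t in tags]
print(list(zip(tokens, labels)))  # "Noël Calef" is tagged B-PER / I-PER
```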
### Data Splits
| language | train | validation | test |
|:-----------|--------:|-------------:|-------:|
| de | 98640 | 12330 | 12372 |
| en | 92720 | 11590 | 11597 |
| es | 76320 | 9540 | 9618 |
| fr | 100800 | 12600 | 12678 |
| it | 88400 | 11050 | 11069 |
| nl | 83680 | 10460 | 10547 |
| pl | 108160 | 13520 | 13585 |
| pt | 80560 | 10070 | 10160 |
| ru | 92320 | 11540 | 11580 |
### Citation Information
```
@inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
author = "Tedeschi, Simone and
Maiorca, Valentino and
Campolungo, Niccol{\`o} and
Cecconi, Francesco and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.215",
doi = "10.18653/v1/2021.findings-emnlp.215",
pages = "2521--2533",
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
}
```
bigbio/pubhealth | 2022-12-22T15:46:21.000Z | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
] | bigbio | A dataset of 11,832 claims for fact- checking, which are related a range of health topics
including biomedical subjects (e.g., infectious diseases, stem cell research), government healthcare policy
(e.g., abortion, mental health, women’s health), and other public health-related stories | @article{kotonya2020explainable,
title={Explainable automated fact-checking for public health claims},
author={Kotonya, Neema and Toni, Francesca},
journal={arXiv preprint arXiv:2010.09926},
year={2020}
} | 0 | 43 | 2022-11-13T22:11:42 |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: PUBHEALTH
homepage: https://github.com/neemakot/Health-Fact-Checking/tree/master/data
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for PUBHEALTH
## Dataset Description
- **Homepage:** https://github.com/neemakot/Health-Fact-Checking/tree/master/data
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
A dataset of 11,832 claims for fact-checking, related to a range of health topics including biomedical subjects (e.g., infectious diseases, stem cell research), government healthcare policy (e.g., abortion, mental health, women’s health), and other public health-related stories.
## Citation Information
```
@article{kotonya2020explainable,
title={Explainable automated fact-checking for public health claims},
author={Kotonya, Neema and Toni, Francesca},
journal={arXiv preprint arXiv:2010.09926},
year={2020}
}
```
sdadas/ppc | 2022-12-29T11:30:31.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:pl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | sdadas | null | null | 0 | 43 | 2022-12-29T10:11:25 | ---
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
pretty_name: Polish Paraphrase Corpus
dataset_info:
features:
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: label
dtype:
class_label:
names:
0: not used
1: exact paraphrases
2: similar sentences
3: non-paraphrases
splits:
- name: train
- name: validation
- name: test
---
# PPC - Polish Paraphrase Corpus
### Dataset Summary
Polish Paraphrase Corpus contains 7000 manually labeled sentence pairs. The dataset was divided into training, validation and test splits. The training part includes 5000 examples, while the other parts contain 1000 examples each. The main purpose of creating such a dataset was to verify how machine learning models perform in the challenging problem of paraphrase identification, where most records contain semantically overlapping parts. Technically, this is a three-class classification task, where each record can be assigned to one of the following categories:
- Exact paraphrases - Sentence pairs that convey exactly the same information. We are interested only in the semantic meaning of the sentence, therefore this category also includes sentences that are semantically identical but, for example, have different emotional emphasis.
- Close paraphrases - Sentence pairs with similar semantic meaning. In this category we include all pairs which contain the same information, but in addition to it there may be other semantically non-overlapping parts. This category also contains context-dependent paraphrases - sentence pairs that may have the same meaning in some contexts but are different in others.
- Non-paraphrases - All other cases, including contradictory sentences and semantically unrelated sentences.
The corpus contains 2911, 1297, and 2792 examples for the above three categories, respectively. The process of annotating the dataset was preceded by an automated generation of candidate pairs, which were then manually labeled. We experimented with two popular techniques of generating possible paraphrases: backtranslation with a set of neural machine translation models and paraphrase mining using a pre-trained multilingual sentence encoder. The extracted sentence pairs are drawn from different data sources: Tatoeba, Polish news articles, Wikipedia and the Polish version of the SICK dataset. Since most of the sentence pairs obtained in this way fell into the first two categories, in order to balance the dataset, some of the examples were manually modified to convey different information. In this way, even negative examples often have high semantic overlap, making this problem difficult for machine learning models.
### Data Instances
Example instance:
```
{
"sentence_A": "Libia: lotnisko w w Trypolisie ostrzelane rakietami.",
"sentence_B": "Jedyne lotnisko w stolicy Libii - Trypolisie zostało w nocy z wtorku na środę ostrzelane rakietami.",
"label": "2"
}
```
### Data Fields
- sentence_A: first sentence text
- sentence_B: second sentence text
- label: label identifier corresponding to one of three categories
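A minimal sketch of resolving the `label` field of the example instance to a category name; the names below follow the summary above ("close paraphrases"), while the YAML metadata uses the slightly different wording "similar sentences" for label 2:

```python
# Map PPC string labels to category names; label "0" is unused.
LABELS = {
    "1": "exact paraphrases",
    "2": "close paraphrases",
    "3": "non-paraphrases",
}

record = {
    "sentence_A": "Libia: lotnisko w w Trypolisie ostrzelane rakietami.",
    "sentence_B": "Jedyne lotnisko w stolicy Libii - Trypolisie zostało w nocy "
                  "z wtorku na środę ostrzelane rakietami.",
    "label": "2",
}
print(LABELS[record["label"]])  # close paraphrases
```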
### Citation Information
```
@inproceedings{9945218,
author={Dadas, S{\l}awomir},
booktitle={2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
title={Training Effective Neural Sentence Encoders from Automatically Mined Paraphrases},
year={2022},
volume={},
number={},
pages={371-378},
doi={10.1109/SMC53654.2022.9945218}
}
```
dream-textures/textures-color-normal-1k | 2023-01-13T21:20:22.000Z | [
"task_categories:image-to-image",
"size_categories:1K<n<10K",
"license:cc0-1.0",
"region:us"
] | dream-textures | null | null | 5 | 43 | 2023-01-13T21:14:42 | ---
dataset_info:
features:
- name: color
dtype: image
- name: normal
dtype: image
splits:
- name: train
num_bytes: 110631687.194
num_examples: 1426
download_size: 111043422
dataset_size: 110631687.194
license: cc0-1.0
task_categories:
- image-to-image
size_categories:
- 1K<n<10K
---
# textures-color-normal-1k
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The `textures-color-normal-1k` dataset is an image dataset of 1000+ color and normal map textures in 512x512 resolution.
The dataset was created for use in image to image tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from [ambientCG](https://ambientcg.com/).
## Dataset Structure
### Data Instances
Each data point contains a 512x512 color texture and the corresponding 512x512 normal map.
### Data Fields
* `color`: the color texture as a PIL image
* `normal`: the normal map as a PIL image
### Data Splits
| | train |
| -- | ----- |
| ambientCG | 1426 |
## Dataset Creation
### Curation Rationale
`textures-color-normal-1k` was created to provide an accessible source of data for automating 3D-asset creation workflows.
The [Dream Textures](https://github.com/carson-katri/dream-textures) add-on is one such tool providing AI automation in Blender.
By training models designed for image to image tasks, this particular use-case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from [ambientCG](https://ambientcg.com/)'s CC0 textures. Only the color and normal maps were included in this dataset.
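The color/normal pairing can be sketched as a filename match; a hypothetical example assuming ambientCG-style suffixes such as `_Color.png` and `_NormalGL.png` (the actual build script for this dataset is not published in this card):

```python
# Hypothetical pairing of color and normal maps by shared material name.
# The suffixes below are an assumption based on ambientCG naming, not a
# documented part of this dataset's build process.
def pair_textures(names):
    colors = {n.removesuffix("_Color.png"): n
              for n in names if n.endswith("_Color.png")}
    normals = {n.removesuffix("_NormalGL.png"): n
               for n in names if n.endswith("_NormalGL.png")}
    # Keep only materials that have both maps.
    return {k: (colors[k], normals[k]) for k in colors.keys() & normals.keys()}

files = ["Bricks001_Color.png", "Bricks001_NormalGL.png", "Wood002_Color.png"]
print(pair_textures(files))  # Wood002 is dropped: it has no normal map
```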
## Additional Information
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by [ambientCG](https://ambientcg.com/).
### Licensing Information
All of the images used in this dataset are CC0.
### Citation Information
[N/A]
### Contributions
Thanks to [@carson-katri](https://github.com/carson-katri) for adding this dataset.
fathyshalab/atis_intents | 2023-01-23T18:25:53.000Z | [
"region:us"
] | fathyshalab | null | null | 0 | 43 | 2023-01-23T18:19:03 | ---
dataset_info:
features:
- name: label text
dtype: string
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 448812
num_examples: 4834
- name: test
num_bytes: 69352
num_examples: 800
download_size: 157677
dataset_size: 518164
---
# Dataset Card for "atis_intents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cQueenccc/Vivian-Blip-Captions | 2023-03-22T17:45:08.000Z | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"size_categories:n<1K",
"language:en",
"region:us"
] | cQueenccc | null | null | 7 | 43 | 2023-03-22T17:11:32 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 171055893.125
num_examples: 1087
download_size: 170841790
dataset_size: 171055893.125
language:
- en
task_categories:
- text-to-image
annotations_creators:
- machine-generated
size_categories:
- n<1K
---
# Disclaimer
This was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
# Dataset Card for A subset of Vivian Maier's photographs BLIP captions
The captions are generated with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `caption` keys. `image` is a varying size PIL jpeg, and `caption` is the accompanying text caption. Only a train split is provided.
## Examples

> A group of people

> person floating in the water

> a person standing next to a refrigerator
## Citation
If you use this dataset, please cite it as:
```
@misc{cqueenccc2023vivian,
author = {cQueenccc},
title = {Vivian Maier's photograph split BLIP captions},
year={2023},
howpublished= {\url{https://huggingface.co/datasets/cQueenccc/Vivian-Blip-Captions/}}
}
```
hackathon-somos-nlp-2023/informes_discriminacion_gitana | 2023-04-11T09:29:14.000Z | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:es",
"license:apache-2.0",
"hate",
"region:us"
] | hackathon-somos-nlp-2023 | null | null | 7 | 43 | 2023-04-04T14:19:40 | ---
dataset_info:
features:
- name: sintetico
dtype: string
- name: text
dtype: string
- name: intervencion
dtype: string
- name: tipo_discriminacion
dtype: string
- name: resultado
dtype: string
splits:
- name: train
num_bytes: 1569183.3
num_examples: 1791
- name: test
num_bytes: 87614.92462311558
num_examples: 100
- name: valid
num_bytes: 86738.77537688443
num_examples: 99
download_size: 936705
dataset_size: 1743537.0000000002
task_categories:
- text-classification
- text2text-generation
language:
- es
tags:
- hate
size_categories:
- 1K<n<10K
license: apache-2.0
---
### Dataset Summary
This is a Spanish-language dataset, extracted from the documentation centre of the Fundación Secretariado Gitano, describing discriminatory situations experienced by the Roma community. Since the goal of the model is to build a system that generates interventions to minimise the impact of a discriminatory situation, the website was scraped and all PDFs containing discrimination cases in the format (FACTS, INTERVENTION, RESULT) were extracted. The scraped data was then cleaned and unified with a preprocessing script so that the whole dataset shares the same format.
### Supported Tasks and Leaderboards
- `task-generation`: Given the facts, generate the intervention and the result label, so as to provide effective intervention strategies. ([PAG-BERT](https://huggingface.co/hackathon-somos-nlp-2023/PAG-BERT))
- `task-classification`: A classification model can be trained to predict the type of discrimination from the facts; we leave this to users.
### Language
The dataset uses the European Spanish variant; the style is formal and objective, limited to describing the facts reported by the affected persons.
## Data Structure
### Data Instances
An example instance from the dataset is shown below:
```
{
'sintetico': '0',
'text': 'Una joven gitana comenzó a trabajar en una tienda de ropa, hace dos años, con contrato indefinido. Al mes de comenzar a trabajar, una compañera le preguntó, en presencia de su encargada, si era gitana, ella respondió que sí; desde entonces el trato de la encargada hacia la joven cambió, comenzó a tirar al suelo perchas, tierra, para luego acusarla de que no limpiaba el suelo, además de hacer continuamente comentarios generalizados refiriéndose a las mujeres gitanas, del tipo “¿Pero te dejan trabajar?” “¿Y estudiar?”, “tú tienes que saber cómo trabajar en la tienda porque como aprendéis en los mercadillos...” La víctima comentó que desde que la encargada se enteró de que era gitana le hizo la vida imposible, se sintió muy humillada. No aguantó más y presentó la baja voluntaria, aun siendo consciente de que perdía su derecho a la prestación por desempleo.',
'intervencion': 'Se entrevistó a la joven. Se comprobó a través del testimonio de la víctima que desde que su encargada se enteró de que es mujer gitana, al mes de comenzar a trabajar aproximadamente, comenzó a sufrir discriminación. Se informó a la víctima del Servicio, del trabajo que realizamos y de sus derechos.\xa0',
'tipo_discriminacion': 'Discriminación directa',
'resultado': 'Negativo.'
}
```
### Data Fields
- `sintetico`: indicates whether the intervention and result data are original, i.e. come from the "Fundación Secretariado Gitano" source (value 0), or were generated synthetically (value 1).
- `text`: the facts as described by the affected person.
- `intervencion`: the measures taken by the Fundación to prevent the facts described in `text` from being repeated.
- `tipo_discriminacion`: label identifying the type of discrimination. Possible values are **Acoso discriminatorio** (discriminatory harassment), **Discriminación directa** (direct discrimination), **Discriminación indirecta** (indirect discrimination), **Discriminación interseccional** (intersectional discrimination), **Discurso de odio** (hate speech), **Orden de discriminar** (order to discriminate) and **Sin especificar** (unspecified).
- `resultado`: the impact of the adopted intervention. Possible values are **Positivo**, **Negativo** and **Neutro**.
### Data Splits
The dataset contains a total of 1990 instances, distributed as follows:
| | train | validation | test |
|-------------------------|----------:|-------------:|----------:|
| Input Sentences | 90% | 5% | 5% |
| Average Sentence Length | 94.71 | 90.94 | 98.07 |
Note that, with respect to the intervention result (positive, negative or neutral), the dataset is not balanced: there are 280 positive, 939 negative and 771 neutral samples. In future updates of the dataset we will work on growing it in a balanced way.
## Dataset Creation
### Curation Rationale
This dataset was created to determine objectively whether the measures currently adopted by the Fundación have had an effect (positive), have had none (negative), or did not prompt the user to take any action (neutral).
It was chosen for the volume of data it contains across different scenarios, and because all cases share the format FACTS, INTERVENTION, RESULT.
### Source Data
The data used to build the model was extracted from the website of the Fundación Secretariado Gitano (<a href="https://informesdiscriminacion.gitanos.org">FSG</a>). The FSG keeps a database of discrimination cases that have been reported to the organisation; these cases were selected to train and evaluate the model.
#### Initial Data Collection and Normalization
The data was extracted from the <a href="https://informesdiscriminacion.gitanos.org/buscar-casos">case search</a> section, which keeps a record of all discrimination cases.
The fields the website offers for these reports are:
* `Hecho`: the act of discrimination.
* `Intervención`: the measures the FSG took to address the problem.
* `Resultado`: a description of the outcome.
* Year the case occurred.
* Year of the report.
* Scope: when the discrimination came from a government body, the fundamental right affected.
* Province: where the act took place.
* Type of discrimination.
During extraction only the fields **facts**, **intervention**, **result** and **type of discrimination** were kept. The language used in the reports is formal.
Originally, a large number of facts had no intervention or result (those fields were empty).
#### Data Cleaning
On the website, the result field contains a brief explanation of the effects obtained after carrying out the intervention. Using the <a href="https://github.com/pysentimiento/pysentimiento">pysentimiento</a> library, each result was classified as negative, neutral or positive; the labels were then reviewed and adjusted manually.
17% of the discrimination cases in the dataset had neither intervention nor result. To complete these fields, few-shot learning was applied with the BLOOM model: given a few examples of **facts**, **intervention** and **result**, new interventions and results could be generated automatically. BLOOM's output was reviewed manually to correct errors.
41% of the texts in the **facts** field were too long to be used with BLOOM under few-shot learning. To solve this, they were summarised: the `segmenter.split_single` function of the <a href="https://github.com/fnl/segtok">segtok</a> library was used to split the text into sentences separated by newline characters.
Two pre-trained models were used to summarise each sub-text: <a href="https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization">mrm8488/bert2bert_shared-spanish-finetuned-summarization</a> and <a href="https://huggingface.co/Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization">Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization</a>.
The original preprocessing scripts can be found at https://github.com/Frorozcoloa/somos_nlp_hackaton; a copy is also available in this repository.
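The sentence-splitting step can be approximated with the standard library alone; a rough stand-in for segtok's `split_single` (this naive regex split is only illustrative and will mishandle abbreviations):

```python
import re

# Naive sentence splitter standing in for segtok's segmenter.split_single:
# splits after '.', '!' or '?' followed by whitespace.
def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

hecho = ("Una joven comenzó a trabajar. Al mes sufrió discriminación. "
         "Presentó la baja voluntaria.")
for sentence in split_sentences(hecho):
    print(sentence)
```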
### Annotations
The annotations performed were verifications of the synthetic data generated with few-shot learning (interventions and results):
* Null values were filled in.
* Some texts (facts) were summarised with pre-trained models.
* The result text was replaced with the labels POS, NEU and NEG.
#### Annotation Process
Argilla was used to label the "Resultado" category with the labels "Positivo", "Negativo" and "Neutro". The goal was to label the result of the interventions so that the model could learn to generate text responding to the situation described by the user, and additionally predict whether the measure proposed by the model would be "positive" (effective), "negative" (no effect) or "neutral" (the user may take no action).
Specifically, after downloading all the data available on the website, we preprocessed and merged it into a single dataset, which was uploaded to Argilla. There, each instance was validated as follows:
* If the intervention and/or result is empty, it is annotated as such.
* The positive/negative/neutral result is checked for correctness; most inconsistencies arise between the positive/neutral and negative/neutral pairs.
Once the dataset was validated with Argilla, the samples annotated as "empty" were completed by applying few-shot learning with the [BLOOM](https://huggingface.co/bigscience/bloom) model.
Note that some facts in the dataset were too long to be processed by BLOOM (it raised an error indicating the maximum number of tokens had been exceeded); to solve this, the models <a href="https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization">mrm8488/bert2bert_shared-spanish-finetuned-summarization</a> and <a href="https://huggingface.co/Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization">Narrativa/bsc_roberta2roberta_shared-spanish-finetuned-mlsum-summarization</a> were used to summarise those facts and reduce their size.
### Personal and Sensitive Information
No anonymisation process was needed, since the data from this source does not contain any information that would violate the rights of those affected.
## Considerations for Using the Data
### Social Impact of Dataset
The social impact of this dataset is to serve as a tool for implementing actions that help combat racism towards the Roma community. It could also be used to evaluate the impact of the measures adopted over a period of time, and to investigate and improve measures with a "negative" or "neutral" impact, with more careful consideration of the Roma population.
Sé realizó un analisís exploratorio de los datos, para eso hemos realizado una nube de palabras para analizar los datos sintéticos y no sintéticos.
#### Datos no sintéticos
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Hechos_normales.png">
Aquí podemos ver que muchos de los hechos se generaron en noticias, en mujeres, temas de vivienda, con la policia y la familia.
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Intervenci%C3%B3n_normal.png">
Las intervenciones hablan de derechos, de cartas, de igualdad, asesorar a la persona y de presentar quejas.
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/etiqueta_normal.png">
Muchos de los resultados de las intervenciones fueron negativos o neutrales (Posiblemente sin respuesta) o de que no se logró lo propuesto (Negativo). Se puede observar el desbalance en los datos.
Por medio de la librería *pysentimiento* y usando el modelo `pysentimiento/pt_hate_speech`, se realizó una métrica para medir el discurso de odio en el `Hecho`.
Para eso análizaremos hateful, targeted y aggressive. La métrica va de 0 a 1, para cada una. Siendo la probabilidad de que esa caracteristica esté en el texto.
Se encotró lo siguiente
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_normal.png">
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_2_normal.png">
La distribución de los valores de hateful, targeted y aggressive presentan una cola alargada hacia la derecha, lo que indica que hay pocos casos en los que se detecta un mensaje de odio en los hechos.
Para el caso, donde no se generó la intervección y resultado se presenta un crecimiento en el tercer cuartil, esto quiere decir que hay mensajes que muestra un discurso de odio. Por ejemplo el hateful es de 0.4, targeted de 0.02 y aggresive de 0.03. En conclusión, como está escrito el hecho y como fue entrenado el modelo de *pysentimiento*, en general los hechos no tienen un mensaje de odio.
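The quartile figures cited in this section can be reproduced with the standard library alone; a minimal sketch, using illustrative scores rather than the dataset's real model outputs:

```python
import statistics

# Illustrative hateful scores in [0, 1]; the real values come from the
# pysentimiento/pt_hate_speech model and are not reproduced here.
hateful = [0.01, 0.02, 0.02, 0.03, 0.05, 0.10, 0.30, 0.40]

# statistics.quantiles with n=4 returns the three quartile cut points.
q1, median, q3 = statistics.quantiles(hateful, n=4)
print(f"median={median:.3f} Q3={q3:.3f} mean={statistics.mean(hateful):.3f}")
```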
#### Datos sintéticos.
Se realizó el mismo análisis para los datos sintéticos
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Hechos_sinteticos.png"/>
It is worth noting that the incident (`Hecho`) field itself was not synthetically generated.
The dataset is clearly more biased toward containing the words gitano, gitana, comunidad gitana, etnia gitana, familia, and discriminación.
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Intervenci%C3%B3n_sintetica.png"/>
This part was generated with the *BLOOM* model. With *few-shot* prompting it mostly manages to capture the word `derecho` (right).
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/Etiquetas%20sinteticas.png">
There is also an imbalance in the generated labels.
As before, using the *pysentimiento* library with the `pysentimiento/pt_hate_speech` model, we computed a metric to measure hate speech in the `Hecho` field, analyzing the hateful, targeted, and aggressive scores (each from 0 to 1, the probability that the characteristic is present in the text).
We found the following:
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_sintetico.png">
<img src="https://huggingface.co/datasets/hackathon-somos-nlp-2023/informes_discriminacion_gitana/resolve/main/imagenes/hate_2_sintetico.png">
The distributions of the hateful, targeted, and aggressive scores again show a long right tail, indicating that a hate message is detected in only a few of the incidents.
Both the median and the mean of the hateful, targeted, and aggressive scores are very close to zero, indicating that most of the incidents contain no hate message. Moreover, at the third quartile (75% of the data) the hateful score is 0.3, targeted is 0.0089, and aggressive is 0.06, which reinforces the conclusion that most incident descriptions do not carry a hate message.
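The summary statistics discussed here (mean, median, and third quartile of each score) can be reproduced with the standard library alone. The score lists below are illustrative placeholders, not values from the dataset:

```python
import statistics

# Illustrative hate-speech probability scores per metric
# (hypothetical values, not taken from the actual dataset).
scores = {
    "hateful":    [0.01, 0.02, 0.05, 0.10, 0.30, 0.65],
    "targeted":   [0.001, 0.002, 0.004, 0.006, 0.0089, 0.02],
    "aggressive": [0.01, 0.02, 0.03, 0.04, 0.06, 0.12],
}

for metric, values in scores.items():
    mean = statistics.mean(values)
    median = statistics.median(values)
    # quantiles(n=4) returns the three quartile cut points; index 2 is Q3.
    q3 = statistics.quantiles(values, n=4)[2]
    print(f"{metric}: mean={mean:.3f} median={median:.3f} q3={q3:.3f}")
```

A long right tail shows up exactly as described above: the median stays near zero while Q3 and the mean are pulled up by the few high-scoring incidents.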
## Additional information
### Dataset curators
* <a href="https://www.linkedin.com/in/frorozcol/">Fredy Orozco</a>
* <a href="https://www.linkedin.com/in/mariajesusgs">María Jesús García</a>
* <a href="https://www.linkedin.com/in/ramonruedadelgado/">Ramón Rueda</a> | 16,774 | [
[
-0.035736083984375,
-0.04937744140625,
0.01214599609375,
0.0270843505859375,
-0.0307464599609375,
-0.01029205322265625,
-0.01177215576171875,
-0.036407470703125,
0.031890869140625,
0.0184478759765625,
-0.03887939453125,
-0.0645751953125,
-0.047393798828125,
... |
437aewuh/dog-dataset | 2023-04-18T13:18:25.000Z | [
"task_categories:audio-to-audio",
"task_categories:audio-classification",
"size_categories:n<1K",
"license:other",
"biology",
"region:us"
] | 437aewuh | null | null | 0 | 43 | 2023-04-18T13:01:04 | ---
license: other
task_categories:
- audio-to-audio
- audio-classification
tags:
- biology
size_categories:
- n<1K
---
This dataset is a redistribution of the following dataset.
https://github.com/suzuki256/dog-dataset
```
The dataset and its contents are made available on an "as is" basis and without warranties of any kind, including without limitation satisfactory quality and conformity, merchantability, fitness for a particular purpose, accuracy or completeness, or absence of errors.
```
| 499 | [
[
-0.0263671875,
-0.0186614990234375,
0.02606201171875,
0.0350341796875,
-0.032440185546875,
0.0000928044319152832,
-0.00498199462890625,
-0.0164337158203125,
0.04132080078125,
0.05181884765625,
-0.06378173828125,
-0.038482666015625,
-0.0239410400390625,
-0.00... |
philschmid/chip2_en_code | 2023-05-01T12:30:50.000Z | [
"region:us"
] | philschmid | null | null | 0 | 43 | 2023-05-01T12:28:08 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1750565
num_examples: 3300
download_size: 517363
dataset_size: 1750565
---
# Dataset Card for "chip2_en_code"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 427 | [
[
-0.03179931640625,
0.0005736351013183594,
0.0119476318359375,
0.019500732421875,
-0.021575927734375,
0.0108642578125,
0.0172882080078125,
-0.0232086181640625,
0.04876708984375,
0.0265655517578125,
-0.0440673828125,
-0.054412841796875,
-0.0367431640625,
-0.02... |
Abrumu/Fashion_controlnet_dataset_V3 | 2023-05-19T09:44:48.000Z | [
"region:us"
] | Abrumu | null | null | 10 | 43 | 2023-05-18T17:04:45 | ---
dataset_info:
features:
- name: target
dtype: image
- name: mask
dtype: image
- name: cloth
dtype: image
- name: control
dtype: image
- name: prompt
dtype: string
- name: CLIP_captions
dtype: string
splits:
- name: train
num_bytes: 7964862365.0
num_examples: 11647
download_size: 7944023014
dataset_size: 7964862365.0
---
# Dataset Card for "Fashion_controlnet_dataset_V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 563 | [
[
-0.0228729248046875,
-0.0014123916625976562,
0.0013723373413085938,
0.0276947021484375,
-0.017822265625,
-0.0052490234375,
0.043548583984375,
-0.02410888671875,
0.057373046875,
0.036773681640625,
-0.07452392578125,
-0.052978515625,
-0.0280914306640625,
-0.01... |
kunishou/cnn-dailymail-27k-ja | 2023-05-19T04:37:02.000Z | [
"license:mit",
"region:us"
] | kunishou | null | null | 5 | 43 | 2023-05-18T21:08:49 | ---
license: mit
---
This dataset was created by automatically translating part of "cnn_dailymail" into Japanese.
cnn_dailymail repository
https://github.com/abisee/cnn-dailymail
cnn_dailymail
https://huggingface.co/datasets/cnn_dailymail | 244 | [
[
-0.0239715576171875,
-0.0389404296875,
0.01529693603515625,
0.019195556640625,
-0.028106689453125,
0.004215240478515625,
-0.005596160888671875,
-0.027618408203125,
0.042022705078125,
0.0634765625,
-0.0762939453125,
-0.057586669921875,
-0.04608154296875,
0.01... |
clarin-knext/nfcorpus-pl-qrels | 2023-06-07T08:10:48.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 43 | 2023-06-06T22:44:12 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 201 | [
[
-0.01541900634765625,
-0.0628662109375,
0.035400390625,
0.016387939453125,
-0.02215576171875,
-0.0103912353515625,
-0.0115966796875,
-0.034515380859375,
-0.0013093948364257812,
0.028656005859375,
-0.03826904296875,
-0.048126220703125,
-0.02899169921875,
-0.0... |
takaaki-inada/databricks-dolly-15k-ja-zundamon | 2023-06-17T10:41:52.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | takaaki-inada | null | null | 2 | 43 | 2023-06-17T10:35:48 | ---
license: cc-by-sa-3.0
---
This dataset was based on "kunishou/databricks-dolly-15k-ja".
This dataset is licensed under CC BY SA 3.0
Last Update : 2023-05-11
databricks-dolly-15k-ja
https://github.com/kunishou/databricks-dolly-15k-ja
databricks-dolly-15k
https://github.com/databrickslabs/dolly/tree/master/data
| 324 | [
[
-0.003688812255859375,
-0.019561767578125,
0.01457977294921875,
0.045654296875,
-0.0206451416015625,
-0.004848480224609375,
0.02862548828125,
-0.01110076904296875,
0.029266357421875,
0.055633544921875,
-0.0758056640625,
-0.030059814453125,
-0.022705078125,
0... |
TrainingDataPro/speech-emotion-recognition-dataset | 2023-09-19T19:34:11.000Z | [
"task_categories:audio-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"legal",
"region:us"
] | TrainingDataPro | The audio dataset consists of a collection of texts spoken with four distinct
emotions. These texts are spoken in English and represent four different
emotional states: **euphoria, joy, sadness and surprise**.
Each audio clip captures the tone, intonation, and nuances of speech as
individuals convey their emotions through their voice.
The dataset includes a diverse range of speakers, ensuring variability in age,
gender, and cultural backgrounds, allowing for a more comprehensive
representation of the emotional spectrum.
The dataset is labeled and organized based on the emotion expressed in each
audio sample, making it a valuable resource for emotion recognition and
analysis. Researchers and developers can utilize this dataset to train and
evaluate machine learning models and algorithms, aiming to accurately
recognize and classify emotions in speech. | @InProceedings{huggingface:dataset,
title = {speech-emotion-recognition-dataset},
author = {TrainingDataPro},
year = {2023}
} | 1 | 43 | 2023-07-13T12:46:41 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- audio-classification
tags:
- code
- legal
dataset_info:
features:
- name: set_id
dtype: string
- name: euphoric
dtype: audio
- name: joyfully
dtype: audio
- name: sad
dtype: audio
- name: surprised
dtype: audio
- name: text
dtype: string
- name: gender
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
splits:
- name: train
num_bytes: 17202
num_examples: 20
download_size: 28409585
dataset_size: 17202
---
# Emotions on Audio Dataset
The audio dataset consists of a collection of texts spoken with four distinct emotions. These texts are spoken in English and represent four different emotional states: **euphoria, joy, sadness and surprise**.
Each audio clip captures the tone, intonation, and nuances of speech as individuals convey their emotions through their voice.
The dataset includes a diverse range of speakers, ensuring variability in *age, gender, and cultural backgrounds*, allowing for a more comprehensive representation of the emotional spectrum.
The dataset is labeled and organized based on the emotion expressed in each audio sample, making it a valuable resource for emotion recognition and analysis. Researchers and developers can utilize this dataset to train and evaluate machine learning models and algorithms, aiming to accurately recognize and classify emotions in speech.
### The audio dataset also provides an opportunity for various applications:
- sentiment analysis
- automatic emotion detection
- emotional speech synthesis
- voice assistants
- customer service
- mental health analysis
- entertainment industries

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=speech-emotion-recognition-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **files**: includes folders corresponding to people and containing text spoken in English in 4 different manners: **euphoric, joyfully, sad and surprised**
- **.csv** file: contains information about people in the dataset
### File with the extension .csv
includes the following information for each set of media files:
- **set_id**: link to the set of audio files,
- **text**: text spoken in the audio set,
- **gender**: gender of the person,
- **age**: age of the person,
- **country**: country of the person
# Audio with emotions might be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=speech-emotion-recognition-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 3,216 | [
[
-0.033416748046875,
-0.0251312255859375,
0.007762908935546875,
0.026611328125,
-0.00565338134765625,
0.006679534912109375,
-0.034912109375,
-0.03253173828125,
0.0221099853515625,
0.023162841796875,
-0.061126708984375,
-0.06915283203125,
-0.0382080078125,
0.0... |
Andyrasika/question_answer | 2023-07-26T16:10:07.000Z | [
"region:us"
] | Andyrasika | null | null | 1 | 43 | 2023-07-26T16:09:32 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
roszcz/giant-midi-sustain | 2023-08-15T18:55:06.000Z | [
"region:us"
] | roszcz | null | null | 0 | 43 | 2023-08-15T18:53:13 | ---
dataset_info:
features:
- name: notes
struct:
- name: duration
sequence: float64
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: midi_filename
dtype: string
splits:
- name: train
num_bytes: 1548922542
num_examples: 10853
download_size: 483630029
dataset_size: 1548922542
---
# Dataset Card for "giant-midi-sustain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 616 | [
[
-0.049041748046875,
-0.0194244384765625,
0.0216064453125,
0.024993896484375,
-0.0113677978515625,
0.0084381103515625,
-0.0001252889633178711,
-0.01256561279296875,
0.0736083984375,
0.02947998046875,
-0.0615234375,
-0.039581298828125,
-0.028839111328125,
-0.0... |
alexandrainst/nordjylland-news-image-captioning | 2023-09-08T06:41:05.000Z | [
"size_categories:10K<n<100K",
"language:da",
"Image captioning",
"region:us"
] | alexandrainst | null | null | 2 | 43 | 2023-09-05T06:32:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 10341164216.808
num_examples: 11707
download_size: 11002607252
dataset_size: 10341164216.808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- da
tags:
- Image captioning
pretty_name: Nordjylland News - Image caption dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for "nordjylland-news-image-captioning"
## Dataset Description
- **Point of Contact:** [Oliver Kinch](mailto:oliver.kinch@alexandra.dk)
- **Size of dataset:** 11 GB
### Dataset Summary
This dataset is a collection of image-caption pairs from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk/).
### Supported Tasks and Leaderboards
Image captioning is the intended task for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
An example from the dataset looks as follows.
```
{
"file_name": "1.jpg",
"caption": "Bruno Sørensen og Poul Erik Pedersen er ofte at finde i Fyensgade Centret."
}
```
### Data Fields
- `file_name`: a `string` giving the file name of the image.
- `caption`: a `string` feature.
### Dataset Statistics
#### Number of samples
11707
#### Image sizes
All images in the dataset are in RGB format, but they exhibit varying resolutions:
- Width ranges from 73 to 11,830 pixels.
- Height ranges from 38 to 8,268 pixels.
The side length of a square image with the same number of pixels as an image of height \\( h \\) and width \\( w \\) is approximately given as
\\( x = \text{int}(\sqrt{h \cdot w}) \\).
Plotting the distribution of \\( x \\) gives an insight into the sizes of the images in the dataset.

#### Caption Length Distribution

## Potential Dataset Issues
- There are 14 images with the caption "Arkivfoto".
- There are 37 images with captions consisting solely of a source reference, such as "Kilde: \<name of source\>".
You might want to consider excluding these samples from the model training process.
## Dataset Creation
### Curation Rationale
There are not many large-scale image-captioning datasets in Danish.
### Source Data
The dataset has been collected through the TV2 Nord API, which can be accessed [here](https://developer.bazo.dk/#876ab6f9-e057-43e3-897a-1563de34397e).
## Additional Information
### Dataset Curators
[Oliver Kinch](https://huggingface.co/oliverkinch) from the [The Alexandra
Institute](https://alexandra.dk/)
### Licensing Information
The dataset is licensed under the [CC0
license](https://creativecommons.org/share-your-work/public-domain/cc0/).
| 2,989 | [
[
-0.050506591796875,
-0.007717132568359375,
0.008514404296875,
0.01477813720703125,
-0.0657958984375,
-0.0117034912109375,
-0.0179595947265625,
-0.03289794921875,
0.01507568359375,
0.046234130859375,
-0.037384033203125,
-0.051177978515625,
-0.052032470703125,
... |
HumanCompatibleAI/ppo-seals-HalfCheetah-v1 | 2023-09-27T06:57:57.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | 0 | 43 | 2023-09-26T14:41:04 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 92213656
num_examples: 104
download_size: 25621245
dataset_size: 92213656
---
# Dataset Card for "ppo-seals-HalfCheetah-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 549 | [
[
-0.031280517578125,
-0.0088958740234375,
0.01245880126953125,
0.0187835693359375,
-0.032257080078125,
-0.00021588802337646484,
0.0458984375,
-0.01042938232421875,
0.061859130859375,
0.0506591796875,
-0.0572509765625,
-0.04754638671875,
-0.04925537109375,
-0.... |
mmathys/profanity | 2023-09-27T09:01:04.000Z | [
"license:mit",
"region:us"
] | mmathys | null | null | 0 | 43 | 2023-09-27T08:59:08 | ---
license: mit
---
# The Obscenity List
*by [Surge AI, the world's most powerful NLP data labeling platform and workforce](https://www.surgehq.ai)*
Ever wish you had a ready-made list of profanity? Maybe you want to remove NSFW comments, filter offensive usernames, or build content moderation tools, and you can't dream up enough obscenities on your own.
At Surge AI, we help companies build human-powered datasets to train stunning AI and NLP, and we're creating the world's largest profanity list in 20+ languages.
## Dataset
This repo contains 1600+ popular English profanities and their variations.
**Columns**
* `text`: the profanity
* `canonical_form_1`: the profanity's canonical form
* `canonical_form_2`: an additional canonical form, if applicable
* `canonical_form_3`: an additional canonical form, if applicable
* `category_1`: the profanity's primary category (see below for list of categories)
* `category_2`: the profanity's secondary category, if applicable
* `category_3`: the profanity's tertiary category, if applicable
* `severity_rating`: We asked 5 [Surge AI](https://www.surgehq.ai) data labelers to rate how severe they believed each profanity to be, on a 1-3 point scale. This is the mean of those 5 ratings.
* `severity_description`: We rounded `severity_rating` to the nearest integer. `Mild` corresponds to a rounded mean rating of `1`, `Strong` to `2`, and `Severe` to `3`.
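The derivation of `severity_rating` and `severity_description` from the five labeler ratings can be sketched as follows; the helper name and example ratings are illustrative, not taken from the actual labeling pipeline:

```python
import statistics

SEVERITY_NAMES = {1: "Mild", 2: "Strong", 3: "Severe"}

def severity(ratings: list[int]) -> tuple[float, str]:
    """Mean of the five 1-3 point ratings, plus the rounded description."""
    mean = statistics.mean(ratings)
    # Round the mean to the nearest integer to pick the description.
    return mean, SEVERITY_NAMES[round(mean)]

print(severity([3, 3, 2, 3, 3]))  # -> (2.8, 'Severe')
```

Note that Python's `round` uses banker's rounding at exact .5 values, so a sketch like this may differ from the dataset's own tie-breaking rule.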
## Categories
We organized the profanity into the following categories:
- sexual anatomy / sexual acts (ass kisser, dick, pigfucker)
- bodily fluids / excrement (shit, cum)
- sexual orientation / gender (faggot, tranny, bitch, whore)
- racial / ethnic (chink, n3gro)
- mental disability (retard, dumbass)
- physical disability (quadriplegic bitch)
- physical attributes (fatass, ugly whore)
- animal references (pigfucker, jackass)
- religious offense (goddamn)
- political (China virus)
## Future
We'll be adding more languages and profanity annotations (e.g., augmenting each profanity with its severity level, type, and other variations) over time.
Check out our other [free datasets](https://www.surgehq.ai/datasets).
Sign up [here](https://forms.gle/u1SKL4zySK2wMp1r7) to receive updates on this dataset and be the first to learn about new datasets we release!
## Contact
Need a larger set of expletives and slurs, or a list of swear words in other languages (Spanish, French, German, Japanese, Portuguese, etc)? We work with top AI and content moderation companies around the world, and we love feedback. Post an issue or reach out to team@surgehq.ai!

Follow us on Twitter at [@HelloSurgeAI](https://www.twitter.com/@HelloSurgeAI).
## Original Repo
You can find the original repository here: https://github.com/surge-ai/profanity/ | 2,836 | [
[
-0.012847900390625,
-0.05072021484375,
-0.00202178955078125,
0.004398345947265625,
-0.01690673828125,
-0.00010716915130615234,
-0.01299285888671875,
-0.038482666015625,
0.0075836181640625,
0.0458984375,
-0.005252838134765625,
-0.046600341796875,
-0.042724609375,... |
ShrinivasSK/small-hi-kn | 2023-09-30T17:19:03.000Z | [
"region:us"
] | ShrinivasSK | null | null | 0 | 43 | 2023-09-30T15:04:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Mxode/StackOverflow-QA-C-Language-5k | 2023-10-02T10:30:48.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | Mxode | null | null | 1 | 43 | 2023-10-02T10:08:11 | ---
license: apache-2.0
language:
- en
tags:
- code
task_categories:
- question-answering
size_categories:
- 1K<n<10K
---
This is a collection of ~5000 QA pairs in the **C language** from StackOverflow. The data has been initially cleaned, and each response is the **accepted answer**.
All records are **under 500** characters in length.
The questions and answers were organized into a **one-line** format. A sample is shown below:
```json
{
"question": "```\nFILE* file = fopen(some file)\n\npcap_t* pd = pcap_fopen_offline(file)\n\npcap_close(pd)\n\nfclose(file)\n```\n\nThis code occurs double free error.\n\nCould you explain about this happening?\n\nMy Guess is that pd and file pointers are sharing some datas.\n",
"answer": "As the documentation says, thepcap_closefunction closes the files associated with thepcap_tstructure passed to it. Closing the file again withfcloseis an error.\n"
}
``` | 891 | [
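A minimal sketch of reading a record in this one-line format (the record string below is a shortened stand-in, not an actual dataset entry):

```python
import json

# One record per line; this string is a simplified stand-in for the
# sample record shown above.
line = ('{"question": "Why does closing both the pcap handle and the FILE* '
        'cause a double free?", '
        '"answer": "pcap_close already closes the associated file."}')

record = json.loads(line)
print(record["question"])
print(record["answer"])
```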
[
-0.0134735107421875,
-0.0479736328125,
0.03253173828125,
0.043731689453125,
-0.027099609375,
0.028533935546875,
0.01035308837890625,
-0.021514892578125,
0.01097869873046875,
0.04937744140625,
-0.0196685791015625,
-0.0260467529296875,
-0.0275726318359375,
0.0... |
shossain/govreport-qa-5-16384 | 2023-10-03T21:20:46.000Z | [
"region:us"
] | shossain | null | null | 0 | 43 | 2023-10-02T23:51:42 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 500027
num_examples: 5
download_size: 129870
dataset_size: 500027
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-5-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 530 | [
[
-0.03338623046875,
0.0028839111328125,
0.030792236328125,
0.01776123046875,
-0.020538330078125,
-0.006771087646484375,
0.03680419921875,
-0.01126861572265625,
0.0518798828125,
0.03521728515625,
-0.043304443359375,
-0.055908203125,
-0.0293426513671875,
-0.002... |
ostapeno/platy_icl5_maxD1000_maxC1000000_prmt10_1 | 2023-10-13T03:28:01.000Z | [
"region:us"
] | ostapeno | null | null | 0 | 43 | 2023-10-13T03:27:51 | ## model_setting_name: platy
## max_context_length: 512
## icl_examples: 5
## icl_dataset_name: lukaemon/mmlu
## max_documents_per_subject: 1000
## max_contexts_per_subject: 1000000
## icl_use_out_options: True
## seed_dataset: sordonia/my-wiki-latex_mmlu_from_valid_all
## subjects: SUB_10
## response_template: 1
## inverse_template: 0
| 338 | [
[
-0.03778076171875,
-0.0240936279296875,
0.0260009765625,
0.03485107421875,
-0.0285491943359375,
-0.0213470458984375,
-0.0032253265380859375,
0.016632080078125,
-0.00887298583984375,
0.031890869140625,
-0.06396484375,
-0.03900146484375,
-0.0274505615234375,
0... |
alexrs/alpaca-cleaned-10-clusters | 2023-10-16T14:36:58.000Z | [
"region:us"
] | alexrs | null | null | 0 | 43 | 2023-10-16T14:36:55 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: cluster
dtype: int32
splits:
- name: train
num_bytes: 40490946
num_examples: 51760
download_size: 24184864
dataset_size: 40490946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "alpaca-cleaned-10-clusters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 569 | [
[
-0.05731201171875,
-0.02471923828125,
0.02593994140625,
0.020294189453125,
-0.02142333984375,
-0.006122589111328125,
0.015655517578125,
-0.01800537109375,
0.07989501953125,
0.04071044921875,
-0.058013916015625,
-0.061126708984375,
-0.04290771484375,
-0.01057... |
anamhira/aitw_foundation | 2023-10-18T05:13:56.000Z | [
"region:us"
] | anamhira | null | null | 0 | 43 | 2023-10-17T16:29:09 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 101243835
num_examples: 39518
download_size: 0
dataset_size: 101243835
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "aitw_foundation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 479 | [
[
-0.043365478515625,
-0.01654052734375,
0.0029621124267578125,
0.0289306640625,
-0.01241302490234375,
0.0093536376953125,
0.031463623046875,
-0.018951416015625,
0.047119140625,
0.034393310546875,
-0.060394287109375,
-0.0482177734375,
-0.047698974609375,
-0.01... |
sade-adrien/dataset_context_extension | 2023-10-25T18:19:13.000Z | [
"region:us"
] | sade-adrien | null | null | 0 | 43 | 2023-10-25T18:18:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: label
sequence: int64
splits:
- name: train
num_bytes: 1470195967
num_examples: 6999
- name: val
num_bytes: 160639943
num_examples: 778
download_size: 703927220
dataset_size: 1630835910
---
# Dataset Card for "dataset_context_extension"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 711 | [
[
-0.04827880859375,
-0.0306243896484375,
0.0087890625,
0.00984954833984375,
-0.0173797607421875,
-0.0184326171875,
0.0034160614013671875,
-0.0197296142578125,
0.0633544921875,
0.027587890625,
-0.072509765625,
-0.050018310546875,
-0.038970947265625,
-0.0113754... |
kuanhuggingface/google_tts_encodec | 2023-10-27T14:12:52.000Z | [
"region:us"
] | kuanhuggingface | null | null | 0 | 43 | 2023-10-27T14:12:12 | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: test
num_bytes: 209082202
num_examples: 5000
- name: train
num_bytes: 3704147470
num_examples: 90000
- name: validation
num_bytes: 203064306
num_examples: 5000
download_size: 140724454
dataset_size: 4116293978
---
# Dataset Card for "google_tts_encodec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,290 | [
[
-0.03228759765625,
-0.01904296875,
0.0224456787109375,
0.0118560791015625,
-0.0211944580078125,
0.0113067626953125,
0.0008258819580078125,
-0.006763458251953125,
0.06353759765625,
0.015869140625,
-0.054962158203125,
-0.06646728515625,
-0.05426025390625,
0.00... |
coastalcph/fm_classifier-1-n | 2023-11-01T16:47:30.000Z | [
"region:us"
] | coastalcph | null | null | 0 | 43 | 2023-11-01T16:47:12 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 377915.22131782945
num_examples: 1997
- name: all_fm
num_bytes: 30017653.417646818
num_examples: 157125
- name: validation
num_bytes: 301864.5749704841
num_examples: 1655
- name: test
num_bytes: 251834.38388770216
num_examples: 1596
download_size: 6404927
dataset_size: 30949267.59782283
---
# Dataset Card for "fm_classifier-1-n"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 885 | [
[
-0.050628662109375,
-0.01377105712890625,
0.0100555419921875,
0.01806640625,
-0.019683837890625,
-0.014373779296875,
0.02294921875,
-0.00852203369140625,
0.054107666015625,
0.01064300537109375,
-0.06805419921875,
-0.049896240234375,
-0.05169677734375,
-0.005... |
Annielytics/DoctorsNotes | 2021-05-07T14:35:26.000Z | [
"region:us"
] | Annielytics | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.028839111328125,
-0.0350341796875,
0.04656982421875,
0.052490234375,
0.00504302978515625,
0.0513916015625,
0.016998291015625,
-0.0521240234375,
-0.0149993896484375,
-0.06036376953125,
0.03790283... |
Check/a_re_gi | 2021-08-31T08:46:20.000Z | [
"region:us"
] | Check | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Chuu/Vhh | 2021-11-25T11:15:52.000Z | [
"region:us"
] | Chuu | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Cropinky/flatearther | 2021-06-30T22:37:54.000Z | [
"region:us"
] | Cropinky | null | null | 0 | 42 | 2022-03-02T23:29:22 | ## Wow fishing bobber object detection dataset
Hello, here you will find a link to a CSV I scraped using the scraper found at the same link. It contains paragraphs of text found on a flat-earth conspiracy website.
#TODO: turn it into an actual Hugging Face dataset
[
-0.04693603515625,
-0.06683349609375,
0.0289764404296875,
-0.00254058837890625,
-0.04180908203125,
0.00962066650390625,
0.01189422607421875,
-0.028076171875,
0.01538848876953125,
0.0479736328125,
-0.049530029296875,
-0.0550537109375,
-0.03533935546875,
0.013... |
Darren/data | 2021-05-27T23:31:45.000Z | [
"region:us"
] | Darren | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Davlan/masakhanerV1 | 2021-09-18T19:13:11.000Z | [
"region:us"
] | Davlan | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Dmitriy612/1 | 2021-10-09T12:22:11.000Z | [
"region:us"
] | Dmitriy612 | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DoyyingFace/github-issues-doy | 2022-01-19T10:57:15.000Z | [
"region:us"
] | DoyyingFace | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ESZER/H | 2021-07-10T18:14:47.000Z | [
"region:us"
] | ESZER | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Emma121/testtest | 2022-02-14T13:18:46.000Z | [
"region:us"
] | Emma121 | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Enes3774/data | 2021-08-15T19:43:29.000Z | [
"region:us"
] | Enes3774 | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
FL33TW00D/test-dataset | 2021-10-13T14:40:54.000Z | [
"region:us"
] | FL33TW00D | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
FRTNX/worldbank-projects | 2021-09-03T14:04:26.000Z | [
"region:us"
] | FRTNX | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Felix-ML/quoteli3 | 2022-10-25T08:54:20.000Z | [
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | Felix-ML | This dataset is a representation of Muzny et al.'s QuoteLi3 dataset as a Huggingface dataset. It can be best used for
quote attribution. | @inproceedings{muzny2017two,
title={A two-stage sieve approach for quote attribution},
author={Muzny, Grace and Fang, Michael and Chang, Angel and Jurafsky, Dan},
booktitle={Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers},
pages={460--470},
year={2017}
} | 0 | 42 | 2022-03-02T23:29:22 | ---
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets: []
---
# Dataset Card for quoteli3
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/~muzny/quoteli.html
- **Repository:** https://nlp.stanford.edu/~muzny/quoteli.html
- **Paper:** Muzny, Grace, et al. "A two-stage sieve approach for quote attribution." Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. 2017.
### Dataset Summary
This dataset is based on the quoteli3 dataset by Muzny et al. (2017). It contains annotated quotes for three pieces of literature: Chekhov's The Steppe, and Austen's Emma and Pride and Prejudice.
### Languages
The text in the dataset is English.
## Dataset Structure
Training data:
- Quotes (1575, 11)
- Characters (32, 6)
Test data:
- Quotes (1513, 11)
- Characters (145, 6)
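Each quote record stores its answer span as character offsets into the surrounding context (see the `answer_mention` fields under Data Splits). A minimal self-contained sketch with hypothetical values, showing how such a span can be recovered:

```python
# Hypothetical quoteli3-style record: the answer span is stored as
# character offsets into the context string.
record = {
    "context": '"I am going home," said Emma.',
    "answer_mention": {"answer": "Emma", "answer_start": 24, "answer_end": 28},
}

span = record["answer_mention"]
recovered = record["context"][span["answer_start"]:span["answer_end"]]
assert recovered == span["answer"]  # offsets are consistent with the answer text
print(recovered)  # Emma
```

The record layout and field values here are illustrative, not taken from the corpus itself.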
### Data Splits
- Quotes:
  - train:
    - features: ['mention', 'oid', 'speaker', 'connection', 'id', 'answer', 'answer_mention' {'answer', 'answer_start', 'answer_end', 'answer_in_context'}, 'question', 'context', 'large_context', 'book_title'],
    - num_rows: 1575
  - test:
    - features: ['mention', 'oid', 'speaker', 'connection', 'id', 'answer', 'answer_mention' {'answer', 'answer_start', 'answer_end', 'answer_in_context'}, 'question', 'context', 'large_context', 'book_title'],
    - num_rows: 1513
- Characters:
  - train:
    - features: ['aliases', 'description', 'gender', 'name', 'id', 'book_title'],
    - num_rows: 32
  - test:
    - features: ['aliases', 'description', 'gender', 'name', 'id', 'book_title'],
    - num_rows: 146 | 1,691 | [
[
-0.01479339599609375,
-0.033203125,
0.0295562744140625,
0.0220489501953125,
-0.0284271240234375,
-0.0328369140625,
0.0017299652099609375,
-0.0248565673828125,
0.007335662841796875,
0.0335693359375,
-0.044891357421875,
-0.047271728515625,
-0.042724609375,
0.0... |
Francois/futures_es | 2021-07-29T17:19:50.000Z | [
"region:us"
] | Francois | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
GalacticAI/Noirset | 2021-06-30T00:59:28.000Z | [
"region:us"
] | GalacticAI | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
GeoffVdr/cv8_trainval_processed | 2022-01-31T15:10:08.000Z | [
"region:us"
] | GeoffVdr | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Halilyesilceng/autonlp-data-nameEntityRecognition | 2021-03-30T23:41:25.000Z | [
"region:us"
] | Halilyesilceng | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
HarleyQ/WitcherDialogue | 2021-06-02T16:15:44.000Z | [
"region:us"
] | HarleyQ | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Harveenchadha/bol-models | 2021-09-17T05:51:52.000Z | [
"region:us"
] | Harveenchadha | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Wikidepia/IndoSQuAD | 2021-03-29T06:55:14.000Z | [
"region:us"
] | Wikidepia | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
allenai/scico | 2023-01-10T20:23:18.000Z | [
"task_categories:token-classification",
"task_ids:coreference-resolution",
"annotations_creators:domain experts",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"cross-document-coreference-resolution",
"structure-prediction",
"region:us"
] | allenai | SciCo is a dataset for hierarchical cross-document coreference resolution
over scientific papers in the CS domain. | @inproceedings{
cattan2021scico,
title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
booktitle={3rd Conference on Automated Knowledge Base Construction},
year={2021},
url={https://openreview.net/forum?id=OFLbgUP04nC}
} | 3 | 42 | 2022-03-02T23:29:22 | ---
annotations_creators:
- domain experts
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- coreference-resolution
paperswithcode_id: scico
tags:
- cross-document-coreference-resolution
- structure-prediction
---
# Dataset Card for SciCo
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SciCo homepage](https://scico.apps.allenai.org/)
- **Repository:** [SciCo repository](https://github.com/ariecattan/scico)
- **Paper:** [SciCo: Hierarchical Cross-document Coreference for Scientific Concepts](https://openreview.net/forum?id=OFLbgUP04nC)
- **Point of Contact:** [Arie Cattan](arie.cattan@gmail.com)
### Dataset Summary
SciCo consists of clusters of mentions in context and a hierarchy over them.
The corpus is drawn from computer science papers, and the concept mentions are methods and tasks from across CS.
Scientific concepts pose significant challenges: they often take diverse forms (e.g., class-conditional image
synthesis and categorical image generation) or are ambiguous (e.g., network architecture in AI vs.
systems research).
To build SciCo, we develop a new candidate generation
approach built on three resources: a low-coverage KB ([https://paperswithcode.com/](https://paperswithcode.com/)), a noisy hypernym extractor, and curated
candidates.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
* `flatten_tokens`: a single list of all tokens in the topic
* `flatten_mentions`: array of mentions, each mention is represented by [start, end, cluster_id]
* `tokens`: array of paragraphs
* `doc_ids`: doc_id of each paragraph in `tokens`
* `metadata`: metadata of each doc_id
* `sentences`: sentence boundaries for each paragraph in `tokens` [start, end]
* `mentions`: array of mentions, each mention is represented by [paragraph_id, start, end, cluster_id]
* `relations`: array of binary relations between cluster_ids [parent, child]
* `id`: id of the topic
* `hard_10` and `hard_20` (only in the test set): flags for the 10% or 20% hardest topics, based on Levenshtein similarity.
* `source`: source of this topic: PapersWithCode (pwc), hypernym, or curated.
### Data Splits
| |Train |Validation|Test |
|--------------------|-----:|---------:|----:|
|Topic | 221| 100| 200|
|Documents | 9013| 4120| 8237|
|Mentions | 10925| 4874|10424|
|Clusters | 4080| 1867| 3711|
|Relations | 2514| 1747| 2379|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
## Additional Information
### Dataset Curators
This dataset was initially created by Arie Cattan, Sophie Johnson, Daniel Weld, Ido Dagan, Iz Beltagy, Doug Downey and Tom Hope, while Arie was an intern at the Allen Institute for Artificial Intelligence.
### Licensing Information
This dataset is distributed under [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{
cattan2021scico,
title={SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts},
author={Arie Cattan and Sophie Johnson and Daniel S. Weld and Ido Dagan and Iz Beltagy and Doug Downey and Tom Hope},
booktitle={3rd Conference on Automated Knowledge Base Construction},
year={2021},
url={https://openreview.net/forum?id=OFLbgUP04nC}
}
```
### Contributions
Thanks to [@ariecattan](https://github.com/ariecattan) for adding this dataset.
| 6,291 | [
[
-0.03826904296875,
-0.0281982421875,
0.018951416015625,
0.01361083984375,
-0.01540374755859375,
0.00246429443359375,
-0.0233917236328125,
-0.0308990478515625,
0.0450439453125,
0.0256805419921875,
-0.04632568359375,
-0.07049560546875,
-0.039581298828125,
0.01... |
caythuoc/caoduoclieu | 2023-06-15T10:41:13.000Z | [
"region:us"
] | caythuoc | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ctgowrie/chessgames | 2021-12-05T00:43:39.000Z | [
"region:us"
] | ctgowrie | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
davanstrien/ads-test | 2022-01-18T12:27:37.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
david-wb/zeshel | 2021-02-16T23:32:15.000Z | [
"region:us"
] | david-wb | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
dev/untitled_imgs | 2021-12-11T14:14:27.000Z | [
"region:us"
] | dev | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
diiogo/annotations | 2023-10-27T12:16:36.000Z | [
"region:us"
] | diiogo | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
dispenst/jhghdghfd | 2021-03-28T15:24:20.000Z | [
"region:us"
] | dispenst | null | null | 0 | 42 | 2022-03-02T23:29:22 | <a href="https://jobs.acm.org/jobs/watch-godzilla-vs-kong-2021-full-1818658-cd">.</a>
(card content removed: repeated spam streaming-site links) | 9,077 | [
[
-0.056915283203125,
-0.0330810546875,
0.047607421875,
0.0002899169921875,
-0.048858642578125,
0.011016845703125,
0.0297088623046875,
-0.037109375,
0.08599853515625,
-0.007381439208984375,
-0.06109619140625,
-0.01383209228515625,
-0.03582763671875,
0.00117969... |
dispix/test-dataset | 2021-02-08T12:22:38.000Z | [
"region:us"
] | dispix | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dongpil/test | 2021-07-29T10:34:34.000Z | [
"region:us"
] | dongpil | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
eason929/test | 2021-03-15T04:02:59.000Z | [
"region:us"
] | eason929 | null | null | 0 | 42 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
florianbussmann/FUNSD-vu2020revising | 2022-10-25T09:20:31.000Z | [
"multilinguality:monolingual",
"language:en",
"arxiv:2010.05322",
"region:us"
] | florianbussmann | \
FUNSD is one of the limited publicly available datasets for information extraction from document images.
The information in the FUNSD dataset is defined by text areas of five categories ("key", "value", "header", "other", and "background")
and connectivity between areas as key-value relations. Inspecting FUNSD, we found several inconsistencies in labeling, which impeded its
applicability to the key-value extraction problem. In this report, we describe some labeling issues in FUNSD and the revisions we made
to the dataset. | \
@article{vu2020revising,
title={Revising FUNSD dataset for key-value detection in document images},
author={Vu, Hieu M and Nguyen, Diep Thi-Ngoc},
journal={arXiv preprint arXiv:2010.05322},
year={2020}
} | 0 | 42 | 2022-03-02T23:29:22 | ---
language:
- en
multilinguality:
- monolingual
language_bcp47:
- en-US
---
# Dataset Card for FUNSD-vu2020revising
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [https://arxiv.org/abs/2010.05322](https://arxiv.org/abs/2010.05322)
### Dataset Summary
This is the revised version of the [FUNSD dataset](https://huggingface.co/datasets/nielsr/funsd) as proposed by [Vu, H. M., & Nguyen, D. T. N. (2020)](https://arxiv.org/abs/2010.05322).
### Supported Tasks and Leaderboards
The Form Understanding challenge comprises three tasks, namely word grouping, semantic-entity labeling, and entity linking.
## Dataset Structure
### Data Instances
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature - GUID.
- `words`: a `list` of `string` features.
- `bboxes`: a `list` of `list` with four (`int`) features.
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-HEADER': 1, 'I-HEADER': 2, 'B-QUESTION': 3, 'I-QUESTION': 4, 'B-ANSWER': 5, 'I-ANSWER': 6}
```
- `image_path`: a `string` feature.
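The integer `ner_tags` can be decoded back to label strings with a plain inverted lookup; a minimal sketch using the tagset above (the sample tag sequence is made up for illustration):

```python
# Full FUNSD tagset, copied from the card above.
tag2id = {'O': 0, 'B-HEADER': 1, 'I-HEADER': 2, 'B-QUESTION': 3,
          'I-QUESTION': 4, 'B-ANSWER': 5, 'I-ANSWER': 6}

# Invert the mapping to decode integer ner_tags back to BIO label strings.
id2tag = {v: k for k, v in tag2id.items()}

def decode_tags(ner_tags):
    """Map a list of integer tags to their BIO label strings."""
    return [id2tag[t] for t in ner_tags]

# Hypothetical tag sequence for one short token span.
print(decode_tags([3, 4, 5, 6, 0]))
# ['B-QUESTION', 'I-QUESTION', 'B-ANSWER', 'I-ANSWER', 'O']
```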
### Data Splits
| name |train|test|
|------------|----:|---:|
|FUNSD-vu2020| 149| 50|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{vu2020revising,
title={Revising FUNSD dataset for key-value detection in document images},
author={Vu, Hieu M and Nguyen, Diep Thi-Ngoc},
journal={arXiv preprint arXiv:2010.05322},
year={2020}
}
``` | 4,625 | [
[embedding vector truncated] |
stas/oscar-en-10k | 2022-10-19T21:40:14.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | stas | This is a small subset representing 10K records from the original OSCAR dataset, "unshuffled_deduplicated_en" subset - created for testing. The records were extracted after having been shuffled.
The full 1TB+ dataset is at https://huggingface.co/datasets/oscar. | @inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
} | 2 | 42 | 2022-03-02T23:29:22 | ---
language:
- en
license: apache-2.0
---
# OSCAR EN 10K for testing
This is a small subset representing 10K records from the original OSCAR dataset, "unshuffled_deduplicated_en" subset - created for testing. The records were extracted after having been shuffled.
The full 1TB+ dataset is at https://huggingface.co/datasets/oscar.
```
$ python -c "from datasets import load_dataset; ds=load_dataset('stas/oscar-en-10k'); print(ds)"
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 10000
})
})
```
* Records: 10,000
* compressed size: ~37MB
* uncompressed size: 131MB
To convert to jsonlines:
```
from datasets import load_dataset
dataset_name = "stas/oscar-en-10k"
name = dataset_name.split('/')[-1]
ds = load_dataset(dataset_name, split='train')
ds.to_json(f"{name}.jsonl", orient="records", lines=True)
```
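`to_json(..., lines=True)` writes one JSON object per line (JSON Lines); a stdlib-only sketch of that round trip on made-up records (not actual OSCAR rows):

```python
import io
import json

# Hypothetical records standing in for OSCAR rows ({"text": ...} per line).
records = [{"text": "first document"}, {"text": "second document"}]

# Write JSON Lines: one compact JSON object per line, as to_json(..., lines=True) does.
buf = io.StringIO()
for rec in records:
    buf.write(json.dumps(rec) + "\n")

# Read it back line by line.
buf.seek(0)
round_tripped = [json.loads(line) for line in buf]
assert round_tripped == records
```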
To see how this subset was created, here is the [instructions file](https://huggingface.co/datasets/stas/oscar-en-10k/blob/main/process.txt).
| 1,002 | [
[embedding vector truncated] |
pietrolesci/gen_debiased_nli | 2022-04-25T09:49:52.000Z | [
"region:us"
] | pietrolesci | null | null | 0 | 42 | 2022-04-25T09:35:37 | ## Overview
Original dataset available [here](https://github.com/jimmycode/gen-debiased-nli#training-with-our-datasets).
```latex
@inproceedings{gen-debiased-nli-2022,
title = "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets",
author = "Wu, Yuxiang and
Gardner, Matt and
Stenetorp, Pontus and
Dasigi, Pradeep",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
month = may,
year = "2022",
publisher = "Association for Computational Linguistics",
}
```
## Dataset curation
No curation.
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
import json
from pathlib import Path
# load data
path = Path("./")
ds = {}
for i in path.rglob("*.jsonl"):
print(i)
name = str(i).split(".")[0].lower().replace("-", "_")
with i.open("r") as fl:
df = pd.DataFrame([json.loads(line) for line in fl])
ds[name] = df
# cast to dataset
features = Features(
{
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"type": Value(dtype="string"),
}
)
ds = DatasetDict({k: Dataset.from_pandas(v, features=features) for k, v in ds.items()})
ds.push_to_hub("pietrolesci/gen_debiased_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> mnli_seq_z - snli_z_aug: 0
#> mnli_seq_z - mnli_par_z: 477149
#> mnli_seq_z - snli_seq_z: 0
#> mnli_seq_z - mnli_z_aug: 333840
#> mnli_seq_z - snli_par_z: 0
#> snli_z_aug - mnli_par_z: 0
#> snli_z_aug - snli_seq_z: 506624
#> snli_z_aug - mnli_z_aug: 0
#> snli_z_aug - snli_par_z: 504910
#> mnli_par_z - snli_seq_z: 0
#> mnli_par_z - mnli_z_aug: 334960
#> mnli_par_z - snli_par_z: 0
#> snli_seq_z - mnli_z_aug: 0
#> snli_seq_z - snli_par_z: 583107
#> mnli_z_aug - snli_par_z: 0
``` | 2,308 | [
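The inner merges above count rows shared between two splits on the `(premise, hypothesis, label)` triple. A stdlib-only sketch of the same idea on made-up rows (note: unlike `pd.merge` with `how="inner"`, this set-based count does not multiply duplicate rows):

```python
# Hypothetical NLI rows; each split is a list of dicts shaped like the dataset.
split_a = [
    {"premise": "A dog runs.", "hypothesis": "An animal moves.", "label": 0},
    {"premise": "It rains.", "hypothesis": "It is sunny.", "label": 2},
]
split_b = [
    {"premise": "A dog runs.", "hypothesis": "An animal moves.", "label": 0},
    {"premise": "She sings.", "hypothesis": "She is silent.", "label": 2},
]

def overlap(a, b):
    """Count rows of `a` whose (premise, hypothesis, label) triple also occurs in `b`."""
    keys_b = {(r["premise"], r["hypothesis"], r["label"]) for r in b}
    return sum((r["premise"], r["hypothesis"], r["label"]) in keys_b for r in a)

print(overlap(split_a, split_b))  # 1
```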
[embedding vector truncated] |
khalidalt/HuffPost | 2023-05-19T18:35:08.000Z | [
"license:cc0-1.0",
"region:us"
] | khalidalt | A dataset of approximately 200K news headlines from the year 2012 to 2018 collected from HuffPost. | @book{book,
author = {Misra, Rishabh and Grover, Jigyasa},
year = {2021},
month = {01},
pages = {},
title = {Sculpting Data for ML: The first act of Machine Learning},
isbn = {978-0-578-83125-1}
}
@dataset{dataset,
author = {Misra, Rishabh},
year = {2018},
month = {06},
pages = {},
title = {News Category Dataset},
doi = {10.13140/RG.2.2.20331.18729}
} | 0 | 42 | 2022-04-26T09:32:57 | ---
license: cc0-1.0
---
# Dataset Card for HuffPost
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/rmisra/news-category-dataset/metadata
### Dataset Summary
A dataset of approximately 200K news headlines from 2012 to 2018 collected from HuffPost.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
license: cc0-1.0
### Citation Information
```
@book{book,
author = {Misra, Rishabh and Grover, Jigyasa},
year = {2021},
month = {01},
pages = {},
title = {Sculpting Data for ML: The first act of Machine Learning},
isbn = {978-0-578-83125-1}
}
@dataset{dataset,
author = {Misra, Rishabh},
year = {2018},
month = {06},
pages = {},
title = {News Category Dataset},
doi = {10.13140/RG.2.2.20331.18729}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| 2,923 | [
[embedding vector truncated] |
HuggingFaceM4/vatex | 2022-05-13T21:27:03.000Z | [
"region:us"
] | HuggingFaceM4 | VATEX is a large-scale multilingual video description dataset, which contains over 41,250 videos and 825,000 captions
in both English and Chinese. VATEX is characterized by the following major unique properties.
First, it contains both English and Chinese descriptions at scale, which can support many multilingual studies
that are constrained by monolingual datasets. Second, VATEX has a high number of clip-sentence pairs
with each video clip annotated with multiple unique sentences, and every caption is unique in
the whole corpus. Third, VATEX contains more comprehensive yet representative video content,
covering 600 human activities in total. Furthermore, both the English and Chinese corpora in
VATEX are lexically richer and thus allow more natural and diverse caption generation. | @InProceedings{Wang_2019_ICCV,
author = {Wang, Xin and Wu, Jiawei and Chen, Junkun and Li, Lei and Wang, Yuan-Fang and Wang, William Yang},
title = {VaTeX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
} | 2 | 42 | 2022-05-13T20:11:59 | Entry not found | 15 | [
[embedding vector truncated] |
olivierdehaene/xkcd | 2022-10-25T10:31:55.000Z | [
"task_categories:image-to-text",
"task_categories:feature-extraction",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-sa-3.0",
"license:other",
"region:us"
] | olivierdehaene | null | null | 4 | 42 | 2022-06-11T20:32:01 | ---
annotations_creators: []
language_creators:
- other
language:
- en
license:
- cc-by-sa-3.0
- other
multilinguality:
- monolingual
pretty_name: XKCD
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- image-to-text
- feature-extraction
task_ids: []
---
# Dataset Card for "XKCD"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://xkcd.com/](https://xkcd.com/), [https://www.explainxkcd.com](https://www.explainxkcd.com)
- **Repository:** [Hugging Face repository](https://huggingface.co/datasets/olivierdehaene/xkcd/tree/main)
### Dataset Summary
XKCD is an export of all XKCD comics with their transcript and explanation scraped from
[https://explainxkcd.com](https://explainxkcd.com).
## Dataset Structure
### Data Instances
- `id`: `1`
- `title`: `Barrel - Part 1`
- `image_title`: `Barrel - Part 1`
- `url`: `https://www.xkcd.com/1`
- `image_url`: `https://imgs.xkcd.com/comics/barrel_cropped_(1).jpg`
- `explained_url`: `https://www.explainxkcd.com/wiki/index.php/1:_Barrel_-_Part_1`
- `transcript`: `[A boy sits in a barrel which is floating in an ocean.] Boy: i wonder where i'll float next?
[A smaller frame with a zoom out of the boy in the barrel seen from afar. The barrel drifts into the distance. Nothing
else can be seen.]`
- `explanation`: `The comic shows a young boy floating in a barrel in an ocean that doesn't have a visible end. It
comments on the unlikely optimism and perhaps naïveté people sometimes display. The boy is completely lost and seems
hopelessly alone, without any plan or control of the situation. Yet, rather than afraid or worried, he is instead
quietly curious: "I wonder where I'll float next?" Although not necessarily the situation in this comic, this is a
behavior people often exhibit when there is nothing they can do about a problematic situation for a long time; they may
have given up hope or developed a cavalier attitude as a coping mechanism. The title text expands on the philosophical
content, with the boy representing the average human being: wandering through life with no real plan, quietly
optimistic, always opportunistic and clueless as to what the future may hold. The isolation of the boy may also
represent the way in which we often feel lost through life, never knowing quite where we are, believing that there is
no one to whom to turn. This comic could also reflect on Randall's feelings towards creating xkcd in the first place;
unsure of what direction the web comic would turn towards, but hopeful that it would eventually become the popular web
comic that we know today. This is the first in a six-part series of comics whose parts were randomly published during
the first several dozen strips. The series features a character that is not consistent with what would quickly become
the xkcd stick figure style. The character is in a barrel. In 1110: Click and Drag there is a reference to this comic
at 1 North, 48 East . After Randall released the full The Boy and his Barrel story on xkcd, it has been clear that the
original Ferret story should also be included as part of the barrel series. The full series can be found here . They
are listed below in the order Randall chose for the short story above: `
### Data Fields
- `id`
- `title`
- `url`: xkcd.com URL
- `image_url`
- `explained_url`: explainxkcd.com URL
- `transcript`: English text transcript of the comic
- `explanation`: English explanation of the comic
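From the sample instance above, `url` and `explained_url` appear to follow fixed patterns derivable from `id` and `title`; a sketch under that assumption (the patterns are inferred from a single example and may not hold for every comic):

```python
def xkcd_urls(comic_id, title):
    """Build the xkcd.com and explainxkcd.com URLs a record appears to follow.

    The URL patterns are inferred from the sample instance above,
    not from any official specification (assumption).
    """
    slug = title.replace(" ", "_")
    return {
        "url": f"https://www.xkcd.com/{comic_id}",
        "explained_url": f"https://www.explainxkcd.com/wiki/index.php/{comic_id}:_{slug}",
    }

urls = xkcd_urls("1", "Barrel - Part 1")
print(urls["url"])            # https://www.xkcd.com/1
print(urls["explained_url"])  # https://www.explainxkcd.com/wiki/index.php/1:_Barrel_-_Part_1
```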
## Dataset Creation
The dataset was scraped from both explainxkcd.com and xkcd.com.
The dataset is therefore licensed under the Creative Commons Attribution-ShareAlike 3.0 license for
the `transcript` and `explanation` fields, while the image itself is licensed under the
Creative Commons Attribution-NonCommercial 2.5 license.
See the [Copyrights](https://www.explainxkcd.com/wiki/index.php/explain_xkcd:Copyrights) page from
explainxkcd.com for more explanations.
### Update
You can update the dataset by using the `scrapper.py` script.
First install the dependencies:
```bash
pip install aiolimiter aiohttp beautifulsoup4 pandas
```
Then run the script:
```bash
python scrapper.py
```
## Considerations for Using the Data
As the data was scraped, it is entirely possible that some fields are missing part of the original data.
## Additional Information
### Licensing Information
The dataset is licensed under the Creative Commons Attribution-ShareAlike 3.0 license for
the `transcript` and `explanation` fields, while the images are licensed under the
Creative Commons Attribution-NonCommercial 2.5 license.
### Contributions
Thanks to [@OlivierDehaene](https://github.com/OlivierDehaene) for adding this dataset.
| 5,184 | [
[embedding vector truncated] |
launch/open_question_type | 2022-11-09T01:58:10.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | launch | Open-ended question type annotated dataset. | @inproceedings{cao-wang-2021-controllable,
title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.502",
doi = "10.18653/v1/2021.acl-long.502",
pages = "6424--6439",
abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
} | 0 | 42 | 2022-06-28T20:55:58 | ---
annotations_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids: []
pretty_name: OpenQuestionType
---
# Dataset Card for OpenQuestionType
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://shuyangcao.github.io/projects/ontology_open_ended_question/](https://shuyangcao.github.io/projects/ontology_open_ended_question/)
- **Repository:** [https://github.com/ShuyangCao/open-ended_question_ontology](https://github.com/ShuyangCao/open-ended_question_ontology)
- **Paper:** [https://aclanthology.org/2021.acl-long.502/](https://aclanthology.org/2021.acl-long.502/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Question types annotated on open-ended questions.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"id": "123",
"question": "A test question?",
"annotator1": ["verification", None],
"annotator2": ["concept", None],
"resolve_type": "verification"
}
```
### Data Fields
- `id`: a `string` feature.
- `question`: a `string` feature.
- `annotator1`: a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator.
- `annotator2`: a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator.
- `resolve_type`: a `string` feature which is the final label after resolving disagreement.
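The `resolve_type` field holds the final label after disagreement resolution. A minimal illustrative check (this is not the dataset's actual resolution procedure, which is described in the paper) that flags examples where the annotators' most confident labels already agree:

```python
def top_label_agreement(example):
    """Return the shared top label if both annotators' most confident labels
    match, else None. Purely illustrative; the dataset's real
    disagreement-resolution procedure may differ."""
    a1 = example["annotator1"][0]
    a2 = example["annotator2"][0]
    return a1 if a1 == a2 else None

# The sample instance from the card above.
sample = {
    "id": "123",
    "question": "A test question?",
    "annotator1": ["verification", None],
    "annotator2": ["concept", None],
    "resolve_type": "verification",
}
print(top_label_agreement(sample))  # None -> disagreement; resolve_type settles it
```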
### Data Splits
- train: 3716
- valid: 580
- test: 660
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Yahoo Answers and Reddit users.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{cao-wang-2021-controllable,
title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
author = "Cao, Shuyang and
Wang, Lu",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.502",
doi = "10.18653/v1/2021.acl-long.502",
pages = "6424--6439",
abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
}
```
| 5,221 | [
[embedding vector truncated] |
beki/privy | 2023-04-25T21:45:06.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:100K<n<200K",
"size_categories:300K<n<400K",
"language:en",
"license:mit",
"pii-detection",
"region:us"
] | beki | null | null | 8 | 42 | 2022-09-16T04:41:28 | ---
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<200K
- 300K<n<400K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
tags:
- pii-detection
train-eval-index:
- config: privy-small
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
metrics:
- type: seqeval
name: seqeval
pretty_name: Privy English
---
# Dataset Card for "privy-english"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy)
### Dataset Summary
A synthetic PII dataset generated using [Privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy), a tool which parses OpenAPI specifications and generates synthetic request payloads, searching for keywords in API schema definitions to select appropriate data providers. Generated API payloads are converted to various protocol trace formats like JSON and SQL to approximate the data developers might encounter while debugging applications.
This labelled PII dataset consists of protocol traces (JSON, SQL (PostgreSQL, MySQL), HTML, and XML) generated from OpenAPI specifications and includes 60+ PII types.
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) and PII classification.
### Label Scheme
<details>
<summary>View label scheme (26 labels for 60 PII data providers)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `PERSON`, `LOCATION`, `NRP`, `DATE_TIME`, `CREDIT_CARD`, `URL`, `IBAN_CODE`, `US_BANK_NUMBER`, `PHONE_NUMBER`, `US_SSN`, `US_PASSPORT`, `US_DRIVER_LICENSE`, `IP_ADDRESS`, `US_ITIN`, `EMAIL_ADDRESS`, `ORGANIZATION`, `TITLE`, `COORDINATE`, `IMEI`, `PASSWORD`, `LICENSE_PLATE`, `CURRENCY`, `ROUTING_NUMBER`, `SWIFT_CODE`, `MAC_ADDRESS`, `AGE` |
</details>
### Languages
English
## Dataset Structure
### Data Instances
A sample:
```
{
"full_text": "{\"full_name_female\": \"Bethany Williams\", \"NewServerCertificateName\": \"\", \"NewPath\": \"\", \"ServerCertificateName\": \"dCwMNqR\", \"Action\": \"\", \"Version\": \"u zNS zNS\"}",
"masked": "{\"full_name_female\": \"{{name_female}}\", \"NewServerCertificateName\": \"{{string}}\", \"NewPath\": \"{{string}}\", \"ServerCertificateName\": \"{{string}}\", \"Action\": \"{{string}}\", \"Version\": \"{{string}}\"}",
"spans": [
{
"entity_type": "PERSON",
"entity_value": "Bethany Williams",
"start_position": 22,
"end_position": 38
}
],
"template_id": 51889,
"metadata": null
}
```
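In the sample above, each span's character offsets index directly into `full_text`; a quick sketch verifying that invariant on values copied from the sample:

```python
# Record values copied from the sample instance above.
record = {
    "full_text": ('{"full_name_female": "Bethany Williams", '
                  '"NewServerCertificateName": "", "NewPath": "", '
                  '"ServerCertificateName": "dCwMNqR", "Action": "", '
                  '"Version": "u zNS zNS"}'),
    "spans": [
        {"entity_type": "PERSON", "entity_value": "Bethany Williams",
         "start_position": 22, "end_position": 38},
    ],
}

# Each span's (start, end) offsets should slice the entity value out of full_text.
for span in record["spans"]:
    extracted = record["full_text"][span["start_position"]:span["end_position"]]
    assert extracted == span["entity_value"]
print("all spans align")
```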
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@online{kilimnik2022privy,
author = {Benjamin Kilimnik},
title = {{Privy} Synthetic PII Protocol Trace Dataset},
year = 2022,
url = {https://huggingface.co/datasets/beki/privy},
}
```
### Contributions
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 6,223 | [
leslyarun/c4_200m_gec_train100k_test25k | 2022-10-26T07:59:31.000Z | [
"task_categories:text-generation",
"source_datasets:allenai/c4",
"language:en",
"grammatical-error-correction",
"region:us"
] | leslyarun | null | null | 2 | 42 | 2022-10-26T07:21:21 | ---
language:
- en
source_datasets:
- allenai/c4
task_categories:
- text-generation
pretty_name: C4 200M Grammatical Error Correction Dataset
tags:
- grammatical-error-correction
---
# C4 200M
# Dataset Summary
C4 200M Sample Dataset adapted from https://huggingface.co/datasets/liweili/c4_200m
C4_200M is a collection of 185 million sentence pairs generated from the cleaned English C4 dataset. This dataset can be used for grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are taken from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
# Description
As noted above, the full corpus contains 185 million sentence pairs; this repository hosts a sample with 100k training and 25k test pairs. Each example has two attributes: `input` and `output`. Here is a sample:
```
{
  "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.",
  "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
``` | 1,025 | [
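A minimal sketch of turning such records into (source, target) pairs for a seq2seq GEC model. The `"grammar: "` task prefix is an assumption (a common convention for T5-style models), not something the dataset itself prescribes:

```python
# Build a (source, target) pair for seq2seq fine-tuning.
# The "grammar: " prefix is a hypothetical choice, not part of the dataset.
def to_seq2seq_pair(record, prefix="grammar: "):
    return prefix + record["input"], record["output"]

record = {
    "input": "Bitcoin is for $7,094 this morning, which CoinDesk says.",
    "output": "Bitcoin goes for $7,094 this morning, according to CoinDesk.",
}
source, target = to_seq2seq_pair(record)
```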
luigisaetta/atco2_normalized_augmented | 2022-11-19T12:41:21.000Z | [
"region:us"
] | luigisaetta | null | null | 0 | 42 | 2022-11-19T12:35:08 | Entry not found | 15 | [
RussianNLP/wikiomnia | 2023-04-07T06:43:59.000Z | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:ru",
"license:apache-2.0",
"wikipedia",
"wikiomnia",
"squad",
"QA",
"arxiv:2204.08009",
"region:us"
] | RussianNLP | null | TBA | 4 | 42 | 2022-12-16T16:03:40 | ---
license: apache-2.0
dataset_info:
- config_name: wikiomnia_ruT5_raw
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: dev
num_bytes: 600356136
num_examples: 266295
- name: test
num_bytes: 572651444
num_examples: 267751
download_size: 1204094848
dataset_size: 1173007580
- config_name: wikiomnia_ruT5_filtered
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: train
num_bytes: 4157093224
num_examples: 2088027
download_size: 4278635364
dataset_size: 4157093224
- config_name: wikiomnia_ruGPT3_filtered
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: train
num_bytes: 338607635
num_examples: 173314
download_size: 348694031
dataset_size: 338607635
- config_name: wikiomnia_ruGPT3_raw
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: train_batch1
num_bytes: 553204785
num_examples: 260808
- name: train_batch2
num_bytes: 542823205
num_examples: 263599
- name: train_batch3
num_bytes: 582321994
num_examples: 269736
- name: train_batch4
num_bytes: 543315355
num_examples: 265948
- name: train_batch5
num_bytes: 513288049
num_examples: 268466
- name: train_batch6
num_bytes: 943556173
num_examples: 512147
- name: train_batch7
num_bytes: 929464509
num_examples: 508149
- name: train_batch8
num_bytes: 915128725
num_examples: 507559
- name: train_batch9
num_bytes: 926443048
num_examples: 504292
- name: train_batch10
num_bytes: 834958539
num_examples: 463812
- name: train_batch11
num_bytes: 509866027
num_examples: 287770
- name: train_batch12
num_bytes: 478843738
num_examples: 271410
- name: train_batch13
num_bytes: 757068702
num_examples: 385730
- name: train_batch14
num_bytes: 575937629
num_examples: 304110
- name: train_batch15
num_bytes: 517092031
num_examples: 277507
- name: train_batch16
num_bytes: 759363156
num_examples: 402203
- name: train_batch17
num_bytes: 860544388
num_examples: 466572
- name: train_batch18
num_bytes: 935985528
num_examples: 518348
- name: train_batch19
num_bytes: 936782197
num_examples: 514307
- name: train_batch20
num_bytes: 874299949
num_examples: 487238
download_size: 14939875008
dataset_size: 14490287727
- config_name: wikiomnia_ruT5_raw_train
features:
- name: title
dtype: string
- name: categories
dtype: string
- name: summary
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: batch_id
dtype: string
splits:
- name: train_batch3
num_bytes: 612693602
num_examples: 271391
- name: train_batch4
num_bytes: 570286147
num_examples: 265947
- name: train_batch5
num_bytes: 552502041
num_examples: 274650
- name: train_batch6
num_bytes: 1017066184
num_examples: 525224
- name: train_batch7
num_bytes: 972351430
num_examples: 509615
- name: train_batch8
num_bytes: 973314180
num_examples: 516828
- name: train_batch9
num_bytes: 981651841
num_examples: 512709
- name: train_batch10
num_bytes: 880664685
num_examples: 469512
- name: train_batch11
num_bytes: 543971388
num_examples: 294631
- name: train_batch12
num_bytes: 503939060
num_examples: 273526
- name: train_batch13
num_bytes: 794421530
num_examples: 392021
- name: train_batch14
num_bytes: 610815879
num_examples: 311452
- name: train_batch15
num_bytes: 540225492
num_examples: 278677
- name: train_batch16
num_bytes: 804003566
num_examples: 411192
- name: train_batch17
num_bytes: 903347135
num_examples: 469871
- name: train_batch18
num_bytes: 995239085
num_examples: 528301
- name: train_batch19
num_bytes: 1003402360
num_examples: 522264
- name: train_batch20
num_bytes: 948137237
num_examples: 499866
download_size: 14634332336
dataset_size: 14208032842
task_categories:
- question-answering
language:
- ru
tags:
- wikipedia
- wikiomnia
- squad
- QA
pretty_name: WikiOmnia
size_categories:
- 1M<n<10M
---
# Dataset Card for "Wikiomnia"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/RussianNLP](https://github.com/RussianNLP)
- **Paper:** [WikiOmnia: filtration and evaluation of the generated QA corpus on the whole Russian Wikipedia](https://arxiv.org/abs/2204.08009)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
We present the WikiOmnia dataset, a new publicly available set of QA-pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generative pipeline. The dataset includes every available article from Wikipedia for the Russian language. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).
WikiOmnia consists of 2 parts:
1. the voluminous, automatically generated part: 15.9 million triplets consisting of the original article summary, a corresponding generated question and a generated answer;
2. the filtered part: a subsample of 3.5 million triplets, fully verified with automatic means
WikiOmnia adheres to the standard SQuAD format, resulting in triplets "text paragraph - question based on paragraph - answer from the paragraph". See the following example:
**Original Wikipedia paragraph**: Коити Масимо (яп. Масимо Ко:ити) — известный режиссёр аниме и основатель японской анимационной студии Bee Train. С
момента основания студии он руководит производством почти всех её картин, а также время от времени принимает участие в работе над анимацией и музыкой.
**English translation**: Koichi Mashimo is a famous anime director and the founder of the Japanese animation studio Bee Train. Since the creation of the studio, he directed almost all studio’s works, and he
also sometimes participates in art and sound tasks.
**Generated question (ruT5)**: Кто является основателем японской анимационной студии Bee Train?
**Generated answer (ruT5)**: Коити Масимо
**English QA translation**: Who is the founder of the Japanese animation studio Bee Train? Koichi Mashimo
## Dataset Creation
Models used for dataset generation:
- [ruT5](https://huggingface.co/sberbank-ai/ruT5-large) large fine-tuned on SberQuaD
- [ruGPT-3](https://huggingface.co/sberbank-ai/rugpt3xl) XL fine-tuned on SberQuaD
- [ruBERT](http://docs.deeppavlov.ai/en/master/features/models/squad.html) DeepPavlov tuned for QA tasks
Source: Wikipedia version March 2021
Special tokens: <[TEXT]>, <[QUESTION]>, <[ANSWER]>
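A hedged sketch of how an input might be linearized with the special tokens listed above; the exact ordering and concatenation used in the paper's generation pipeline is an assumption here, for illustration only:

```python
# Linearize a paragraph (and optionally a question) with the special tokens.
# The concatenation order is an assumption, not the paper's exact format.
def build_prompt(paragraph, question=None):
    prompt = "<[TEXT]>" + paragraph + "<[QUESTION]>"
    if question is not None:
        prompt += question + "<[ANSWER]>"
    return prompt

p = build_prompt("Коити Масимо - известный режиссёр аниме.")
q = build_prompt("Коити Масимо - известный режиссёр аниме.", "Кто основал Bee Train?")
```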
The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).

## Additional Information
### Licensing Information
[Apache 2.0 license](https://github.com/RussianNLP/WikiOmnia/blob/main/LICENSE)
### Citation Information
```
@inproceedings{pisarevskaya-shavrina-2022-wikiomnia,
title = "{W}iki{O}mnia: filtration and evaluation of the generated {QA} corpus on the whole {R}ussian {W}ikipedia",
author = "Pisarevskaya, Dina and
Shavrina, Tatiana",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.gem-1.10",
pages = "125--135",
abstract = "The General QA field has been developing the methodology referencing the Stanford Question answering dataset (SQuAD) as the significant benchmark. Compiling factual questions datasets requires manual annotations, limiting the training data{'}s potential size. We present the WikiOmnia dataset, a new publicly available set of QA pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generation and filtration pipeline. To ensure high quality of generated QA pairs, diverse manual and automated evaluation techniques were applied. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).",
}
```
### Contributions
Thanks to [@Deenochka](https://github.com/deenochka), [@TatianaShavrina](https://github.com/TatianaShavrina) | 10,705 | [
alexandrainst/scandi-reddit | 2022-12-21T17:54:31.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"language:da",
"language:sv",
"language:no",
"language:is",
"license:cc-by-4.0",
"region:us"
] | alexandrainst | null | null | 5 | 42 | 2022-12-20T12:13:19 | ---
pretty_name: ScandiReddit
language:
- da
- sv
- no
- is
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
---
# Dataset Card for ScandiReddit
## Dataset Description
- **Repository:** <https://github.com/alexandrainst/ScandiReddit>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 2341 MB
- **Size of the generated dataset:** 3594 MB
- **Total amount of disk used:** 5935 MB
### Dataset Summary
ScandiReddit is a filtered and post-processed corpus consisting of comments from [Reddit](https://reddit.com/).
All Reddit comments from December 2005 up until October 2022 were downloaded through [PushShift](https://files.pushshift.io/reddit/comments/), after which these were filtered based on the FastText language detection model. Any comment which was classified as Danish (`da`), Norwegian (`no`), Swedish (`sv`) or Icelandic (`is`) with a confidence score above 70% was kept.
The resulting comments were then deduplicated, removing roughly 438,000 comments. 5,000 comments written by Reddit bots were removed, and roughly 189,000 comments belonging to inappropriate subreddits (explicit and drug-related) were also removed.
Lastly, we removed roughly 40,000 near-duplicate comments from the resulting corpus, where near-duplicate here means that the comments have more than 80% of their word 5-grams in common.
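The near-duplicate criterion above can be sketched in a few lines of plain Python. Note that the direction of the ratio (here computed relative to the smaller 5-gram set) is an assumption; the card does not specify it:

```python
# Near-duplicate check: two comments are flagged if more than 80% of their
# word 5-grams coincide (ratio relative to the smaller set is an assumption).
def word_ngrams(text, n=5):
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def near_duplicates(a, b, n=5, threshold=0.8):
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    if not ga or not gb:  # too short to form any n-gram
        return False
    overlap = len(ga & gb) / min(len(ga), len(gb))
    return overlap > threshold

dup = near_duplicates(
    "det er ikke moro mer sa han i dag",
    "det er ikke moro mer sa han i dag igjen",
)
```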
### Supported Tasks and Leaderboards
Training language models is the intended task for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian (`no`) and Icelandic (`is`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 2341 MB
- **Size of the generated dataset:** 3594 MB
- **Total amount of disk used:** 5935 MB
An example from the dataset looks as follows.
```
{
'doc': 'Bergen er ødelagt. Det er ikke moro mer.',
'subreddit': 'Norway',
'language': 'da',
'language_confidence': 0.7472341656684875
}
```
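The 70% confidence filter described in the summary is easy to reproduce over records shaped like the sample above; this is a minimal sketch on in-memory dicts, not the project's actual filtering code:

```python
# Keep only Scandinavian-language records with confidence above 0.7,
# mirroring the filter described in the dataset summary.
records = [
    {"doc": "Bergen er ødelagt.", "language": "da", "language_confidence": 0.747},
    {"doc": "hello world", "language": "en", "language_confidence": 0.99},
    {"doc": "kanske", "language": "sv", "language_confidence": 0.41},
]
SCANDI = {"da", "sv", "no", "is"}
kept = [
    r for r in records
    if r["language"] in SCANDI and r["language_confidence"] > 0.7
]
```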
### Data Fields
The data fields are the same among all splits.
- `doc`: a `string` feature.
- `subreddit`: a `string` feature.
- `language`: a `string` feature.
- `language_confidence`: a `float64` feature.
### Language Distribution
| name | count |
|----------|---------:|
| sv | 6,967,420 |
| da | 4,965,195 |
| no | 1,340,470 |
| is | 206,689 |
| total | 13,479,774 |
### Top-50 Subreddit Distribution
| name | count |
|----------|--------:|
|sweden |4,881,483|
|Denmark |3,579,178|
|norge |1,281,655|
|svenskpolitik | 771,960|
|InfluencergossipDK | 649,910|
|swedishproblems | 339,683|
|Iceland | 183,488|
|dkfinance | 113,860|
|unket | 81,077|
|DanishEnts | 69,055|
|dankmark | 62,928|
|swedents | 58,576|
|scandinavia | 57,136|
|Allsvenskan | 56,006|
|Gothenburg | 54,395|
|stockholm | 51,016|
|ISKbets | 47,944|
|Sverige | 39,552|
|SWARJE | 34,691|
|GossipDK | 29,332|
|NorskFotball | 28,571|
|Superligaen | 23,641|
|Aarhus | 22,516|
|Svenska | 20,561|
|newsdk | 19,893|
|AskReddit | 16,672|
|copenhagen | 16,668|
|okpolarncp | 16,583|
|SwedditUniversalis | 15,990|
|Sveriges_politik | 15,058|
|intresseklubben | 13,246|
|Aktiemarknaden | 13,202|
|soccer | 12,637|
|teenagers | 10,845|
|Norway | 10,680|
|europe | 10,247|
|Matinbum | 9,792|
|oslo | 9,650|
|iksdagen | 9,232|
|Asksweddit | 8,851|
|Forsvaret | 8,641|
|Sverigesforsvarsmakt | 8,469|
|memes | 8,299|
|Danish | 8,268|
|DANMAG | 8,214|
|PewdiepieSubmissions | 7,800|
|sweddpolitik | 7,646|
|pinsamt | 7,318|
|arbetarrorelsen | 7,317|
|Ishockey | 6,824|
## Dataset Creation
### Curation Rationale
The Scandinavian languages do not have many open source social media datasets.
### Source Data
The raw Reddit data was collected through [PushShift](https://files.pushshift.io/reddit/comments/).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY 4.0
license](https://creativecommons.org/licenses/by/4.0/).
| 5,010 | [
martinjosifoski/SynthIE | 2023-03-06T21:59:52.000Z | [
"language:en",
"license:mit",
"arxiv:2303.04132",
"region:us"
] | martinjosifoski | The paper ``Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction'' builds on the idea that even for hard tasks of interest (with input X and Y) -- for which human-annotation is not practical and high-quality annotated data is not available -- by reversing the task (from Y to X), useful data can be synthetically generated even when that original task cannot be solved directly by the LLM. This process enables the creation of a high-quality dataset of X-Y pairs that will enable the training/fine-tuning of models for the original task of interest.
In particular, the paper studies the idea in the context of closed information extraction (IE), where a model is tasked with extracting the exhaustive set of facts expressed in natural language text. The synthetic data generation pipeline proposed in the paper comprises three primary components: (i) construction of a knowledge graph containing the entities and relations of interest; (ii) sampling of coherent triplet sets from the KG with comprehensive coverage of the entities and relations, and (iii) generation of high-quality text, expressing the triplets without any supplementary information. | @article{josifoski2023exploiting,
title={Exploiting Asymmetry for Synthetic Training Data Generation: {S}ynth{IE} and The Case of Information Extraction},
author={Josifoski, Martin and Sakota, Marija and Peyrard, Maxime and West, Robert},
journal={arXiv preprint arXiv:2303.04132},
year={2023}
} | 4 | 42 | 2023-03-03T12:15:35 | ---
license: mit
language:
- en
pretty_name: SynthIE
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage and Repository:** https://github.com/epfl-dlab/SynthIE
- **Paper:** https://arxiv.org/abs/2303.04132
### Dataset Summary
[Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction](https://arxiv.org/abs/2303.04132) builds on the idea that even for hard tasks of interest (with input X and Y) -- for which human-annotation is not practical and high-quality annotated data is not available -- by reversing the task (from Y to X), useful data can be synthetically generated even when that original task cannot be solved directly by the LLM. This process enables the creation of a high-quality dataset of X-Y pairs that will enable the training/fine-tuning of models for the original task of interest.
In particular, the paper studies the idea in the context of closed information extraction (IE), where a model is tasked with extracting the exhaustive set of facts expressed in natural language text. The synthetic data generation pipeline proposed in the paper comprises three primary components: (i) construction of a knowledge graph containing the entities and relations of interest; (ii) sampling of coherent triplet sets from the KG with comprehensive coverage of the entities and relations, and (iii) generation of high-quality text, expressing the triplets without any supplementary information. For more details regarding the dataset construction procedure, see the [paper](https://arxiv.org/abs/2303.04132).
We used this pipeline to generate two large high-quality datasets:<br>
**SynthIE-code**: consisting of around 1.8M training, 10K validation, and 50K test samples generated with [code-davinci-002](https://platform.openai.com/docs/models/gpt-3-5) <br>
**SynthIE-text**: consisting of 10K validation and 50K test samples generated with [text-davinci-003](https://platform.openai.com/docs/models/gpt-3-5) <br>
The text for the validation and test data points in SynthIE-code and SynthIE-text corresponds to the same triplet sets.
The resulting data is then used to train [SynthIE](https://github.com/epfl-dlab/SynthIE), a series of T5-based versions of [GenIE](https://github.com/epfl-dlab/GenIE) -- a recently proposed autoregressive closed IE system; as well as to enable a more accurate evaluation. As a baseline, T5 versions of GenIE are trained on the same dataset, [REBEL](https://aclanthology.org/2021.findings-emnlp.204.pdf), as the original work. The (processed) version of this dataset, suitable for closed IE and used in the paper's experiments, is provided in this repository.
According to the human evaluation conducted in the paper, the synthetically generated data is substantially more faithful than the distantly supervised REBEL and contains around 15\% false negative (opposed to REBEL's 70\%) and 22\% false positive (opposed to REBEL's 56\%) annotations while uniformly covering all relations (see the paper for more details).
### Languages
To stay comparable to GenIE, [SynthIE](https://github.com/epfl-dlab/SynthIE) considers only English. Therefore, the text in SynthIE-code and SynthIE-text is generated in English only. However, the triplets' constituents come from WikiData and are language invariant. Therefore, triplet sets with labels for many languages can easily be obtained.
## Dataset Structure
The SynthIE meta-dataset actually comprises 3 datasets:
- **SynthIE-code** (`synthie_code`)
- **SynthIE-text** (`synthie_text`)
- **REBEL** (`rebel`)
**SynthIE-code**
The samples in this dataset were generated with `code-davinci-002`.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | 1,815,378 | 10,000 | 50,286 |
| Triplets | 6,055,911 | 34,262 | 172,991 |
| Entities | 1,806,126 | 27,553 | 105,176 |
| Relations | 888 | 883 | 888 |
**SynthIE-text**
The samples in this dataset were generated with `text-davinci-003`.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | -- | 10,000 | 50,286 |
| Triplets | -- | 34,262 | 172,991 |
| Entities | -- | 27,553 | 105,176 |
| Relations | -- | 883 | 888 |
**REBEL**
The samples in this dataset are processed and further annotated from the already existing [REBEL](https://huggingface.co/datasets/Babelscape/rebel-dataset) dataset.
| | Train | Valid | Test |
| ----- | ----- | ----- | ----- |
| Data Points | 2,813,210 | 155,926 | 156,449 |
| Triplets | 7,187,915 | 397,326 | 398,252 |
| Entities | 2,038,741 | 205,080 | 205,549 |
| Relations | 1071 | 691 | 690 |
Note that REBEL is substantially more skewed than SynthIE-code and SynthIE-text. Here are the relation frequency (in terms of data points) statistics for REBEL and SynthIE-code.
| | min | 1st quantile | median | 3rd quantile | max |
| ----- | ----- | ----- | ----- | ----- | ----- |
| SynthIE-code | 61 | 1043 | 1691 | 3944 | 499,783 |
| REBEL | 1 | 7 | 47 | 625 | 1,202,489 |
**SynthIE-code/SynthIE-text/REBEL processed**
Additionally, we provide a processed version (that was used in the paper) of each dataset. The processing consists of pre-computations/pre-processing that were run to speed up data loading for the experiments. The key difference is that in the processed version of SynthIE-code and SynthIE-text, the target triplets are consistently ordered according to a heuristic detecting the constituent entities' appearance position in the text, with triplets whose entities appear earlier in the text placed earlier in the output linearization (cf. paper). The triplets for REBEL are ordered even in the "unprocessed" version. To load the processed version of a dataset, add the suffix "_pc" to the original identifier (i.e., synthie_code_pc, synthie_text_pc, rebel_pc). The processing is performed by applying [this](https://github.com/epfl-dlab/SynthIE/blob/main/scripts/pre_computing.py) script on the original data.
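The appearance-order heuristic can be sketched as follows. Matching on the subject's surface form (with underscores replaced by spaces) is an assumption for illustration; the paper's heuristic may be more involved:

```python
# Order triplets by where their subject's surface form first appears in the
# text; unmatched subjects sort last. This is an illustrative sketch only.
def order_triplets(text, triplets):
    def first_pos(t):
        name = t["subject"]["surfaceform"].replace("_", " ")
        pos = text.find(name)
        return pos if pos >= 0 else len(text)
    return sorted(triplets, key=first_pos)

ordered = order_triplets(
    "Scopus is owned by Elsevier.",
    [
        {"subject": {"surfaceform": "Elsevier"}},
        {"subject": {"surfaceform": "Scopus"}},
    ],
)
```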
### Data Fields
All of the datasets share the same schema. Here is a list of the fields paired with a description.
- `id`: A unique numeric identifier, starting from 0 for each dataset.
- `text`: A string expressing the text corresponding to this sample.
- `triplets`: A list of triplets that are expressed in the text. Each triplet corresponds to a dictionary
- `subject`: The subject refers to an entity. It is a dictionary of:
- `surfaceform`: A textual label corresponding to the title of the entity's English Wikipedia page
- `uri`: A string corresponding to the entity's WikiData identifier
- `relation`: The relation refers to a relation. It is a dictionary of:
- `surfaceform`: The textual label assigned to the WikiData item corresponding to the given relation.
- `uri`: A string corresponding to the relation's WikiData identifier
- `object`: Same as the subject, the object refers to an entity and corresponds to a dictionary with the same structure.
- `entities`: A list comprising all the entities expressed in the text (appearing as a subject or an object in any of the triplets). Each entity is expressed as a dictionary following the same structure as the `subject` and `object` entities in the triplet list.
- `relations`: A list comprising all the relations expressed in the text (appearing as the relation in any of the triplets). Each relation is expressed as a dictionary following the same structure as the `relation` in the triplet list.
Here is an example of a data point:
```
{'id': 1,
'text': 'The Journal of Colloid and Interface Science is a bibliographic '
'review indexed in Scopus and published by Elsevier. Its main subject '
'is chemical engineering, and it is written in the English language. '
'It is based in the United States, and is owned by Elsevier, the same '
'company that owns Scopus.',
'triplets': [{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'indexed in bibliographic "
"review', 'uri': 'P8875'}",
'object': "{'surfaceform': 'Scopus', 'uri': 'Q371467'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'main subject', 'uri': 'P921'}",
'object': "{'surfaceform': 'Chemical_engineering', 'uri': "
"'Q83588'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'language of work or name', "
"'uri': 'P407'}",
'object': "{'surfaceform': 'English_language', 'uri': 'Q1860'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'publisher', 'uri': 'P123'}",
'object': "{'surfaceform': 'Elsevier', 'uri': 'Q746413'}"},
{'subject': "{'surfaceform': "
"'Journal_of_Colloid_and_Interface_Science', 'uri': "
"'Q3902043'}",
'predicate': "{'surfaceform': 'country of origin', 'uri': "
"'P495'}",
'object': "{'surfaceform': 'United_States', 'uri': 'Q30'}"},
{'subject': "{'surfaceform': 'Scopus', 'uri': 'Q371467'}",
'predicate': "{'surfaceform': 'owned by', 'uri': 'P127'}",
'object': "{'surfaceform': 'Elsevier', 'uri': 'Q746413'}"}],
'entities': [{'surfaceform': 'Journal_of_Colloid_and_Interface_Science',
'uri': 'Q3902043'},
{'surfaceform': 'Scopus', 'uri': 'Q371467'},
{'surfaceform': 'Chemical_engineering', 'uri': 'Q83588'},
{'surfaceform': 'English_language', 'uri': 'Q1860'},
{'surfaceform': 'Elsevier', 'uri': 'Q746413'},
{'surfaceform': 'United_States', 'uri': 'Q30'}],
'relations': [{'surfaceform': 'indexed in bibliographic review',
'uri': 'P8875'},
{'surfaceform': 'main subject', 'uri': 'P921'},
{'surfaceform': 'language of work or name', 'uri': 'P407'},
{'surfaceform': 'publisher', 'uri': 'P123'},
{'surfaceform': 'country of origin', 'uri': 'P495'},
{'surfaceform': 'owned by', 'uri': 'P127'}]}
```
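Note that in the example above the constituents under `triplets` are *stringified* Python dicts, whereas `entities` and `relations` are plain dicts. Whether this holds for every record is an assumption based on the sample; if it does, a small helper can parse them back:

```python
import ast

# Parse a stringified constituent back into a dict; pass plain dicts through.
def parse_constituent(raw):
    return ast.literal_eval(raw) if isinstance(raw, str) else raw

parsed = parse_constituent("{'surfaceform': 'Scopus', 'uri': 'Q371467'}")
```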
### Data Splits
Each dataset (except SynthIE-text, which does not have a train set) has the same 4 splits:
- `train`
- `validation`
- `test`
- `test_small`
The first three are self-explanatory; the `test_small` split corresponds to a randomly sampled subset of the `test` split in which the IDs of the data points are kept the same as in the test set from which they were sampled (i.e., after the sampling IDs are not reset to 0 and resigned).
## Dataset Creation
Collecting datasets for the closed IE task is time-consuming, expensive, and even hardly feasible, as it requires annotators to know the entire entity and relation catalogs and reason about all possible facts expressed in the text. As a result, only small or noisy datasets exist. The only large dataset available, REBEL, suffers from several problems: (i) Noise: it is constructed based on distant supervision, and for many data points, the target set does not contain all the facts expressed in the text or is partially incorrect; (ii) Skewness: most relations appear only a few times in the dataset, resulting in models that ignore most of the information when used for training and poor estimates of performance when used for evaluation.
This dataset is constructed using a synthetic data generation pipeline, proposed in the paper [Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction](https://arxiv.org/abs/2303.04132), and serves as a use case for a task for which (i) high-quality annotated data is not available; (ii) human-annotation is not practical; (iii) the direct task (closed IE) is challenging for an LLM. Concretely, by reversing the task and generating the data in the opposite direction -- going from triplets to text -- high-quality useful data can be generated. The pipeline used to construct the dataset comprises three components: (i) construction of a knowledge graph containing the entities and relations of interest; (ii) sampling of coherent triplet sets from the KG with comprehensive coverage of the entities and relations, and (iii) generation of high-quality text, expressing the triplets without any supplementary information. For more details regarding the dataset construction procedure and considerations for using the data, see the "Synthetic Data Generation", "Discussion", and "Limitations" sections of the [paper](https://arxiv.org/abs/2303.04132).
## Additional Information
### Licensing Information
The dataset is licensed under the terms of the MIT license.
### Citation Information
```
@article{josifoski2023exploiting,
title={Exploiting Asymmetry for Synthetic Training Data Generation: {S}ynth{IE} and The Case of Information Extraction},
author={Josifoski, Martin and Sakota, Marija and Peyrard, Maxime and West, Robert},
journal={arXiv preprint arXiv:2303.04132},
year={2023}
}
```
| 13,811 | [
[
-0.0297088623046875,
-0.03668212890625,
0.03436279296875,
4.172325134277344e-7,
-0.006587982177734375,
0.00966644287109375,
-0.025604248046875,
-0.039215087890625,
0.0194091796875,
0.03411865234375,
-0.053680419921875,
-0.049896240234375,
-0.0282745361328125,
... |
StampyAI/alignment-research-dataset | 2023-08-26T19:12:23.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"arxiv:2206.02841",
"region:us"
] | StampyAI | The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety from various books, research papers, and alignment related blog posts. | null | 7 | 42 | 2023-04-26T08:57:46 | ---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- question-answering
pretty_name: alignment-research-dataset
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: string
- name: title
dtype: string
- name: text
dtype: large_string
- name: url
dtype: string
- name: date_published
dtype: string
- name: authors
sequence: string
- name: summary
sequence: string
- name: source_type
dtype: string
- name: book_title
dtype: string
- name: karma
dtype: int32
- name: votes
dtype: int32
- name: words
dtype: int32
- name: comment_count
dtype: int32
- name: tags
sequence: string
- name: modified_at
dtype: string
- name: alias
dtype: string
- name: data_last_modified
dtype: string
- name: abstract
dtype: string
- name: author_comment
dtype: string
- name: journal_ref
dtype: string
- name: doi
dtype: string
- name: primary_category
dtype: string
- name: categories
sequence: string
- name: initial_source
dtype: string
- name: bibliography_bib
sequence:
- name: title
dtype: string
config_name: all
splits:
- name: train
num_bytes: 471644446
num_examples: 14271
download_size: 484827959
dataset_size: 471644446
---
# AI Alignment Research Dataset
The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety from various books, research papers, and alignment-related blog posts. This is a work in progress: components are still being cleaned up and will be updated more regularly.
## Sources
Here is the list of sources along with sample contents:
- [agentmodel](https://agentmodels.org/)
- [agisf](https://course.aisafetyfundamentals.com/) - recommended readings from AGI Safety Fundamentals
- [aisafety.info](https://aisafety.info/) - Stampy's FAQ
- [alignmentforum](https://www.alignmentforum.org)
- [alignment_newsletter](https://rohinshah.com/alignment-newsletter/)
- [arbital](https://arbital.com/)
- [arxiv](https://arxiv.org/) - relevant research papers
- blogs - entire websites automatically scraped
- [AI Impacts](https://aiimpacts.org/)
- [AI Safety Camp](https://aisafety.camp/)
- [carado.moe](https://carado.moe/)
- [Cold Takes](https://www.cold-takes.com/)
- [DeepMind technical blogs](https://www.deepmind.com/blog-categories/technical-blogs)
- [DeepMind AI Safety Research](https://deepmindsafetyresearch.medium.com/)
- [EleutherAI](https://blog.eleuther.ai/)
- [generative.ink](https://generative.ink/posts/)
- [Gwern Branwen's blog](https://gwern.net/)
- [Jack Clark's Import AI](https://importai.substack.com/)
- [MIRI](https://intelligence.org/)
- [Jacob Steinhardt's blog](https://jsteinhardt.wordpress.com/)
- [ML Safety Newsletter](https://newsletter.mlsafety.org/)
- [Transformer Circuits Thread](https://transformer-circuits.pub/)
- [Open AI Research](https://openai.com/research/)
- [Victoria Krakovna's blog](https://vkrakovna.wordpress.com/)
- [Eliezer Yudkowsky's blog](https://www.yudkowsky.net/)
- [distill](https://distill.pub/)
- [eaforum](https://forum.effectivealtruism.org/) - selected posts
- [lesswrong](https://www.lesswrong.com/) - selected posts
- special_docs - individual documents curated from various resources
- [Make a suggestion](https://bit.ly/ard-suggestion) for sources not already in the dataset
- youtube - playlists & channels
- [AI Alignment playlist](https://www.youtube.com/playlist?list=PLCRVRLd2RhZTpdUdEzJjo3qhmX3y3skWA) and other lists
- [AI Explained](https://www.youtube.com/@aiexplained-official)
- [Evan Hubinger's AI Safety Talks](https://www.youtube.com/@aisafetytalks)
- [AI Safety Reading Group](https://www.youtube.com/@aisafetyreadinggroup/videos)
- [AiTech - TU Delft](https://www.youtube.com/@AiTechTUDelft/)
- [Rob Miles AI](https://www.youtube.com/@RobertMilesAI)
## Keys
All entries contain the following keys:
- `id` - string of unique identifier
- `source` - string of data source listed above
- `title` - string of document title
- `authors` - list of strings
- `text` - full text of document content
- `url` - string of valid link to text content
- `date_published` - in UTC format
Additional keys may be available depending on the source document.
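Once loaded, entries can be grouped or filtered by these shared keys. A minimal sketch using hand-written records that mimic the schema above (not real dataset rows):

```python
# Hypothetical records mimicking the schema above (not actual dataset contents).
records = [
    {"id": "1", "source": "arxiv", "title": "Paper A", "authors": ["Ann"],
     "text": "...", "url": "https://example.org/a", "date_published": "2022-01-01T00:00:00Z"},
    {"id": "2", "source": "lesswrong", "title": "Post B", "authors": ["Bob"],
     "text": "...", "url": "https://example.org/b", "date_published": "2022-02-01T00:00:00Z"},
]

# Group document titles by their source.
by_source = {}
for row in records:
    by_source.setdefault(row["source"], []).append(row["title"])
```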
## Usage
Execute the following code to download and parse the files:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset')
```
To only get the data for a specific source, pass it in as the second argument, e.g.:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
```
## Limitations and Bias
LessWrong posts are heavily weighted toward content about doom and existential risk, so be mindful of this when training or fine-tuning generative language models on the dataset.
## Contributing
The scraper to generate this dataset is open-sourced on [GitHub](https://github.com/StampyAI/alignment-research-dataset) and currently maintained by volunteers at StampyAI / AI Safety Info. [Learn more](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr) or join us on [Discord](https://discord.gg/vjFSCDyMCy).
## Rebuilding info
This README contains info about the number of rows and their features which should be rebuilt each time datasets get changed. To do so, run:
```
datasets-cli test ./alignment-research-dataset --save_info --all_configs
```
## Citing the Dataset
For more information, here is the [paper](https://arxiv.org/abs/2206.02841) and [LessWrong](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai) post. Please use the following citation when using the dataset:
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).
gsarti/iwslt2017_context | 2023-05-07T14:09:24.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ar",
"language:de",
"language:en",
"language:fr",
"language:it",
"language:ja",
"language... | gsarti | The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. As unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. | @inproceedings{cettolo-etal-2017-overview,
title = "Overview of the {IWSLT} 2017 Evaluation Campaign",
author = {Cettolo, Mauro and
Federico, Marcello and
Bentivogli, Luisa and
Niehues, Jan and
St{\\"u}ker, Sebastian and
Sudoh, Katsuhito and
Yoshino, Koichiro and
Federmann, Christian},
booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation",
month = dec # " 14-15",
year = "2017",
address = "Tokyo, Japan",
publisher = "International Workshop on Spoken Language Translation",
url = "https://aclanthology.org/2017.iwslt-1.1",
pages = "2--14",
} | 1 | 42 | 2023-05-07T14:03:04 | ---
annotations_creators:
- crowdsourced
language:
- ar
- de
- en
- fr
- it
- ja
- ko
- nl
- ro
- zh
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2017
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2017
dataset_info:
- config_name: iwslt2017-en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-it-en
features:
- name: translation
dtype:
translation:
languages:
- it
- en
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-it-nl
features:
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-it-ro
features:
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-nl-en
features:
- name: translation
dtype:
translation:
languages:
- nl
- en
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-nl-it
features:
- name: translation
dtype:
translation:
languages:
- nl
- it
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-nl-ro
features:
- name: translation
dtype:
translation:
languages:
- nl
- ro
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ro-en
features:
- name: translation
dtype:
translation:
languages:
- ro
- en
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-ro-it
features:
- name: translation
dtype:
translation:
languages:
- ro
- it
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-ro-nl
features:
- name: translation
dtype:
translation:
languages:
- ro
- nl
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 27748780
dataset_size: 58736561
- config_name: iwslt2017-de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758320
dataset_size: 44427829
- config_name: iwslt2017-en-ar
features:
- name: translation
dtype:
translation:
languages:
- en
- ar
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 29333173
dataset_size: 58736561
- config_name: iwslt2017-en-de
features:
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758334
dataset_size: 44427829
- config_name: iwslt2017-en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 27699724
dataset_size: 51248330
- config_name: iwslt2017-en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26983602
dataset_size: 50222118
- config_name: iwslt2017-en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364776
dataset_size: 53767131
- config_name: iwslt2017-en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 27597071
dataset_size: 46079068
- config_name: iwslt2017-fr-en
features:
- name: translation
dtype:
translation:
languages:
- fr
- en
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 26880731
dataset_size: 51248330
- config_name: iwslt2017-ja-en
features:
- name: translation
dtype:
translation:
languages:
- ja
- en
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26190859
dataset_size: 50222118
- config_name: iwslt2017-ko-en
features:
- name: translation
dtype:
translation:
languages:
- ko
- en
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364733
dataset_size: 53767131
- config_name: iwslt2017-zh-en
features:
- name: translation
dtype:
translation:
languages:
- zh
- en
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 26849290
dataset_size: 46079068
---
# Dataset Card for IWSLT 2017
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2017/TED-tasks](https://sites.google.com/site/iwsltevaluation2017/TED-tasks)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Overview of the IWSLT 2017 Evaluation Campaign](https://aclanthology.org/2017.iwslt-1.1/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.24 GB
- **Size of the generated dataset:** 1.14 GB
- **Total amount of disk used:** 5.38 GB
*This repository contains a modified version of the loading script used in the official [iwslt2017](https://huggingface.co/datasets/iwslt2017) repository, updated to include document and segment information for all available sentence pairs, enabling their use in document-level and context-aware MT applications. Refer to the original repository for additional information.*
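As an illustration of how document and segment information can be used for context-aware MT, here is a minimal sketch that attaches source-side context from the preceding segments of the same document. The field names `doc_id` and `seg_id` are assumptions for illustration only; check the actual dataset features:

```python
# Hypothetical sentence pairs tagged with document/segment ids.
pairs = [
    {"doc_id": "talk1", "seg_id": 0, "en": "Hello.", "de": "Hallo."},
    {"doc_id": "talk1", "seg_id": 1, "en": "Thank you.", "de": "Danke."},
    {"doc_id": "talk2", "seg_id": 0, "en": "Welcome.", "de": "Willkommen."},
]

def with_context(pairs, size=1):
    """Attach the previous `size` source segments of the same document as context."""
    out = []
    for p in sorted(pairs, key=lambda x: (x["doc_id"], x["seg_id"])):
        ctx = [q["en"] for q in pairs
               if q["doc_id"] == p["doc_id"]
               and p["seg_id"] - size <= q["seg_id"] < p["seg_id"]]
        out.append({**p, "context": " ".join(ctx)})
    return out
```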
jainr3/diffusiondb-pixelart | 2023-05-11T18:59:45.000Z | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n>1T",
"source_datasets:modified",
"language:en",
"license:cc0-1.0",
"stable diffusion"... | jainr3 | DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 2
million images generated by Stable Diffusion using prompts and hyperparameters
specified by real users. The unprecedented scale and diversity of this
human-actuated dataset provide exciting research opportunities in understanding
the interplay between prompts and generative models, detecting deepfakes, and
designing human-AI interaction tools to help users more easily use these models. | @article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
} | 6 | 42 | 2023-05-11T17:28:21 | ---
layout: default
title: Home
nav_order: 1
has_children: false
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: DiffusionDB-Pixelart
size_categories:
- n>1T
source_datasets:
- modified
tags:
- stable diffusion
- prompt engineering
- prompts
task_categories:
- text-to-image
- image-to-text
task_ids:
- image-captioning
---
# DiffusionDB-Pixelart
## Table of Contents
- [DiffusionDB](#diffusiondb)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Subset](#subset)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Metadata](#dataset-metadata)
- [Metadata Schema](#metadata-schema)
- [Data Splits](#data-splits)
- [Loading Data Subsets](#loading-data-subsets)
- [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb)
- **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
- **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
- **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
### Dataset Summary
**This is a subset of the DiffusionDB 2M dataset which has been turned into pixel-style art.**
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb).
### Supported Tasks and Leaderboards
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
### Languages
The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
### Subset
DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs. The pixel-art version of the data was taken from DiffusionDB 2M and has only 2,000 examples.
|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB-pixelart|2k|~1.5k|~1.6GB|`images/`|`metadata.parquet`|
Images in DiffusionDB-pixelart are stored in `png` format.
## Dataset Structure
We use a modularized file structure to distribute DiffusionDB. The 2k images in DiffusionDB-pixelart are split into folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters.
```bash
# DiffusionDB 2k
./
├── images
│ ├── part-000001
│ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png
│ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png
│ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-002000
└── metadata.parquet
```
These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image in DiffusionDB-pixelart is a `PNG` file. The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
### Data Instances
For example, below is the image of `ec9b5e2c-028e-48ac-8857-a52814fd2a06.png` and its key-value pair in `part-000001.json`.
<img width="300" src="https://datasets-server.huggingface.co/assets/jainr3/diffusiondb-pixelart/--/2k_all/train/0/image/image.png">
```json
{
"ec9b5e2c-028e-48ac-8857-a52814fd2a06.png": {
"p": "doom eternal, game concept art, veins and worms, muscular, crustacean exoskeleton, chiroptera head, chiroptera ears, mecha, ferocious, fierce, hyperrealism, fine details, artstation, cgsociety, zbrush, no background ",
"se": 3312523387,
"c": 7.0,
"st": 50,
"sa": "k_euler"
},
}
```
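The abbreviated JSON keys can be expanded into readable names. The key meanings below (prompt, seed, CFG scale, steps, sampler) are assumed from the upstream DiffusionDB card, and the helper itself is hypothetical:

```python
# Map DiffusionDB's abbreviated JSON keys to readable names (assumed meanings).
KEY_NAMES = {"p": "prompt", "se": "seed", "c": "cfg_scale", "st": "steps", "sa": "sampler"}

def expand(entry):
    return {KEY_NAMES.get(k, k): v for k, v in entry.items()}

record = expand({"p": "doom eternal, game concept art", "se": 3312523387,
                 "c": 7.0, "st": 50, "sa": "k_euler"})
```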
### Data Fields
- key: Unique image filename
- `p`: The text prompt used to generate this image
### Dataset Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include a metadata table `metadata.parquet` for DiffusionDB-pixelart.
Each row of the table represents an image. We store the table in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
Below are three random rows from `metadata.parquet`.
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw |
|:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:|
| 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 |
| a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 |
| 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 |
#### Metadata Schema
`metadata.parquet` schema:
|Column|Type|Description|
|:---|:---|:---|
|`image_name`|`string`|Image UUID filename.|
|`text`|`string`|The text prompt used to generate this image.|
> **Warning**
> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
<img src="https://i.imgur.com/1RiGAXL.png" width="100%">
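For example, rows can be filtered with a chosen threshold. A minimal sketch over hand-written rows (column names taken from the sample metadata rows above; the threshold value is arbitrary):

```python
# Hypothetical metadata rows with the NSFW score columns shown above.
rows = [
    {"image_name": "a.png", "image_nsfw": 0.08, "prompt_nsfw": 0.004},
    {"image_name": "b.png", "image_nsfw": 0.69, "prompt_nsfw": 0.109},
]

THRESHOLD = 0.2  # pick a value appropriate for your use case
safe = [r["image_name"] for r in rows
        if r["image_nsfw"] < THRESHOLD and r["prompt_nsfw"] < THRESHOLD]
```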
### Data Splits
For DiffusionDB-pixelart, we split 2k images into folders where each folder contains 1,000 images and a JSON file.
### Loading Data Subsets
DiffusionDB is large! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
#### Method 1: Using Hugging Face Datasets Loader
You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train).
```python
from datasets import load_dataset

# Load the dataset with the `2k_random_1k` subset
dataset = load_dataset('jainr3/diffusiondb-pixelart', '2k_random_1k')
```
## Dataset Creation
### Curation Rationale
Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.
However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
### Source Data
#### Initial Data Collection and Normalization
We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users to write or share prompts with personal information.
#### Who are the source language producers?
The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the discord usernames from the dataset.
We decided to anonymize the dataset because some prompts might include sensitive information: explicitly linking prompts to their creators could cause them harm.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop a better understanding of large text-to-image generative models.
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
It should be noted that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.
### Discussion of Biases
The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could generate images with Stable Diffusion through a bot before its public release. As these users started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent that of novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
### Other Known Limitations
**Generalizability.** Previous research has shown that a prompt that works well on one generative model might not give the optimal result when used in another.
Therefore, different models can require users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less common in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.
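Patterns like the comma-separated keyword style are easy to probe once prompts are split on commas. As a minimal sketch (the function name and sample prompts below are our own illustrations, not taken from DiffusionDB), one could count keyword frequencies like this:

```python
from collections import Counter

def extract_keywords(prompt: str) -> list[str]:
    """Split a Stable Diffusion-style prompt into comma-separated keywords."""
    return [part.strip().lower() for part in prompt.split(",") if part.strip()]

# Hypothetical prompts illustrating the comma-separated keyword style
# common in Stable Diffusion prompts (not real DiffusionDB entries).
prompts = [
    "a castle on a hill, highly detailed, trending on artstation, unreal engine",
    "portrait of a fox, oil painting, highly detailed, trending on artstation",
]

keyword_counts = Counter(kw for p in prompts for kw in extract_keywords(p))
print(keyword_counts.most_common(3))
```

Analyses of this kind (keyword frequency, prompt length, co-occurrence) are one way DiffusionDB can support research on how users write prompts.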
## Additional Information
### Dataset Curators
DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
### Licensing Information
The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```bibtex
@article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
}
```
### Contributions
If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact the original author [Jay Wang](https://zijie.wang).
julien040/hacker-news-posts | 2023-06-06T17:04:37.000Z | [
"size_categories:1M<n<10M",
"source_datasets:Hacker News",
"language:en",
"license:cc-by-nc-sa-4.0",
"hacker news",
"region:us"
] | julien040 | null | null | 0 | 42 | 2023-06-06T16:19:47 | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- hacker news
pretty_name: Hacker News stories dataset
size_categories:
- 1M<n<10M
source_datasets:
- Hacker News
---
# Hacker News Stories Dataset
This is a dataset containing approximately 4 million stories from Hacker News, exported to a CSV file. The dataset includes the following fields:
- `id` (int64): The unique identifier of the story.
- `title` (string): The title of the story.
- `url` (string): The URL of the story.
- `score` (int64): The score of the story.
- `time` (int64): The time the story was posted, in Unix time.
- `comments` (int64): The number of comments on the story.
- `author` (string): The username of the person who posted the story.
## Accessing the Dataset
The dataset can be accessed through [Hugging Face Datasets](https://huggingface.co/datasets/julien040/hacker-news-posts). You can download the dataset in CSV format or use the Hugging Face Datasets library to load the dataset directly in your Python code.
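Once the CSV is downloaded, the documented fields can be parsed with nothing but the standard library. The sketch below uses a few made-up rows matching the dataset's schema (not real Hacker News data) to show converting the Unix `time` field and filtering by `score`:

```python
import csv
import io
from datetime import datetime, timezone

# A tiny CSV sample matching the documented schema (made-up rows, not real data).
sample_csv = """id,title,url,score,time,comments,author
101,Show HN: My project,https://example.com/a,250,1672531200,58,alice
102,A blog post,https://example.com/b,12,1672617600,3,bob
103,Ask HN: Advice?,,87,1672704000,21,carol
"""

stories = []
for row in csv.DictReader(io.StringIO(sample_csv)):
    row["score"] = int(row["score"])
    row["comments"] = int(row["comments"])
    # `time` is Unix seconds; convert to an aware datetime for readability.
    row["posted_at"] = datetime.fromtimestamp(int(row["time"]), tz=timezone.utc)
    stories.append(row)

# Keep only high-scoring stories, newest first.
popular = sorted(
    (s for s in stories if s["score"] >= 50),
    key=lambda s: s["posted_at"],
    reverse=True,
)
print([(s["title"], s["score"]) for s in popular])
```

The same filtering works unchanged on the full CSV export; only the file source differs.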
## License
The dataset is made available under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
## Disclaimer
The dataset is provided as is, without warranty of any kind, express or implied. The owner of the dataset makes no representations or warranties, express or implied, regarding the dataset or its use. The owner of the dataset will not be liable for any damages arising out of or in connection with the use of the dataset.
## Updates
The dataset will be updated regularly to include new stories from Hacker News.
PhilSad/celeba-hq-15k | 2023-07-26T15:24:31.000Z | [
"region:us"
] | PhilSad | null | null | 0 | 42 | 2023-07-26T15:22:05 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': female
'1': male
splits:
- name: train
num_bytes: 1463302608.0
num_examples: 15000
download_size: 1463113717
dataset_size: 1463302608.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "celeba-hq-15k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hsultanbey/javascript | 2023-08-03T09:42:53.000Z | [
"region:us"
] | hsultanbey | null | null | 0 | 42 | 2023-08-03T09:42:14 | ---
dataset_info:
features:
- name: code
dtype: string
splits:
- name: train
num_bytes: 863518025
num_examples: 99999
download_size: 308377342
dataset_size: 863518025
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "javascript"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
umaru97/flickr30k_train_val_test | 2023-08-04T06:07:36.000Z | [
"region:us"
] | umaru97 | null | null | 0 | 42 | 2023-08-04T06:03:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
list: string
- name: sentids
list: string
- name: split
dtype: string
- name: img_id
dtype: string
- name: filename
dtype: string
splits:
- name: train
num_bytes: 3817535945.6791124
num_examples: 29000
- name: val
num_bytes: 140547184.20822826
num_examples: 1014
- name: test
num_bytes: 142117238.54065907
num_examples: 1000
download_size: 4305964964
dataset_size: 4100200368.4279995
---
# Dataset Card for "flickr30k_train_val_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Warlord-K/parti-prompts-subset-sdxl-1.0 | 2023-08-12T07:37:06.000Z | [
"region:us"
] | Warlord-K | null | null | 0 | 42 | 2023-08-12T07:36:24 | ---
dataset_info:
features:
- name: images
dtype: image
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 269194935.0
num_examples: 166
download_size: 269208266
dataset_size: 269194935.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "parti-prompts-subset-sdxl-1.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RikoteMaster/Emotion_Recognition_4_llama2_chat | 2023-08-17T11:22:36.000Z | [
"region:us"
] | RikoteMaster | null | null | 0 | 42 | 2023-08-17T11:22:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: Augmented
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 28688912
num_examples: 61463
download_size: 8968276
dataset_size: 28688912
---
# Dataset Card for "Emotion_Recognition_4_llama2_chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ashhadahsan/amazon_theme | 2023-10-21T16:21:26.000Z | [
"region:us"
] | ashhadahsan | null | null | 0 | 42 | 2023-08-17T18:54:47 | ---
dataset_info:
features:
- name: Transcript
dtype: string
- name: Review Theme
dtype: string
splits:
- name: train
num_bytes: 347105
num_examples: 943
download_size: 208574
dataset_size: 347105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "amazon_theme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
minh21/COVID-QA-validation-sentence-transformer | 2023-09-24T01:27:54.000Z | [
"region:us"
] | minh21 | null | null | 0 | 42 | 2023-09-24T01:27:47 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
splits:
- name: train
num_bytes: 95329437
num_examples: 2019
download_size: 17898620
dataset_size: 95329437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-validation-sentence-transformer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pablo-moreira/gpt4all-j-prompt-generations-pt | 2023-10-06T16:02:12.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:pt",
"license:apache-2.0",
"region:us"
] | pablo-moreira | null | null | 0 | 42 | 2023-09-28T01:43:05 | ---
language:
- pt
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- text-generation
pretty_name: GPT4All Prompt Generations translated into Portuguese using Google Translate.
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1956916380
num_examples: 808812
download_size: 1134108118
dataset_size: 1956916380
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "gpt4all-j-prompt-generations-pt"
## Dataset Description
A copy of the [gpt4all_prompt_generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations) dataset translated into Portuguese using the googletrans library.
## Translate
[translate_dataset.ipynb](translate_dataset.ipynb)
## Usage
[dataset_usage.ipynb](dataset_usage.ipynb)
ArmelRandy/precious | 2023-10-04T19:50:05.000Z | [
"region:us"
] | ArmelRandy | null | null | 0 | 42 | 2023-10-04T19:49:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 49113143.033731006
num_examples: 37824
- name: test
num_bytes: 2585243.966268994
num_examples: 1991
download_size: 31710707
dataset_size: 51698387.0
---
# Dataset Card for "precious"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vishnupriyavr/wiki-movie-plots-with-summaries | 2023-10-08T11:58:09.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | vishnupriyavr | null | null | 0 | 42 | 2023-10-07T11:47:01 | ---
license:
- cc-by-sa-4.0
converted_from: kaggle
kaggle_id: gabrieltardochi/wikipedia-movie-plots-with-plot-summaries
---
# Dataset Card for Wikipedia Movie Plots with AI Plot Summaries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/gabrieltardochi/wikipedia-movie-plots-with-plot-summaries
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Context
Wikipedia Movies Plots dataset by JustinR ( https://www.kaggle.com/jrobischon/wikipedia-movie-plots )
### Content
Everything is the same as in https://www.kaggle.com/jrobischon/wikipedia-movie-plots
### Acknowledgements
Please, go upvote https://www.kaggle.com/jrobischon/wikipedia-movie-plots dataset, since this is 100% based on that.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@gabrieltardochi](https://kaggle.com/gabrieltardochi)
### Licensing Information
The license for this dataset is cc-by-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed]
FreedomIntelligence/SocraticChat | 2023-10-12T06:10:36.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | 1 | 42 | 2023-10-12T05:53:51 | ---
license: apache-2.0
---
The dataset was generated through interactions between the user simulator `Socratic` and `GPT-3.5-turbo`, and comprises `50,728` samples.
For more details, please see the following link: https://github.com/FreedomIntelligence/PlatoLM
coastalcph/fair-rationales | 2023-10-13T12:54:10.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"source_datasets:extended",
"language:en",
"license:mit",
"bias",
"fairness",
"rationale",
"demographic",
"region:us"
] | coastalcph | Explainability methods are used to benchmark
the extent to which model predictions align
with human rationales i.e., are 'right for the
right reasons'. Previous work has failed to acknowledge, however,
that what counts as a rationale is sometimes subjective. This paper
presents what we think is a first of its kind, a
collection of human rationale annotations augmented with the annotators demographic information. | @inproceedings{thorn-jakobsen-etal-2023-right,
title = {Being Right for Whose Right Reasons?},
author = {Thorn Jakobsen, Terne Sasha and
Cabello, Laura and
S{\o}gaard, Anders},
booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
year = {2023},
publisher = {Association for Computational Linguistics},
url = {https://aclanthology.org/2023.acl-long.59},
doi = {10.18653/v1/2023.acl-long.59},
pages = {1033--1054}
} | 3 | 42 | 2023-10-12T11:57:58 | ---
license: mit
language:
- en
annotations_creators:
- crowdsourced
source_datasets:
- extended
task_categories:
- text-classification
task_ids:
- sentiment-classification
- open-domain-qa
tags:
- bias
- fairness
- rationale
- demographic
pretty_name: FairRationales
---
# Dataset Card for "FairRationales"
## Dataset Summary
We present a new collection of annotations for a subset of CoS-E [[1]](#1), DynaSent [[2]](#2), and SST [[3]](#3)/Zuco [[4]](#4) with demographics-augmented annotations, balanced across age and ethnicity.
We asked participants to choose a label and then provide supporting evidence (rationales) based on the input sentence for their answer.
Existing rationale datasets are typically constructed by giving annotators 'gold standard' labels,
and having them provide rationales for these labels.
Instead, we let annotators provide rationales for labels they choose themselves. This lets them engage
in the decision process, but it also acknowledges
that annotators with different backgrounds may disagree on classification decisions. Explaining other
people’s choices is error-prone [[5]](#5), and we do not want to bias the rationale
annotations by providing labels that align better
with the intuitions of some demographics than with
those of others.
Our annotators are balanced across age and ethnicity for six demographic groups, defined by
ethnicity {Black/African American, White/Caucasian, Latino/Hispanic} and age {Old, Young}.
Therefore, we can refer to our groups as their cross-product: **{BO, BY, WO, WY, LO, LY}**.
## Dataset Details
### DynaSent
We re-annotate N=480 instances
six times (for six demographic groups), comprising
240 instances labeled as positive, and 240 instances
labeled as negative in the DynaSent Round 2 **test**
set (see [[2]](#2)). This amounts to 2,880
annotations, in total.
To annotate rationales, we formulate the task as
marking 'supporting evidence' for the label, following how the task is defined by [[6]](#6). Specifically, we ask annotators to mark
all the words, in the sentence, they think shows
evidence for their chosen label.
#### >Our annotations:
| Label | Count |
| --- | --- |
| negative | 1555 |
| positive | 1435 |
| no sentiment | 470 |
| **Total** | **3460** |
Note that all the data is uploaded under a single 'train' split (read the [Uses](#uses) section for further details).
### SST2
We re-annotate N=263 instances six
times (for six demographic groups), which are all
the positive and negative instances from the Zuco*
dataset of Hollenstein et al. (2018), comprising a
**mixture of train, validation and test** set instances
from SST-2, *which should be removed from the original SST
data before training any model*.
These 263 reannotated instances do not contain any instances originally marked as `neutral` (or not conveying sentiment) because rationale annotation for neutral instances is ill-defined. Yet,
we still allow annotators to evaluate a sentence as
neutral, since we do not want to force our annotators to provide rationales for positive and negative
sentiment that they do not see.
*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales,
we add an extra layer of information for future research.
#### >Our annotations:
| Label | Count |
| --- | --- |
| positive | 1027 |
| negative | 900 |
| no sentiment | 163 |
| **Total** | **2090** |
Note that all the data is uploaded under a single 'train' split (read the [Uses](#uses) section for further details).
### CoS-E
We use the simplified version of CoS-E released by [[6]](#6).
We re-annotate N=500 instances from
the CoS-E **test** set six times (for six demographic groups)
and ask annotators to firstly select the answer to
the question that they find most correct and sensible, and then mark words that justify that answer.
Following [[7]](#7), we specify the
rationale task with a wording that should guide
annotators to make short, precise rationale annotations:
‘For each word in the question, if you
think that removing it will decrease your
confidence toward your chosen label,
please mark it.’
#### >Our annotations:
Total: 3760
Note that all the data is uploaded under a single 'train' split (read the [Uses](#uses) section for further details).
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/terne/Being_Right_for_Whose_Right_Reasons
- **Paper:** [Being Right for Whose Right Reasons?](https://aclanthology.org/2023.acl-long.59/)
## Uses <a id="uses"></a>
<!-- Address questions around how the dataset is intended to be used. -->
In our paper, we present a collection of three
existing datasets (SST2, DynaSent and Cos-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided
by different socio-demographic groups. Such profiling enables us to ask whose right reasons models are being right for and fosters future research on performance equality/robustness.
For each dataset, we provide the data under a single **'train'** split, due to the current limitation that a dataset cannot be uploaded with only a *'test'* split.
Note, however, that the original intended use of this collection of datasets was to **test** the quality & alignment of post-hoc explainability methods.
If you use different splits, please state them clearly to ease the reproducibility of your work.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
| Variable | Description |
| --- | --- |
| QID | The ID of the question (i.e. the annotation element/sentence) in the Qualtrics survey. Every second question asked for the classification, and every other asked for the rationale of that classification to be marked. These two questions and their answers for the same sentence are merged into one row, which is why the QID looks as if every second ID is skipped. |
| text_id | A numerical ID given to each unique text/sentence for easy sorting before comparing annotations across groups. |
| sentence | The text/sentence that is annotated, in its original formatting. |
| label | The (new) label given by the respective annotator/participant from Prolific. |
| label_index | The numerical format of the (new) label. |
| original_label | The label from the original dataset (Cose/Dynasent/SST). |
| rationale | The tokens marked as rationales by our annotators. |
| rationale_index | The indices of the tokens marked as rationales. In the processed files the indices start at 0; however, in the unprocessed files ("_all.csv", "_before_exclussions.csv") they start at 1. |
| rationale_binary | A binary version of the rationales where a token marked as part of the rationale = 1 and tokens not marked = 0. |
| age | The reported age of the annotator/participant (i.e. their survey response). This may be different from the age-interval the participant was recruited by (see recruitment_age). |
| recruitment_age | The age interval specified for the Prolific job to recruit the participant by. A mismatch between this and the participant's reported age, when asked in our survey, may mean a number of things, such as: Prolific's information is wrong or outdated; the participant made a mistake when answering the question; the participant was inattentive. |
| ethnicity | The reported ethnicity of the annotator/participant. This may be different from the ethnicity the participant was recruited by (see recruitment_ethnicity). |
| recruitment_ethnicity | The ethnicity specified for the Prolific job to recruit the participant by. Sometimes there is a mismatch between the information Prolific has on participants (which we use for recruitment) and what the participants report when asked again in the survey/task. This seems especially prevalent with some ethnicities, likely because participants may in reality identify with more than one ethnic group. |
| gender | The reported gender of the annotator/participant. |
| english_proficiency | The reported English-speaking ability (proxy for English proficiency) of the annotator/participant. Options were "Not well", "Well" or "Very well". |
| attentioncheck | All participants were given a simple attention check question at the very end of the Qualtrics survey (i.e. after annotation) which was either PASSED or FAILED. Participants who failed the check were still paid for their work, but their response should be excluded from the analysis. |
| group_id | An id describing the socio-demographic subgroup a participant belongs to and was recruited by. |
| originaldata_id | The id given to the text/sentence in the original dataset. In the case of SST data, this refers to ids within the Zuco dataset – a subset of SST which was used in our study.|
| annotator_ID | Anonymised annotator ID to enable analyses such as annotator (dis)agreement |
| sst2_id | The processed SST annotations contain an extra column with the index of the text in the SST-2 dataset. -1 means that we were unable to match the text to an instance in SST-2 |
| sst2_split | The processed SST annotations contain an extra column referring to the set in which the instance appears within SST-2. Some instances are part of the train set and should therefore be removed before training a model on SST-2 and testing on our annotations. |
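The `rationale_binary` field makes it straightforward to quantify agreement between rationales from different annotators or demographic groups. As an illustrative sketch (the function name and example vectors below are our own, not part of the dataset), token-level F1 between two binary rationale vectors can be computed like this:

```python
def rationale_token_f1(a: list[int], b: list[int]) -> float:
    """Token-level F1 between two binary rationale vectors of equal length."""
    assert len(a) == len(b)
    overlap = sum(x & y for x, y in zip(a, b))  # tokens marked by both
    marked_a, marked_b = sum(a), sum(b)
    if marked_a == 0 or marked_b == 0:
        return 0.0
    precision = overlap / marked_a
    recall = overlap / marked_b
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical `rationale_binary` vectors for the same sentence from two
# annotators in different demographic groups (illustrative values only).
group_a = [1, 0, 1, 1, 0, 0]
group_b = [1, 0, 0, 1, 0, 1]
print(rationale_token_f1(group_a, group_b))
```

Averaging such scores over sentences, per group pair, is one simple way to surface the inter-group disagreement discussed in the paper.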
## Dataset Creation
### Curation Rationale
Terne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard. Being Right for Whose Right Reasons?
In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
#### Annotation process
We refer to our [paper](https://aclanthology.org/2023.acl-long.59/) for further details on the data (Section 3), and specifically on the Annotation Process (Section 3.1) and Annotator Population (Section 3.2).
#### Who are the annotators?
Annotators were recruited via Prolific and consented to the use of their responses and demographic information for research purposes.
The annotation tasks were conducted through Qualtrics surveys. The exact surveys can be found [here](https://github.com/terne/Being_Right_for_Whose_Right_Reasons/tree/main/data/qualtrics_survey_exports).
## References
<a id="1">[1]</a>
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
<a id="2">[2]</a>
Christopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A Dynamic Benchmark for Sentiment Analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2388–2404, Online. Association for Computational Linguistics.
<a id="3">[3]</a>
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
<a id="4">[4]</a>
Nora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a Simultaneous EEG and Eye-Tracking Resource for Natural Sentence Reading. Scientific Data.
<a id="5">[5]</a>
Kate Barasz and Tami Kim. 2022. Choice perception: Making sense (and nonsense) of others’ decisions. Current opinion in psychology, 43:176–181.
<a id="6">[6]</a>
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. ERASER: A Benchmark to Evaluate Rationalized NLP Models.
<a id="7">[7]</a>
Cheng-Han Chiang and Hung-yi Lee. 2022. Re-Examining Human Annotations for Interpretable NLP.
## Citation
```bibtex
@inproceedings{thorn-jakobsen-etal-2023-right,
title = "Being Right for Whose Right Reasons?",
author = "Thorn Jakobsen, Terne Sasha and
Cabello, Laura and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.59",
doi = "10.18653/v1/2023.acl-long.59",
pages = "1033--1054",
abstract = "Explainability methods are used to benchmark the extent to which model predictions align with human rationales i.e., are {`}right for the right reasons{'}. Previous work has failed to acknowledge, however, that what counts as a rationale is sometimes subjective. This paper presents what we think is a first of its kind, a collection of human rationale annotations augmented with the annotators demographic information. We cover three datasets spanning sentiment analysis and common-sense reasoning, and six demographic groups (balanced across age and ethnicity). Such data enables us to ask both what demographics our predictions align with and whose reasoning patterns our models{'} rationales align with. We find systematic inter-group annotator disagreement and show how 16 Transformer-based models align better with rationales provided by certain demographic groups: We find that models are biased towards aligning best with older and/or white annotators. We zoom in on the effects of model size and model distillation, finding {--}contrary to our expectations{--} negative correlations between model size and rationale agreement as well as no evidence that either model size or model distillation improves fairness.",
}
```
## Dataset Card Contact
Thanks to [@lautel](https://github.com/lautel) for adding this dataset. | 14,260 | [
…truncated embedding vector… |
nlewins/LSK_full_with_audio | 2023-10-14T06:31:10.000Z | [
"region:us"
] | nlewins | null | null | 0 | 42 | 2023-10-14T06:25:36 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: en
dtype: string
- name: audio_transcription
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 2517843871.392
num_examples: 6132
- name: test
num_bytes: 284829964.0
num_examples: 682
download_size: 2787469428
dataset_size: 2802673835.392
---
# Dataset Card for "LSK_full_with_audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 758 | [
…truncated embedding vector… |
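The `dataset_info` block in the LSK_full_with_audio card above declares per-split `num_bytes` and `num_examples`. A small sketch of sanity-checking those declared sizes (values copied from the YAML above; actually fetching the audio would go through `datasets.load_dataset("nlewins/LSK_full_with_audio")`, which is deliberately not done here):

```python
# Split sizes as declared in the card's dataset_info block.
SPLITS = {
    "train": {"num_bytes": 2_517_843_871.392, "num_examples": 6_132},
    "test": {"num_bytes": 284_829_964.0, "num_examples": 682},
}

def avg_mb_per_example(split: str) -> float:
    """Average declared on-disk size per example, in megabytes."""
    info = SPLITS[split]
    return info["num_bytes"] / info["num_examples"] / 1e6

for name in SPLITS:
    # Both splits come out around 0.4 MB per example, which is plausible
    # for short 16 kHz audio clips plus their transcriptions.
    print(f"{name}: {avg_mb_per_example(name):.2f} MB/example")
```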
AlanRobotics/rm-extended | 2023-10-15T20:17:19.000Z | [
"region:us"
] | AlanRobotics | null | null | 0 | 42 | 2023-10-15T17:45:39 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 309494879.9766375
num_examples: 349097
- name: test
num_bytes: 34388714.02336253
num_examples: 38789
download_size: 194992395
dataset_size: 343883594.0
---
# Dataset Card for "rm-extended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 478 | [
…truncated embedding vector… |
Ceroxlol/pictarine | 2023-11-02T16:36:24.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"finance",
"region:us"
] | Ceroxlol | null | null | 0 | 42 | 2023-10-18T08:16:22 | ---
language:
- en
size_categories:
- n<1K
task_categories:
- question-answering
pretty_name: picta
tags:
- finance
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15778
num_examples: 80
download_size: 15022
dataset_size: 15778
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Dataset for training a chatbot for Pictarine. | 423 | [
…truncated embedding vector… |
arkubeth/librispeech | 2023-11-01T18:14:04.000Z | [
"region:us"
] | arkubeth | null | null | 0 | 42 | 2023-10-21T11:02:16 | Entry not found | 15 | [
…truncated embedding vector… |
cmu-mlsp/encodec_24khz-opt-125m-pretrained-ft-librispeech_asr | 2023-10-24T13:39:34.000Z | [
"region:us"
] | cmu-mlsp | null | null | 0 | 42 | 2023-10-24T13:29:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: audio_codes
sequence:
sequence: int64
splits:
- name: train
num_bytes: 17829358082.086
num_examples: 28539
- name: validation
num_bytes: 955281891.125
num_examples: 2703
- name: test
num_bytes: 958024726.5
num_examples: 2620
download_size: 18905275151
dataset_size: 19742664699.711
---
# Dataset Card for "encodec_24khz-opt-125m-pretrained-ft-librispeech_asr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 994 | [
…truncated embedding vector… |
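In the encodec_24khz card above, `audio_codes` is declared as a sequence of integer sequences — in EnCodec terms, one row per codebook across time frames. A hedged sketch (toy values, not real codes, and the `[n_codebooks][n_frames]` layout is an assumption) of interleaving such a code matrix into a single token stream, one common way to feed neural-codec codes to a language model:

```python
from typing import List

def interleave_codes(codes: List[List[int]]) -> List[int]:
    """Interleave a [n_codebooks][n_frames] code matrix frame by frame:
    frame 0 of every codebook, then frame 1, and so on."""
    n_frames = len(codes[0])
    assert all(len(row) == n_frames for row in codes), "ragged code matrix"
    return [codes[q][t] for t in range(n_frames) for q in range(len(codes))]

# Toy 2-codebook, 3-frame example (real EnCodec uses more codebooks):
print(interleave_codes([[1, 2, 3], [4, 5, 6]]))  # [1, 4, 2, 5, 3, 6]
```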
fiveflow/keyword_use_unuse | 2023-10-25T13:39:44.000Z | [
"region:us"
] | fiveflow | null | null | 0 | 42 | 2023-10-25T09:46:43 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 632930
num_examples: 948
download_size: 177602
dataset_size: 632930
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "keyword_use_unuse"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 441 | [
…truncated embedding vector… |
Narya-ai/relevancy-summary-dataset | 2023-11-03T01:06:30.000Z | [
"region:us"
] | Narya-ai | null | null | 0 | 42 | 2023-10-25T14:27:12 | ---
dataset_info:
features:
- name: full_query
dtype: string
- name: summary
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1774391
num_examples: 1563
download_size: 842387
dataset_size: 1774391
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "relevancy-summary-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 528 | [
…truncated embedding vector… |