author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingnft | null | null | null | false | 1 | false | huggingnft/trippytoadznft | 2022-04-16T17:59:07.000Z | null | false | 9ccb67fe13acc0f05adbaf8883ba978d4673f857 | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/trippytoadznft",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/trippytoadznft/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/trippytoadznft
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/trippytoadznft).
Model is available [here](https://huggingface.co/huggingnft/trippytoadznft).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/trippytoadznft")
```
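Once loaded, individual examples can be accessed directly. A minimal sketch (the `train` split name is an assumption; the field names follow the Data Fields section below):
```python
from datasets import load_dataset

dataset = load_dataset("huggingnft/trippytoadznft")

# Inspect one example: `image` is decoded to a PIL image, the other fields are plain values.
example = dataset["train"][0]
example["image"].save("sample_nft.png")
print(example["id"], example["image_original_url"])
```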
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
  author={Aleksey Korshuk},
  year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 1 | false | huggingnft/boredapeyachtclub | 2022-04-16T17:59:08.000Z | null | false | 2d572e61e1204fe8374ca7768511f0a6b57639ac | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/boredapeyachtclub",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/boredapeyachtclub/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/boredapeyachtclub
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/boredapeyachtclub).
Model is available [here](https://huggingface.co/huggingnft/boredapeyachtclub).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/boredapeyachtclub")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
  author={Aleksey Korshuk},
  year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
iluvvatar | null | null | null | false | 1 | false | iluvvatar/RuREBus | 2022-10-23T05:39:57.000Z | null | false | 95963eef1d1bb95abde1032961459dbab22fee58 | [] | [
"language:ru",
"multilinguality:monolingual",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/iluvvatar/RuREBus/resolve/main/README.md | ---
language:
- ru
multilinguality:
- monolingual
pretty_name: RuREBus
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
---
# RuREBus dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
RuREBus dataset (https://github.com/dialogue-evaluation/RuREBus) is
a Russian dataset for named entity recognition and relation extraction.
## Dataset Structure
There are two subsets of the dataset.
Using
`load_dataset('MalakhovIlya/RuREBus')`
you can download the annotated data (a DatasetDict) for the named entity recognition and
relation extraction tasks.
This subset consists of two splits: "train" and "test".
Using
`load_dataset('MalakhovIlya/NEREL', 'raw_txt')['raw_txt']`
you can download a large corpus (~3 GB) of raw texts (a Dataset) from the same subject
area, but without any annotations.
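Putting the two calls above together (repository ids exactly as given above; a minimal sketch):
```python
from datasets import load_dataset

# Annotated data for NER and relation extraction: a DatasetDict with "train" and "test" splits.
annotated = load_dataset('MalakhovIlya/RuREBus')
print(annotated)

# Large (~3 GB) unannotated raw-text corpus from the same subject area.
raw_texts = load_dataset('MalakhovIlya/NEREL', 'raw_txt')['raw_txt']
print(raw_texts)
```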
"entities" are used in named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is the entity id,
`<type>` is one of the entity types,
`<start>` is the position of the first character of the entity in the text,
`<stop>` is the position of the last character in the text plus one (an exclusive end index).
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
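A minimal sketch of parsing these annotation strings (the helper functions and the example strings are illustrative, not part of the dataset itself):
```python
def parse_entity(line):
    # "<id>\t<type> <start> <stop>\t<text>"
    ent_id, span, text = line.split("\t")
    ent_type, start, stop = span.split(" ")
    return {"id": ent_id, "type": ent_type, "start": int(start), "stop": int(stop), "text": text}

def parse_relation(line):
    # "<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"
    rel_id, rest = line.split("\t")
    rel_type, arg1, arg2 = rest.split(" ")
    return {"id": rel_id, "type": rel_type, "arg1": arg1.split(":")[1], "arg2": arg2.split(":")[1]}

print(parse_entity("T1\tTYPE 0 6\tпример"))        # illustrative entity string
print(parse_relation("R1\tTYPE Arg1:T1 Arg2:T2"))  # illustrative relation string
```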
## Citation Information
```
@inproceedings{rurebus,
  Address = {Moscow, Russia},
  Author = {Ivanin, Vitaly and Artemova, Ekaterina and Batura, Tatiana and Ivanov, Vladimir and Sarkisyan, Veronika and Tutubalina, Elena and Smurov, Ivan},
  Title = {RuREBus-2020 Shared Task: Russian Relation Extraction for Business},
  Booktitle = {Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialog” [Komp’iuternaia Lingvistika i Intellektual’nye Tehnologii: Trudy Mezhdunarodnoj Konferentsii “Dialog”]},
  Year = {2020}
}
```
## Contacts
Malakhov Ilya
Telegram - https://t.me/noname_4710
|
lewtun | null | null | null | false | 1 | false | lewtun/top_quark_tagging_old | 2022-04-10T16:24:28.000Z | null | false | 5804347ff724db187d3aa0260f2e23e4af5a111c | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/lewtun/top_quark_tagging_old/resolve/main/README.md | ---
license: cc-by-4.0
---
|
nsusemiehl | null | null | null | false | 1 | false | nsusemiehl/SciERC | 2022-04-10T16:56:55.000Z | null | false | 0c4d3efa8324ce171b8e8393b713786f64c63612 | [] | [] | https://huggingface.co/datasets/nsusemiehl/SciERC/resolve/main/README.md | SCIERC (Luan et al., 2018) via "Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks" (Gururangan et al., 2020), re-uploaded because of an error encountered when trying to load zj88zj/SCIERC with the huggingface/datasets library. |
arjundd | null | null | null | false | 1 | false | arjundd/skm-tea-mini | 2022-05-02T20:01:34.000Z | null | false | 6cfe8e5afe107823c07b64d48e333b9b85ae332b | [] | [
"arxiv:2203.06823",
"language:en",
"license:other",
"tags:mri",
"tags:quantitative mri",
"tags:reconstruction",
"tags:segmentation",
"tags:detection"
] | https://huggingface.co/datasets/arjundd/skm-tea-mini/resolve/main/README.md | ---
language: en
license: other
tags:
- mri
- quantitative mri
- reconstruction
- segmentation
- detection
---
# SKM-TEA Sample Data
This dataset consists of a subset of scans from the [SKM-TEA dataset](https://arxiv.org/abs/2203.06823). It can be used to build tutorials / demos with the SKM-TEA dataset.
To access to the full dataset, please follow instructions on [Github](https://github.com/StanfordMIMI/skm-tea/blob/main/DATASET.md).
**NOTE**: This dataset subset *should not* be used for reporting/publishing metrics. All metrics should be computed on the full SKM-TEA test split.
## Details
This mini dataset (~30GB) consists of 2 training scans, 1 validation scan, and 1 test scan from the SKM-TEA dataset. HDF5 files for the Raw Data Track are [lzf-compressed](http://www.h5py.org/lzf/) to reduce size while maximizing speed for decompression.
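Because the Raw Data Track files are standard HDF5 with LZF compression, they can be opened directly with `h5py`, which bundles the LZF filter; a minimal sketch (the file name below is a placeholder):
```python
import h5py

# Open one Raw Data Track file and print the name of every group/dataset it contains.
# h5py decompresses LZF-compressed datasets transparently on read.
with h5py.File("example_scan.h5", "r") as f:
    f.visit(print)
```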
## License
By using this dataset, you agree to the [Stanford University Dataset Research Use Agreement](https://stanfordaimi.azurewebsites.net/datasets/4aaeafb9-c6e6-4e3c-9188-3aaaf0e0a9e7).
## Reference
If you use this dataset, please reference the SKM-TEA paper:
```
@inproceedings{
desai2021skmtea,
title={{SKM}-{TEA}: A Dataset for Accelerated {MRI} Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation},
author={Arjun D Desai and Andrew M Schmidt and Elka B Rubin and Christopher Michael Sandino and Marianne Susan Black and Valentina Mazzoli and Kathryn J Stevens and Robert Boutin and Christopher Re and Garry E Gold and Brian Hargreaves and Akshay Chaudhari},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=YDMFgD_qJuA}
}
```
|
mattgmcadams | null | null | null | false | 1 | false | mattgmcadams/AirDrums | 2022-04-11T00:40:23.000Z | null | false | 456a91903148a8a02f7903b4941ef21ef6f7366f | [] | [
"language:en",
"tags:sensor",
"tags:location",
"datasets:2d_images",
"datasets:3d_images",
"datasets:2d_imu",
"datasets:3d_imu"
] | https://huggingface.co/datasets/mattgmcadams/AirDrums/resolve/main/README.md | # AirDrums Data
This dataset contains all data needed for training.
`2d_images` contains raw, unsegmented image data for the 2-dimensional dataset. Filenames correspond to timestamps.
`3d_images` contains raw, unsegmented image data (paired) for the 3-dimensional dataset. Filenames correspond to timestamps and camera angles.
Images from both of the previous sets are to be segmented and converted to a coordinate and a direction.
`2d_imu` contains IMU data for training in 2-dimensional space (xy) with the segmented images from above.
`3d_imu` contains IMU data for training in 3-dimensional space (xyz) with the segmented images from above and the front (xy and yz planes).
---
language:
- en
tags:
- sensor
- location
datasets:
- 2d_images
- 3d_images
- 2d_imu
- 3d_imu
---
|
surdan | null | null | null | false | 2 | false | surdan/nerel_short | 2022-10-25T10:06:49.000Z | null | false | 4fcdce42bb4668907d572c4ae6ac03307847a7ff | [] | [
"language:ru",
"multilinguality:monolingual",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/surdan/nerel_short/resolve/main/README.md | ---
language: ru
multilinguality: monolingual
task_ids:
- named-entity-recognition
---
### About DataSet
The dataset is based on the NEREL corpus.
For more information about the original data, please visit this [source](https://github.com/dialogue-evaluation/RuNNE).
An example of preparing the original data is shown in `Prepare_original_data.ipynb`.
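A minimal loading sketch with the `datasets` library (the repository id is taken from this page; no particular split names are assumed):
```python
from datasets import load_dataset

# Printing the DatasetDict shows the available splits and features.
dataset = load_dataset("surdan/nerel_short")
print(dataset)
```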
### Additional info
The dataset contains 29 entity types; each label can mark either the beginning of an entity ("B-") or an inner part of one ("I-").
Frequency of each label:
- I-AGE: 284
- B-AGE: 247
- B-AWARD: 285
- I-AWARD: 466
- B-CITY: 1080
- I-CITY: 39
- B-COUNTRY: 2378
- I-COUNTRY: 128
- B-CRIME: 214
- I-CRIME: 372
- B-DATE: 2701
- I-DATE: 5437
- B-DISEASE: 136
- I-DISEASE: 80
- B-DISTRICT: 98
- I-DISTRICT: 73
- B-EVENT: 3369
- I-EVENT: 2524
- B-FACILITY: 376
- I-FACILITY: 510
- B-FAMILY: 27
- I-FAMILY: 22
- B-IDEOLOGY: 271
- I-IDEOLOGY: 20
- B-LANGUAGE: 32
- I-LAW: 1196
- B-LAW: 297
- B-LOCATION: 242
- I-LOCATION: 139
- B-MONEY: 147
- I-MONEY: 361
- B-NATIONALITY: 437
- I-NATIONALITY: 41
- B-NUMBER: 1079
- I-NUMBER: 328
- B-ORDINAL: 485
- I-ORDINAL: 6
- B-ORGANIZATION: 3339
- I-ORGANIZATION: 3354
- B-PENALTY: 73
- I-PENALTY: 104
- B-PERCENT: 51
- I-PERCENT: 37
- B-PERSON: 5148
- I-PERSON: 3635
- I-PRODUCT: 48
- B-PRODUCT: 197
- B-PROFESSION: 3869
- I-PROFESSION: 2598
- B-RELIGION: 102
- I-RELIGION: 1
- B-STATE_OR_PROVINCE: 436
- I-STATE_OR_PROVINCE: 154
- B-TIME: 187
- I-TIME: 529
- B-WORK_OF_ART: 133
- I-WORK_OF_ART: 194
You can find the mapping from entity ids to labels in the `id_to_label_map.pickle` file:
```python
import pickle

# id_to_label_map.pickle stores the mapping from numeric label ids to label names.
with open('id_to_label_map.pickle', 'rb') as f:
    mapper = pickle.load(f)
``` |
enimai | null | null | null | false | 1 | false | enimai/MuST-C-de | 2022-04-11T08:25:26.000Z | null | false | 527ab728c4a1ffca313d6423f9d837577f477a95 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/enimai/MuST-C-de/resolve/main/README.md | ---
license: afl-3.0
---
|
huggingface | null | null | null | false | 1,306 | false | huggingface/semantic-segmentation-test-sample | 2022-04-11T09:15:24.000Z | null | false | 820e1da2eaf57add263d470621bc2a3f43a021e7 | [] | [] | https://huggingface.co/datasets/huggingface/semantic-segmentation-test-sample/resolve/main/README.md | This dataset contains 10 examples of the [segments/sidewalk-semantic](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset (i.e. 10 images with corresponding ground-truth segmentation maps). |
westphal-jan | null | null | null | false | 1 | false | westphal-jan/mnli_matched | 2022-04-16T12:02:51.000Z | null | false | 02f598f31161ab47a167d725b0de3dc3c0efdde8 | [] | [
"source_datasets:multi_nli",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring"
] | https://huggingface.co/datasets/westphal-jan/mnli_matched/resolve/main/README.md | ---
source_datasets:
- multi_nli
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---
## Dataset Description
This dataset provides easier access to the original [MNLI dataset](https://huggingface.co/datasets/multi_nli).
We randomly chose 10% of the original `validation_matched` split and use it as the validation split.
The remaining 90% are used for the test split.
The train split remains unchanged. |
csebuetnlp | null | @misc{bhattacharjee2021banglabert,
title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
year={2021},
eprint={2101.00204},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | SQuAD-bn is derived from the SQuAD-2.0 and TyDI-QA datasets. | false | 30 | false | csebuetnlp/squad_bn | 2022-08-21T13:17:43.000Z | null | false | 3b2935a74731f120004bdcbc3f9fd73f7d854c96 | [] | [
"arxiv:2101.00204",
"arxiv:2007.01852",
"arxiv:1606.05250",
"arxiv:2003.05002",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"task_categories:question-answering",
"task_ids:open-domain... | https://huggingface.co/datasets/csebuetnlp/squad_bn/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- found
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
language:
- bn
license:
- cc-by-nc-sa-4.0
---
# Dataset Card for `squad_bn`
## Table of Contents
- [Dataset Card for `squad_bn`](#dataset-card-for-squad_bn)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert)
- **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204)
- **Point of Contact:** [Tahmid Hasan](mailto:tahmidhasan@cse.buet.ac.bd)
### Dataset Summary
This is a Question Answering (QA) dataset for Bengali, curated from the [SQuAD 2.0](https://arxiv.org/abs/1606.05250) and [TyDI-QA](https://arxiv.org/abs/2003.05002) datasets using the state-of-the-art English to Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/)**.
### Supported Tasks and Leaderboards
[More information needed](https://github.com/csebuetnlp/banglabert)
### Languages
* `Bengali`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/squad_bn")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
"title": "শেখ মুজিবুর রহমান",
"paragraphs": [
{
"qas": [
{
"answers": [
{
"answer_start": 19,
"text": "১৭ মার্চ ১৯২০"
}
],
"id": "bengali--981248442377505718-0-2649",
"question": "শেখ মুজিবুর রহমান কবে জন্মগ্রহণ করেন ?"
}
],
"context": "শেখ মুজিবুর রহমান (১৭ মার্চ ১৯২০ - ১৫ আগস্ট ১৯৭৫) বাংলাদেশের প্রথম রাষ্ট্রপতি ও ভারতীয় উপমহাদেশের একজন অন্যতম প্রভাবশালী রাজনৈতিক ব্যক্তিত্ব যিনি বাঙালীর অধিকার রক্ষায় ব্রিটিশ ভারত থেকে ভারত বিভাজন আন্দোলন এবং পরবর্তীতে পূর্ব পাকিস্তান থেকে বাংলাদেশ প্রতিষ্ঠার সংগ্রামে নেতৃত্ব প্রদান করেন। প্রাচীন বাঙ্গালি সভ্যতার আধুনিক স্থপতি হিসাবে শেখ মুজিবুর রহমানকে বাংলাদেশের জাতির জনক বা জাতির পিতা বলা হয়ে থাকে। তিনি মাওলানা আব্দুল হামিদ খান ভাসানী প্রতিষ্ঠিত আওয়ামী লীগের সভাপতি, বাংলাদেশের প্রথম রাষ্ট্রপতি এবং পরবর্তীতে এদেশের প্রধানমন্ত্রীর দায়িত্ব পালন করেন। জনসাধারণের কাছে তিনি শেখ মুজিব এবং শেখ সাহেব হিসাবে বেশি পরিচিত এবং তার উপাধি বঙ্গবন্ধু। তার কন্যা শেখ হাসিনা বাংলাদেশ আওয়ামী লীগের বর্তমান সভানেত্রী এবং বাংলাদেশের বর্তমান প্রধানমন্ত্রী।"
}
]
}
```
### Data Fields
The data fields are as follows:
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| split |count |
|----------|--------|
|`train`| 127771 |
|`validation`| 2502 |
|`test`| 2504 |
## Dataset Creation
For the training set, we translated the complete [SQuAD 2.0](https://aclanthology.org/N18-1101/) dataset using the English to Bangla translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Due to the possibility of errors being introduced during automatic translation, we used the [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) of the translations and the original sentences to compute their similarity. A datapoint was accepted only if all of its constituent sentences had a similarity score over 0.7.
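For illustration, the similarity filter described above can be sketched with the `sentence-transformers` LaBSE implementation (this is an assumption about tooling, not the authors' exact code; only the 0.7 threshold is taken from the description above):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

def keep_datapoint(source_sentences, translated_sentences, threshold=0.7):
    # Embed source and translated sentences in the shared LaBSE space and
    # keep the datapoint only if every aligned sentence pair is similar enough.
    src = model.encode(source_sentences, convert_to_tensor=True)
    tgt = model.encode(translated_sentences, convert_to_tensor=True)
    sims = util.cos_sim(src, tgt).diagonal()
    return bool((sims > threshold).all())
```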
Since the TyDI-QA Gold Passage task guarantees that the given context contains the answer and we want to pose our QA task analogous to SQuAD 2.0, we also consider examples from the Passage selection task that don't have an answer for the given question. We distribute the resultant examples from the TyDI-QA training and validation sets (which are publicly available) evenly to our test and validation sets.
### Curation Rationale
[More information needed](https://github.com/csebuetnlp/banglabert)
### Source Data
[SQuAD 2.0](https://arxiv.org/abs/1606.05250), [TyDi-QA](https://arxiv.org/abs/2003.05002)
#### Initial Data Collection and Normalization
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Who are the source language producers?
[More information needed](https://github.com/csebuetnlp/banglabert)
### Annotations
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Annotation process
[More information needed](https://github.com/csebuetnlp/banglabert)
#### Who are the annotators?
[More information needed](https://github.com/csebuetnlp/banglabert)
### Personal and Sensitive Information
[More information needed](https://github.com/csebuetnlp/banglabert)
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed](https://github.com/csebuetnlp/banglabert)
### Discussion of Biases
[More information needed](https://github.com/csebuetnlp/banglabert)
### Other Known Limitations
[More information needed](https://github.com/csebuetnlp/banglabert)
## Additional Information
### Dataset Curators
[More information needed](https://github.com/csebuetnlp/banglabert)
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@misc{bhattacharjee2021banglabert,
title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
year={2021},
eprint={2101.00204},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset. |
openclimatefix | null | null | null | false | 1 | false | openclimatefix/prepared-batches | 2022-04-13T11:31:18.000Z | null | false | 68b251a36a30a7a5e636ce0f55dcebb43bcd576f | [] | [
"license:mit"
] | https://huggingface.co/datasets/openclimatefix/prepared-batches/resolve/main/README.md | ---
license: mit
---
|
AntoineLB | null | null | null | false | 1 | false | AntoineLB/Frozen-lake-dataset | 2022-04-21T12:16:39.000Z | null | false | 2cf230d6428c8e3cb35710b9aa18858cc33084bc | [] | [] | https://huggingface.co/datasets/AntoineLB/Frozen-lake-dataset/resolve/main/README.md | # Dataset Card for [FrozenLake-v1] |
irenelizihui | null | null | null | false | 1 | false | irenelizihui/Surfer100 | 2022-04-11T23:06:56.000Z | null | false | 5aade2e78656abb0c321488d6d21b331f7cdd665 | [] | [
"license:wtfpl"
] | https://huggingface.co/datasets/irenelizihui/Surfer100/resolve/main/README.md | ---
license: wtfpl
---
|
raquiba | null | null | null | false | 41 | false | raquiba/Sarcasm_News_Headline | 2022-04-14T08:19:08.000Z | null | false | 643ceefc17441e56cff66f57c03b13615545d42b | [] | [] | https://huggingface.co/datasets/raquiba/Sarcasm_News_Headline/resolve/main/README.md | Past studies in Sarcasm Detection mostly make use of Twitter datasets collected using hashtag based supervision but such datasets are noisy in terms of labels and language. Furthermore, many tweets are replies to other tweets and detecting sarcasm in these requires the availability of contextual tweets.
To overcome the limitations related to noise in Twitter datasets, this Headlines dataset for Sarcasm Detection was collected from two news websites. TheOnion aims at producing sarcastic versions of current events, and we collected all the headlines from its News in Brief and News in Photos categories (which are sarcastic). We collected real (and non-sarcastic) news headlines from HuffPost.
This new dataset has the following advantages over the existing Twitter datasets:
Since news headlines are written by professionals in a formal manner, there are no spelling mistakes or informal usage. This reduces sparsity and also increases the chance of finding pre-trained embeddings.
Furthermore, since the sole purpose of TheOnion is to publish sarcastic news, we get high-quality labels with much less noise as compared to Twitter datasets.
Unlike tweets which are replies to other tweets, the news headlines we obtained are self-contained. This would help us in teasing apart the real sarcastic elements. |
taln-ls2n | null | @inproceedings{hulth2003improved,
title={Improved automatic keyword extraction given more linguistic knowledge},
author={Hulth, Anette},
booktitle={Proceedings of the 2003 conference on Empirical methods in natural language processing},
pages={216--223},
year={2003}
} | Inspec benchmark dataset for keyphrase extraction an generation. | false | 114 | false | taln-ls2n/inspec | 2022-07-21T14:14:59.000Z | null | false | dd723264101153ba5ddf3451e65446346000f496 | [] | [
"annotations_creators:unknown",
"language_creators:unknown",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"task_categories:text-generation",
"task_ids:keyphrase-generation",
"task_ids:keyphrase-extraction",
"size_categories:1K<n<10K"
] | https://huggingface.co/datasets/taln-ls2n/inspec/resolve/main/README.md | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- unknown
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 1K<n<10K
pretty_name: Inspec
---
# Inspec Benchmark Dataset for Keyphrase Generation
## About
Inspec is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 2,000 abstracts of scientific papers collected from the [Inspec database](https://www.theiet.org/resources/inspec/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the inspec dataset can be found in the original paper [(Hulth, 2003)][hulth-2003].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
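A simplified sketch of that preprocessing (the hyphen rule and the exact matching logic live in `prmu.py`; this only illustrates the idea):
```python
import spacy
from nltk.stem.porter import PorterStemmer

nlp = spacy.load("en_core_web_sm")
stemmer = PorterStemmer()

def stem_tokens(text):
    # Tokenize with spaCy, then Porter-stem every token.
    return [stemmer.stem(token.text) for token in nlp(text)]

def is_present(keyphrase, source_text):
    # A reference keyphrase counts as present if its stemmed form occurs in the stemmed source text.
    return " ".join(stem_tokens(keyphrase)) in " ".join(stem_tokens(source_text))
```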
## Content and statistics
The dataset is divided into the following three splits:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 1,000 | 141.7 | 9.79 | 78.00 | 9.85 | 6.22 | 5.93 |
| Validation | 500 | 132.2 | 9.15 | 77.96 | 9.82 | 6.75 | 5.47 |
| Test | 500 | 134.8 | 9.83 | 78.70 | 9.92 | 6.48 | 4.91 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
## References
- (Hulth, 2003) Anette Hulth. 2003.
[Improved automatic keyword extraction given more linguistic knowledge](https://aclanthology.org/W03-1028).
In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216-223.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[hulth-2003]: https://aclanthology.org/W03-1028/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ |
yarongef | null | null | null | false | 1 | false | yarongef/human_proteome_singlets | 2022-09-21T08:45:02.000Z | null | false | 29b145026c2ec661f3ad581e42f73c68be4f4a13 | [] | [
"license:mit"
] | https://huggingface.co/datasets/yarongef/human_proteome_singlets/resolve/main/README.md | ---
license: mit
---
# Dataset Description
Out of **20,577** human proteins (from the [UniProt human proteome](https://www.uniprot.org/proteomes/UP000005640)), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of **12,703** proteins. The uShuffle algorithm ([python package](https://github.com/guma44/ushuffle)) was then used to shuffle these protein sequences while maintaining their singlet distribution.
Afterwards, h-CD-HIT algorithm ([web server](http://weizhong-lab.ucsd.edu/cdhit-web-server/cgi-bin/index.cgi)) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of **11,698** sequences.
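As an illustration, shuffling a sequence while preserving its singlet (k = 1) distribution with the linked package could look roughly like the sketch below; the import and function signature are assumptions, so check the package documentation before relying on them:
```python
from ushuffle import shuffle  # assumed API; see https://github.com/guma44/ushuffle

sequence = b"MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequence, not taken from the dataset
shuffled = shuffle(sequence, 1)  # k = 1 preserves the singlet (amino-acid) distribution
print(shuffled)
```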
# **Citation**
If you use this dataset, please cite our paper:
```
@article {
author = {Geffen, Yaron and Ofran, Yanay and Unger, Ron},
title = {DistilProtBert: A distilled protein language model used to distinguish between real proteins and their randomly shuffled counterparts},
year = {2022},
doi = {10.1093/bioinformatics/btac474},
URL = {https://doi.org/10.1093/bioinformatics/btac474},
journal = {Bioinformatics}
}
``` |
Matthijs | null | @article{OpenImages2,
title={OpenImages: A public dataset for large-scale multi-label and multi-class image classification.},
author={Krasin, Ivan and Duerig, Tom and Alldrin, Neil and Ferrari, Vittorio and Abu-El-Haija, Sami and Kuznetsova, Alina and Rom, Hassan and Uijlings, Jasper and Popov, Stefan and Kamali, Shahab and Malloci, Matteo and Pont-Tuset, Jordi and Veit, Andreas and Belongie, Serge and Gomes, Victor and Gupta, Abhinav and Sun, Chen and Chechik, Gal and Cai, David and Feng, Zheyun and Narayanan, Dhyanesh and Murphy, Kevin},
journal={Dataset available from https://storage.googleapis.com/openimages/web/index.html},
year={2017}
} | null | false | 25 | false | Matthijs/snacks | 2022-04-12T14:26:59.000Z | null | false | 68b2211a56c1f2a2276d10c2d0f31a416ed9a2c9 | [] | [
"task_categories:image-classification",
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Matthijs/snacks/resolve/main/README.md | ---
pretty_name: Snacks
task_categories:
- image-classification
- computer-vision
license: cc-by-4.0
---
# Dataset Card for Snacks
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book [Machine Learning by Tutorials](https://www.raywenderlich.com/books/machine-learning-by-tutorials/v2.0).
The images were taken from the [Google Open Images dataset](https://storage.googleapis.com/openimages/web/index.html), release 2017_11.
## Dataset Structure
Number of images in the train/validation/test splits:
```nohighlight
train 4838
val 955
test 952
total 6745
```
Total images in each category:
```nohighlight
apple 350
banana 350
cake 349
candy 349
carrot 349
cookie 349
doughnut 350
grape 350
hot dog 350
ice cream 350
juice 350
muffin 348
orange 349
pineapple 340
popcorn 260
pretzel 204
salad 350
strawberry 348
waffle 350
watermelon 350
```
To save space in the download, the images were resized so that their smallest side is 256 pixels. All EXIF information was removed.
### Data Splits
Train, Test, Validation
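The dataset can be loaded with the `datasets` library; a minimal sketch (repository id as shown on this page, field names assumed to be the usual image/label pair):
```python
from datasets import load_dataset

dataset = load_dataset("Matthijs/snacks")
print(dataset)              # shows the train/validation/test splits
print(dataset["train"][0])  # one example (field names may differ)
```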
## Licensing Information
Just like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license.
The images are listed as having a [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) license.
The annotations are licensed by Google Inc. under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
The **credits.csv** file contains the original URL, author information and license for each image.
|
mwong | null | null | null | false | 7 | false | mwong/fever-evidence-related | 2022-10-25T10:06:51.000Z | fever | false | 14aba009b5fcd97b1a9ee6f3e3b0da0e308cf7cb | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|fever",
"task_categories:text-classification",
"task_ids:fact-checking"
] | https://huggingface.co/datasets/mwong/fever-evidence-related/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the FEVER dataset (https://fever.ai), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and a piece of evidence, predict whether the evidence is related to the claim. |
yuanjie | null | null | null | false | 1 | false | yuanjie/demo | 2022-04-12T10:00:12.000Z | null | false | 0f4beb45dfe3526ba6e132375982ca27cee6eb47 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/yuanjie/demo/resolve/main/README.md | ---
license: apache-2.0
---
|
null | null | @misc{cobbe2021training,
title={Training Verifiers to Solve Math Word Problems},
author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
year={2021},
eprint={2110.14168},
archivePrefix={arXiv},
primaryClass={cs.LG}
} | GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality
linguistically diverse grade school math word problems. The
dataset was created to support the task of question answering
on basic mathematical problems that require multi-step reasoning. | false | 4,112 | false | gsm8k | 2022-11-03T16:32:15.000Z | gsm8k | false | 4da2377df2207498acb46e737c056a62610919b9 | [] | [
"arxiv:2110.14168",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text2text-generation",
"tags:math-word-problems"
] | https://huggingface.co/datasets/gsm8k/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: gsm8k
pretty_name: Grade School Math 8K
tags:
- math-word-problems
dataset_info:
- config_name: main
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 713732
num_examples: 1319
- name: train
num_bytes: 3963202
num_examples: 7473
download_size: 4915944
dataset_size: 4676934
- config_name: socratic
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 936859
num_examples: 1319
- name: train
num_bytes: 5198108
num_examples: 7473
download_size: 6374717
dataset_size: 6134967
---
# Dataset Card for GSM8K
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://openai.com/blog/grade-school-math/
- **Repository:** https://github.com/openai/grade-school-math
- **Paper:** https://arxiv.org/abs/2110.14168
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
For the `main` configuration, each instance contains a string for the grade-school level math question and a string for the corresponding answer with multiple steps of reasoning and calculator annotations (explained [here](https://github.com/openai/grade-school-math#calculation-annotations)).
```python
{
'question': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?',
'answer': 'Natalia sold 48/2 = <<48/2=24>>24 clips in May.\nNatalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.\n#### 72',
}
```
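Since every solution string ends with the final numeric result after a `#### ` marker, the gold answer can be recovered with a small helper; a minimal sketch (not part of the dataset loader):
```python
def extract_final_answer(answer):
    # The gold result is the text that follows the trailing "#### " marker.
    return answer.split("#### ")[-1].strip()

answer = "Natalia sold 48/2 = <<48/2=24>>24 clips in May.\nNatalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.\n#### 72"
print(extract_final_answer(answer))  # -> 72
```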
For the `socratic` configuration, each instance contains a string for a grade-school level math question, a string for the corresponding answer with multiple steps of reasoning, calculator annotations (explained [here](https://github.com/openai/grade-school-math#calculation-annotations)), and *Socratic sub-questions*.
```python
{
'question': 'Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?',
'answer': 'How many clips did Natalia sell in May? ** Natalia sold 48/2 = <<48/2=24>>24 clips in May.\nHow many clips did Natalia sell altogether in April and May? ** Natalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.\n#### 72',
}
```
### Data Fields
The data fields are the same among `main` and `socratic` configurations and their individual splits.
- question: The question string to a grade school math problem.
- answer: The full solution string to the `question`. It contains multiple steps of reasoning with calculator annotations and the final numeric solution.
### Data Splits
| name |train|test|
|--------|----:|---------:|
|main | 7473| 1319|
|socratic| 7473| 1319|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We initially collected a starting set of a thousand problems and natural language solutions by hiring freelance contractors on Upwork (upwork.com). We then worked with Surge AI (surgehq.ai), an NLP data labeling platform, to scale up our data collection. After collecting the full dataset, we asked workers to re-solve all problems, with no workers re-solving problems they originally wrote. We checked whether their final answers agreed with the original solutions, and any problems that produced disagreements were either repaired or discarded. We then performed another round of agreement checks on a smaller subset of problems, finding that 1.7% of problems still produce disagreements among contractors. We estimate this to be the fraction of problems that contain breaking errors or ambiguities. It is possible that a larger percentage of problems contain subtle errors.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
Surge AI (surgehq.ai)
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The GSM8K dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```bibtex
@article{cobbe2021gsm8k,
title={Training Verifiers to Solve Math Word Problems},
author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
journal={arXiv preprint arXiv:2110.14168},
year={2021}
}
```
### Contributions
Thanks to [@jon-tow](https://github.com/jon-tow) for adding this dataset. |
HFFErica | null | null | null | false | 1 | false | HFFErica/steamreviews | 2022-05-20T14:53:06.000Z | null | false | e66185fa35291e39de1704837a5b14dcba1388ee | [] | [
"license:other"
] | https://huggingface.co/datasets/HFFErica/steamreviews/resolve/main/README.md | ---
license: other
---
|
null | null | @inproceedings{NIPS2011_5dd9db5e,
author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
booktitle = {Advances in Neural Information Processing Systems},
editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
pages = {},
publisher = {Curran Associates, Inc.},
title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
volume = {24},
year = {2011}
} | The SBU Captioned Photo Dataset is a collection of over 1 million images with associated text descriptions extracted from Flicker. | false | 58 | false | sbu_captions | 2022-11-03T15:51:00.000Z | sbu-captions-dataset | false | b485c0c12063b56119459da5e071aa5545c72a1e | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:image-to-text",
"task_ids:image-captioning"
] | https://huggingface.co/datasets/sbu_captions/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: sbu-captions-dataset
pretty_name: SBU Captioned Photo Dataset
dataset_info:
features:
- name: image_url
dtype: string
- name: user_id
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 143795586
num_examples: 1000000
download_size: 49787719
dataset_size: 143795586
---
# Dataset Card for SBU Captioned Photo Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cs.rice.edu/~vo9/sbucaptions/
- **Repository:**
- **Paper:** [Im2Text: Describing Images Using 1 Million Captioned Photographs](https://papers.nips.cc/paper/2011/hash/5dd9db5e033da9c6fb5ba83c7a7ebea9-Abstract.html)
- **Leaderboard:**
- **Point of Contact:** [Vicente Ordóñez Román](mailto:vicenteor@rice.edu)
### Dataset Summary
SBU Captioned Photo Dataset is a collection of associated captions and images from Flickr.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image

def fetch_images(batch, num_threads, timeout=None, retries=0):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch
num_threads = 20
dset = load_dataset("sbu_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-to-text`: This dataset can be used to train a model for Image Captioning where the goal is to predict a caption given the image.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
Each instance in SBU Captioned Photo Dataset represents a single image with a caption and a user_id:
```
{
'img_url': 'http://static.flickr.com/2723/4385058960_b0f291553e.jpg',
'user_id': '47889917@N08',
'caption': 'A wooden chair in the living room'
}
```
### Data Fields
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
- `user_id`: Author of caption.
### Data Splits
All the data is contained in training split. The training set has 1M instances.
## Dataset Creation
### Curation Rationale
From the paper:
> One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually
relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results.
### Source Data
The source images come from Flickr.
#### Initial Data Collection and Normalization
From the paper:
> One key contribution of our paper is a novel web-scale database of photographs with associated descriptive text. To enable effective captioning of novel images, this database must be good in two ways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The captions associated with the database photographs must be visually relevant so that transferring captions between pictures is useful. To achieve the first requirement we query Flickr using a huge number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very large, but noisy initial set of photographs with associated text.
#### Who are the source language producers?
The Flickr users.
### Annotations
#### Annotation process
Text descriptions associated with the images are inherited as annotations/captions.
#### Who are the annotators?
The Flickr users.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Vicente Ordonez, Girish Kulkarni and Tamara L. Berg.
### Licensing Information
Not specified.
### Citation Information
```bibtex
@inproceedings{NIPS2011_5dd9db5e,
author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
booktitle = {Advances in Neural Information Processing Systems},
editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
pages = {},
publisher = {Curran Associates, Inc.},
title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
volume = {24},
year = {2011}
}
```
### Contributions
Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset. |
yarongef | null | null | null | false | 1 | false | yarongef/human_proteome_doublets | 2022-09-21T08:43:43.000Z | null | false | 4e2d0dd3b956ed9e3c623e087a9e800afee79572 | [] | [
"license:mit"
] | https://huggingface.co/datasets/yarongef/human_proteome_doublets/resolve/main/README.md | ---
license: mit
---
# Dataset Description
Out of **20,577** human proteins (from [UniProt human proteome](https://www.uniprot.org/proteomes/UP000005640)), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of **12,703** proteins. The uShuffle algorithm ([python package](https://github.com/guma44/ushuffle)) was then used to shuffle these protein sequences while maintaining their doublet distribution. The very few sequences for which uShuffle failed to create a shuffled version were eliminated.
Afterwards, h-CD-HIT algorithm ([web server](http://weizhong-lab.ucsd.edu/cdhit-web-server/cgi-bin/index.cgi)) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of **11,658** sequences.
# Citation
If you use this dataset, please cite our paper:
```
@article {
author = {Geffen, Yaron and Ofran, Yanay and Unger, Ron},
title = {DistilProtBert: A distilled protein language model used to distinguish between real proteins and their randomly shuffled counterparts},
year = {2022},
doi = {10.1093/bioinformatics/btac474},
URL = {https://doi.org/10.1093/bioinformatics/btac474},
journal = {Bioinformatics}
}
``` |
yarongef | null | null | null | false | 1 | false | yarongef/human_proteome_triplets | 2022-09-21T08:44:27.000Z | null | false | 7a048b602b00c84360b440c8f13b198b67c14ae4 | [] | [
"license:mit"
] | https://huggingface.co/datasets/yarongef/human_proteome_triplets/resolve/main/README.md | ---
license: mit
---
# Dataset Description
Out of **20,577** human proteins (from [UniProt human proteome](https://www.uniprot.org/proteomes/UP000005640)), sequences shorter than 20 amino acids or longer than 512 amino acids were removed, resulting in a set of **12,703** proteins. The uShuffle algorithm ([python package](https://github.com/guma44/ushuffle)) was then used to shuffle these protein sequences while maintaining their triplet distribution. The sequences for which uShuffle failed to create a shuffled version were eliminated.
Afterwards, h-CD-HIT algorithm ([web server](http://weizhong-lab.ucsd.edu/cdhit-web-server/cgi-bin/index.cgi)) was used with three subsequent filter stages at pairwise identity cutoffs of 0.9, 0.5 and 0.1, resulting in a total of **3,688** sequences.
# Citation
If you use this dataset, please cite our paper:
```
@article {
author = {Geffen, Yaron and Ofran, Yanay and Unger, Ron},
title = {DistilProtBert: A distilled protein language model used to distinguish between real proteins and their randomly shuffled counterparts},
year = {2022},
doi = {10.1093/bioinformatics/btac474},
URL = {https://doi.org/10.1093/bioinformatics/btac474},
journal = {Bioinformatics}
}
``` |
mwong | null | null | null | false | 1 | false | mwong/climate-evidence-related | 2022-10-25T10:06:54.000Z | climate-fever | false | 4a4b251e2258a5d44e0b258c1ea7b026d4e7147e | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"task_categories:text-classification",
"task_ids:fact-checking"
] | https://huggingface.co/datasets/mwong/climate-evidence-related/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: climate-fever
pretty_name: climate-fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation.
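The snippet below is a minimal sketch for loading the data; the split name and column names are assumptions, so inspect the returned object first:
```python
from datasets import load_dataset

dataset = load_dataset("mwong/climate-evidence-related")
print(dataset)              # lists the available splits and columns
print(dataset["train"][0])  # assumes a "train" split exists
```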
The training objective is a text classification task: given a claim and an evidence sentence, predict whether the evidence is related to the claim. |
Pavithree | null | null | null | false | 1 | false | Pavithree/eli5 | 2022-04-23T08:38:44.000Z | null | false | 15e4d1933a321088880215257aa25b621a091335 | [] | [] | https://huggingface.co/datasets/Pavithree/eli5/resolve/main/README.md | This dataset is a subset of the original ELI5 dataset available on the Hugging Face Hub. |
enimai | null | null | null | false | 1 | false | enimai/MuST-C-and-WMT16-de-en | 2022-04-12T20:03:25.000Z | null | false | 785573d6c226c7c07a7dc67f3e8739c25dad3927 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/enimai/MuST-C-and-WMT16-de-en/resolve/main/README.md | ---
license: afl-3.0
---
|
Matthijs | null | null | null | false | 13 | false | Matthijs/snacks-detection | 2022-04-12T14:26:04.000Z | null | false | ca047290468c7f565f248c16139d2a096230112b | [] | [
"task_categories:object-detection",
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Matthijs/snacks-detection/resolve/main/README.md | ---
pretty_name: Snacks (Detection)
task_categories:
- object-detection
- computer-vision
license: cc-by-4.0
---
# Dataset Card for Snacks (Detection)
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book [Machine Learning by Tutorials](https://www.raywenderlich.com/books/machine-learning-by-tutorials/v2.0).
The images were taken from the [Google Open Images dataset](https://storage.googleapis.com/openimages/web/index.html), release 2017_11.
## Dataset Structure
Included in the **data** folder are three CSV files with bounding box annotations for the images in the dataset, although not all images have annotations and some images have multiple annotations.
The columns in the CSV files are:
- `image_id`: the filename of the image without the .jpg extension
- `x_min, x_max, y_min, y_max`: normalized bounding box coordinates, i.e. in the range [0, 1]
- `class_name`: the class that belongs to the bounding box
- `folder`: the class that belongs to the image as a whole, which is also the name of the folder that contains the image
The class names are:
```nohighlight
apple
banana
cake
candy
carrot
cookie
doughnut
grape
hot dog
ice cream
juice
muffin
orange
pineapple
popcorn
pretzel
salad
strawberry
waffle
watermelon
```
**Note:** The image files are not part of this repo but [can be found here](https://huggingface.co/datasets/Matthijs/snacks).
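As a rough sketch, an annotation row can be combined with its image like this (the CSV filename and image path below are placeholders, not the actual file names in this repository):
```python
import pandas as pd
from PIL import Image

# placeholder paths for illustration; substitute a real CSV from the data folder and a real image
annotations = pd.read_csv("data/annotations-train.csv")
row = annotations.iloc[0]

image = Image.open(f"train/{row.folder}/{row.image_id}.jpg")
width, height = image.size

# convert the normalized [0, 1] coordinates to pixel values
box = (row.x_min * width, row.y_min * height, row.x_max * width, row.y_max * height)
print(row.class_name, box)
```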
### Data Splits
Train, Test, Validation
## Licensing Information
Just like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license.
The images are listed as having a [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) license.
The annotations are licensed by Google Inc. under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
arakesh | null | null | null | false | 1 | false | arakesh/PennFudanPedestrian-1024x512 | 2022-04-12T16:14:33.000Z | null | false | d3d4c6fc780dc8d0de62aeef28a38ba2fbed3606 | [] | [] | https://huggingface.co/datasets/arakesh/PennFudanPedestrian-1024x512/resolve/main/README.md | | images | semantic maps | instance ids |
| --- | --- | --- |
| available | available | available |
```
dataset-size: 107Mb
resolution: 1024x1024
license: ...
sample-size:
./pix2pixHD_person_synthesis
├── test_img [10 entries]
├── test_inst [10 entries]
├── test_label [10 entries]
├── train_img [160 entries]
├── train_inst [160 entries]
└── train_label [160 entries]
``` |
arakesh | null | null | null | false | 1 | false | arakesh/deepglobe-2448x2448 | 2022-04-12T17:20:26.000Z | null | false | b46b232f33c35877e75081f948a89357ea4f1016 | [] | [] | https://huggingface.co/datasets/arakesh/deepglobe-2448x2448/resolve/main/README.md | Data source: http://deepglobe.org/
| images | semantic maps | instance ids |
| --- | --- | --- |
| available | available | n/a |
```
dataset-size: 2.0G
resolution: 2448x2448
license: ...
sample-size:
./pix2pixHD-deepglobe-synthesis
├── test_img [30 entries]
├── test_label [30 entries]
├── train_img [773 entries]
└── train_label [773 entries]
``` |
csteinmetz1 | null | null | null | false | 1 | false | csteinmetz1/test-dataset | 2022-04-12T16:47:54.000Z | null | false | 22c6b6002b2100e073b4146f20c22fa534d6e536 | [] | [
"license:cc"
] | https://huggingface.co/datasets/csteinmetz1/test-dataset/resolve/main/README.md | ---
license: cc
---
|
arakesh | null | null | null | false | 1 | false | arakesh/uavid-15-hq-mixedres | 2022-04-12T17:19:47.000Z | null | false | bdf451283a48e53f34fea37a8ad1c475175308cf | [] | [] | https://huggingface.co/datasets/arakesh/uavid-15-hq-mixedres/resolve/main/README.md | Data source: https://uavid.nl/
| images | semantic maps | instance ids |
| --- | --- | --- |
| available | available | n/a |
```
dataset-size: 6.1G
resolution: mixed (3840x2160, 4096x2060) - because drone cameras are different for different faces.
license: ...
sample-size:
+ train: 200
+ test: 70
``` |
openenvironments | null | null | null | false | 1 | false | openenvironments/blockgroupvoting | 2022-04-12T21:19:08.000Z | null | false | e5981976381334c87f5917b4743d749726a9e21b | [] | [
"license:mit"
] | https://huggingface.co/datasets/openenvironments/blockgroupvoting/resolve/main/README.md | ---
license: mit
---
## Problem and Opportunity
In the United States, voting is largely a private matter. A registered voter is given a randomized ballot form or machine to prevent linkage between their voting choices and their identity. This disconnect supports confidence in the election process, but it provides obstacles to an election's analysis. A common solution is to field exit polls, interviewing voters immediately after leaving their polling location. This method is rife with bias, however, and functionally limited in the direct demographic data it collects.
For the 2020 general election, though, most states published their election results for each voting location. These publications were additionally supported by the geographical areas assigned to each location, the voting precincts. As a result, geographic processing can now be applied to project precinct election results onto Census block groups. While precincts have few demographic traits directly, their geographies have characteristics that make them projectable onto U.S. Census geographies. Both state voting precincts and U.S. Census block groups:
* are exclusive, and do not overlap
* are adjacent, fully covering their corresponding state and potentially county
* have roughly the same size in area, population and voter presence
Analytically, a projection of local demographics does not allow conclusions about voters themselves. However, the dataset does allow statements related to the geographies that yield voting behavior. One could say, for example, that an area dominated by a particular voting pattern would have mean traits of age, race, income or household structure.
The dataset that results from this processing provides voting results allocated to Census block groups. The block group identifier can be joined to Census Decennial and American Community Survey demographic estimates.
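For example, a sketch of the intended join (all file and column names here are hypothetical and only illustrate the idea):
```python
import pandas as pd

# hypothetical file and column names, for illustration only
votes = pd.read_csv("blockgroup_voting.csv", dtype={"block_group_geoid": str})
acs = pd.read_csv("acs_blockgroup_estimates.csv", dtype={"block_group_geoid": str})

# join allocated voting results to Census demographic estimates by block group identifier
merged = votes.merge(acs, on="block_group_geoid", how="left")
print(merged.head())
```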
|
ghomasHudson | null | null | null | false | 1 | false | ghomasHudson/hotpotExtendedAno | 2022-04-13T11:01:17.000Z | null | false | 37d117aedb1c469ebf2adc217dae40ff31a97a23 | [] | [] | https://huggingface.co/datasets/ghomasHudson/hotpotExtendedAno/resolve/main/README.md | # hotpotQA-Extended (Annotated)
A version of [HotpotQA-Extended](https://huggingface.co/datasets/ghomasHudson/hotpotExtended) with extra annotations about what part of the input contains the answer. |
huggan | null | null | null | false | 1,706 | false | huggan/smithsonian_butterflies_subset | 2022-04-16T08:02:36.000Z | null | false | 3cdedf844922ab40393d46d4c7f81c596e1c6d45 | [] | [] | https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset/resolve/main/README.md | This is a subset of the "ceyda/smithsonian_butterflies" dataset with additional processing done to train the "ceyda/butterfly_gan" model.
The preprocessing includes:
- Adding "sim_score" to images with CLIP model using "pretty butterfly","one butterfly","butterfly with open wings","colorful butterfly"
- Removing butterflies with the same name(species)
- Limiting only to the top 1000 images
- Removing the background (doing another sim_scoring after bg removal did visually worse so didn't do it)
- Detecting contours
- Cropping to the bounding box of the contour with the largest area
- Converting back to RGB
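A rough sketch of the CLIP scoring step is shown below; the exact checkpoint, image column name and score aggregation used for this dataset are assumptions, not documented facts:
```python
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")         # checkpoint is an assumption
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["pretty butterfly", "one butterfly", "butterfly with open wings", "colorful butterfly"]
ds = load_dataset("ceyda/smithsonian_butterflies", split="train")         # split and "image" column assumed

inputs = processor(text=prompts, images=ds[0]["image"], return_tensors="pt", padding=True)
with torch.no_grad():
    logits_per_image = model(**inputs).logits_per_image                   # shape: (num_images, num_prompts)
sim_score = logits_per_image.mean().item()
print(sim_score)
```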
|
SetFit | null | null | null | false | 13 | false | SetFit/amazon_reviews_multi_en | 2022-04-13T19:06:11.000Z | null | false | ec73b665e4be0f567b69d39425355401cfe0d29b | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/SetFit/amazon_reviews_multi_en/resolve/main/README.md | ---
license: apache-2.0
---
|
enimai | null | null | null | false | 1 | false | enimai/MuST-C-it | 2022-04-14T04:57:08.000Z | null | false | 77b43264a1186dfd3ffdc2bd4018b41f0a6bb689 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/enimai/MuST-C-it/resolve/main/README.md | ---
license: afl-3.0
---
|
bullmount | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | false | 15 | false | bullmount/squad_it | 2022-04-14T16:06:54.000Z | null | false | 161fc9fce16d7e942e0ed14046c1e98956437061 | [] | [] | https://huggingface.co/datasets/bullmount/squad_it/resolve/main/README.md | [Needs More Information]
# Dataset Card for squad_it
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Converted dataset version for use with the Hugging Face `datasets` library.
Originally created by Croce et al. in 2018, SQuAD-it contains more than 60,000 question/answer pairs in Italian, derived from the original English SQuAD dataset and distributed in JSON format.
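A minimal loading sketch (the split names are not documented here, so inspect the returned object; a recent `datasets` version may also require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

dataset = load_dataset("bullmount/squad_it")
print(dataset)  # shows the available splits and fields
```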
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@InProceedings{10.1007/978-3-030-03840-3_29,
author="Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto",
editor="Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo",
title="Neural Learning for Question Answering in Italian",
booktitle="AI*IA 2018 -- Advances in Artificial Intelligence",
year="2018",
publisher="Springer International Publishing",
address="Cham",
pages="389--402",
isbn="978-3-030-03840-3"
}
``` |
Aanisha | null | null | null | false | 1 | false | Aanisha/NeonGAN_dataset | 2022-04-14T07:57:23.000Z | null | false | d40a6b60af1f6d7f320307c3362e8122e8b72006 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Aanisha/NeonGAN_dataset/resolve/main/README.md | ---
license: mit
---
|
taln-ls2n | null | @InProceedings{meng-EtAl:2017:Long,
author = {Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu},
title = {Deep Keyphrase Generation},
booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {July},
year = {2017},
address = {Vancouver, Canada},
publisher = {Association for Computational Linguistics},
pages = {582--592},
url = {http://aclweb.org/anthology/P17-1054}
} | KP20k dataset for keyphrase extraction and generation in scientific paper. | false | 12 | false | taln-ls2n/kp20k | 2022-07-21T14:14:37.000Z | null | false | b02a20bfc2e25f0d900035f0fc4397063a4897c7 | [] | [
"annotations_creators:unknown",
"language_creators:unknown",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"task_categories:text-generation",
"task_ids:keyphrase-generation",
"task_ids:keyphrase-extraction",
"size_categories:100K<n<1M"
] | https://huggingface.co/datasets/taln-ls2n/kp20k/resolve/main/README.md | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- unknown
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 100K<n<1M
pretty_name: KP20k
---
# KP20k Benchmark Dataset for Keyphrase Generation
## About
KP20k is a dataset for benchmarking keyphrase extraction and generation models.
The data is composed of 570 809 abstracts and their associated titles from scientific articles.
Details about the dataset can be found in the original paper:
- Meng et al 2017.
[Deep keyphrase Generation](https://aclanthology.org/P17-1054.pdf)
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 582–592
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper:
- Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
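As an illustration of the matching step, the sketch below stems both a keyphrase and the source text before checking for presence; it is simplified (whitespace splitting instead of the spacy tokenization described above):
```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def stem_text(text):
    # simplified tokenization; the dataset itself was built with spacy (en_core_web_sm)
    return " ".join(stemmer.stem(token) for token in text.lower().split())

source = "Graph-based ranking approaches for keyphrase extraction"
keyphrase = "graph-based ranking approach"
print(stem_text(keyphrase) in stem_text(source))  # True: the stemmed forms match
```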
## Content
The dataset is divided into the following three splits:
| Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 530 809 | 5.29 | 58.19 | 10.93 | 17.36 | 13.52 |
| Test | 20 000 | 5.28 | 58.40 | 10.84 | 17.20 | 13.56 |
| Validation | 20 000 | 5.27 | 58.20 | 10.94 | 17.26 | 13.61 |
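The dataset can be loaded as follows (a minimal sketch, assuming the default configuration; a recent `datasets` version may also require `trust_remote_code=True`):
```python
from datasets import load_dataset

dataset = load_dataset("taln-ls2n/kp20k")
print({split: ds.num_rows for split, ds in dataset.items()})
```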
The following data fields are available:
- **id**: unique identifier of the document. **NB** There were no ids in the original dataset. The ids were generated using the python module shortuuid (https://pypi.org/project/shortuuid/)
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
**NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract). |
patriziobellan | null | @article{DBLP:journals/corr/abs-2203-04860,
author = {Patrizio Bellan and
Han van der Aa and
Mauro Dragoni and
Chiara Ghidini and
Simone Paolo Ponzetto},
title = {{PET:} {A} new Dataset for Process Extraction from Natural Language
Text},
journal = {CoRR},
volume = {abs/2203.04860},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2203.04860},
doi = {10.48550/arXiv.2203.04860},
eprinttype = {arXiv},
eprint = {2203.04860},
biburl = {https://dblp.org/rec/journals/corr/abs-2203-04860.bib}
} | Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, gateways, actors and flow information. We present our new resource, including a detailed overview of the annotation schema and guidelines, as well as a variety of baselines to benchmark the difficulty and challenges of business process extraction from text. | false | 1,323 | false | patriziobellan/PET | 2022-07-27T11:27:19.000Z | null | false | 3f1bc3ba42dd83211c520c02bf43d2c2cf5236a0 | [] | [
"arxiv:2203.04860",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:Friedrich et al. original dataset",
"task_categories:token-classification",
"task_ids:token c... | https://huggingface.co/datasets/patriziobellan/PET/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: PET
size_categories:
- 1K<n<10K
source_datasets:
[Friedrich et al. original dataset]
task_categories:
- token-classification
task_ids:
- token classification
- named entity recognition
- relation extraction
---
# PET: A NEW DATASET FOR PROCESS EXTRACTION FROM TEXT
# Dataset Card for PET
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Annotation Guidelines](#annotationguidelines)
- [Update](#updates)
- [Loading data](#loadingdata)
## Dataset Description
- **Homepage:** https://pdi.fbk.eu/pet-dataset/
- **Paper:** https://arxiv.org/abs/2203.04860
- **Point of Contact:** [Patrizio Bellan](pbellan@fbk.eu)
### Dataset Summary
Abstract. Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management. For this, we develop the first corpus of business process descriptions annotated with activities, actors, activity data, gateways and their conditions. We present our new resource to benchmark the difficulty and challenges of business process extraction from text.
### Supported Tasks and Leaderboards
- Token Classification
- Named Entity Recognition
- Relations Extraction
### Languages
English
## Dataset Structure
Test set to beanchmark *Business Process Extraction from Text* approaches.
### Data Instances
#### Token Classification
For each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, an integer representing the number of the sentence, a list of tokens representing the words of the sentence instance, and a list of *ner tags* (in IOB2 format) representing the annotation of process elements of the sentence.
Below, an example of data instance.
```
{
"document name":"doc-1.1",
"sentence-ID":1,
"tokens":["Whenever","the","sales","department","receives","an","order",",","a","new","process","instance","is","created","."],
"ner-tags":["O","B-Actor","I-Actor","I-Actor","B-Activity","B-Activity Data","I-Activity Data","O","O","O","O","O","O","O","O"]
}
```
#### Relations Extraction
For each instance, there is a document name representing the name of the document of the Friedrich *et al.* dataset, a list of tokens representing the words of the document instance, a list of integers representing the position of each word within its sentence, a list of *ner tags* (in IOB2 format) representing the annotation of each token, a list of sentence IDs giving the sentence number of each token, and a list of relations of the document.
Below, an example of data instance.
```
{
"document name": "doc-1.1",
"tokens": ["A", "small", "company",...],
"tokens-IDs": [0, 1, 2, ...],
"ner_tags": ["O", "O", "O", ...],
"sentence-IDs": [0, 0, 0, ...],
"relations": {
"source-head-sentence-ID": [1, 1, 1, ...],
"source-head-word-ID": [4, 4, 4, ...],
"relation-type": ["uses", "flow", "actor recipient", ...],
"target-head-sentence-ID": [1, 2, 1,...],
"target-head-word-ID": [5, 9, 1, ...]
}
}
```
### Data Fields
#### Token Classification
- *document name*: a string used to represent the name of the document.
- *sentence-ID*: an integer (starting from 0) representing the number of the sentence within the document.
- *tokens*: a list of strings representing the words of the sentence
- *ner-tags*: a list of strings representing the annotation for each word.
The allowed **ner-tags** are:
- **O**: An O tag indicates that a token belongs to no chunk.
- **B-Actor**: This tag indicates the beginning of an *Actor* chunk.
- **I-Actor**: This tag indicates that the tag is inside an *Actor* chunk.
- **B-Activity**: This tag indicates the beginning of an *Activity* chunk.
- **I-Activity**: This tag indicates that the tag is inside an *Activity* chunk.
- **B-Activity Data**: This tag indicates the beginning of an *Activity Data* chunk.
- **I-Activity Data**: This tag indicates that the tag is inside an *Activity Data* chunk.
- **B-Further Specification**: This tag indicates the beginning of a *Further Specification* chunk.
- **I-Further Specification**: This tag indicates that the tag is inside a *Further Specification* chunk.
- **B-XOR Gateway**: This tag indicates the beginning of a *XOR Gateway* chunk.
- **I-XOR Gateway**: This tag indicates that the tag is inside a *XOR Gateway* chunk.
- **B-Condition Specification**: This tag indicates the beginning of a *Condition Specification* chunk.
- **I-Condition Specification**: This tag indicates that the tag is inside a *Condition Specification* chunk.
- **B-AND Gateway**: This tag indicates the beginning of an *AND Gateway* chunk.
- **I-AND Gateway**: This tag indicates that the tag is inside an *AND Gateway* chunk.
To have a complete explanation of each process element tag please refer to the [research paper](https://arxiv.org/abs/2203.04860) and the [annotation guidelines](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf).
### Relations Extraction
- *document name*: a string used to represent the name of the document.
- *tokens*: a list of strings representing the words of the document
- *tokens-IDs*: a list of integers representing the word position within a sentence.
- *ner_tags*: a list of strings representing the annotation for each word (see ner-tags above).
- *sentence-IDs*: a list of integers representing the sentence number for each word of the document.
- *relations*: a list of document relations.
- *source-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the source entity.
- *source-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the source entity.
- *relation-type*: a list of relation tags.
- *target-head-sentence-ID*: a list of sentence ID pointing to the sentence number of the head (first token) of the target entity.
- *target-head-word-ID*: a list of token ID pointing to the word ID of the head (first token) of the target entity.
For instance, a relation is defined by the instances of *source-head-sentence-ID*, *source-head-word-ID*, *relation-type*, *target-head-sentence-ID*, and *target-head-word-ID* at the same index position.
In the following example, the first relation of the first document is shown:
```python
document_1=modelhub_dataset['test'][0]
relation = {
'source-head-sentence-ID': document_1['relations']['source-head-sentence-ID'][0],
'source-head-word-ID': document_1['relations']['source-head-word-ID'][0],
'relation-type': document_1['relations']['relation-type'][0],
'target-head-sentence-ID': document_1['relations']['target-head-sentence-ID'][0],
'target-head-word-ID': document_1['relations']['target-head-sentence-ID'][0],
}
print(relation)
```
the output is:
```python
{'relation-type': 'uses',
'source-head-sentence-ID': 1,
'source-head-word-ID': 4,
'target-head-sentence-ID': 1,
'target-head-word-ID': 1}
```
That means:
the entity in sentence number *1*, starting at the token position *4* has a *uses* relation with the entity in sentence number *1* starting at token position *1*
### Data Splits
The data was not split. It contains the test set only.
## Dataset Creation
### Curation Rationale
Although there is a long tradition of work in NLP on extracting entities and relations from text, to date there exists little work on the acquisition of business processes from unstructured data such as textual corpora of process descriptions. With this work we aim at filling this gap and establishing the first steps towards bridging data-driven information extraction methodologies from Natural Language Processing and the model-based formalization that is aimed from Business Process Management.
### Source Data
#### Initial Data Collection and Normalization
The dataset construction process has been split in five main phases:
1. Text pre-processing. As the first operation, we check the content of each document and we tokenized it. This initial check was necessary since some of the original texts were automatically translated into English by the authors of the dataset. The translations were never validated, indeed, several errors have been found and fixed.
2. Text Annotation. Each text has been annotated by using the [guidelines](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf). The team was composed of five annotators with high expertise in BPMN. Each document was assigned to three experts who were in charge of identifying all the elements and flows within each document. In this phase, we used the Inception tool to support the annotators.
3. Automatic annotation fixing. After the second phase, we ran an automatic procedure relying on a rule-based script to automatically fix annotations that were not compliant with the guidelines. For example, if a modal verb was erroneously included in the annotation of an Activity, the procedure removed it from the annotation. Another example is the missing of the article within an annotation related to an Actor. In this case, the script included it in the annotation. This phase allowed to remove possible annotation errors and to obtain annotations compliant with the guidelines.
4. Agreement Computation. Here, we computed, on the annotation provided by the experts, the agreement scores for each process element and for each relation between process elements pair adopting the methodology proposed in [Hripcsak *et al.*](https://academic.oup.com/jamia/article/12/3/296/812057?login=true). We measured the agreement in terms of the F1 measure because, besides being straightforward to calculate, it is directly interpretable. Note that chance-corrected measures like *k* approach the F1-measure as the number of cases that raters agree are negative grows. By following such a methodology, an annotation was considered in agreement among the experts if and only if they capture the same span of words and they assign the same process element tag to the annotation.
5. Reconciliation. The last phase consisted of the mitigation of disagreements within the annotations provided by the experts. The aim of this phase is to obtain a shared and agreed set of gold standard annotations on each text for both entities and relations. Such entities also enable the generation of the related full-connected process model flow that can be rendered by using, but not limited to, a BPMN diagram. During this last phase, among the 47 documents originally included into the dataset, 2 of them were discarded. These texts were not fully annotated by the annotators since they were not be able to completely understand which process elements were actually included in some specific parts of the text. For this reason, the final size of the dataset is 45 textual descriptions of the corresponding process models together with their annotations.
#### Who are the source language producers?
English
### Annotations
#### Annotation process
You can read about the annotation process in the original paper https://arxiv.org/abs/2203.04860
#### Who are the annotators?
Expert Annotators
### Personal and Sensitive Information
No personal or sensitive information issues.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset has no social impact
### Discussion of Biases
No bias found in the dataset
### Other Known Limitations
The *Further specification* and *AND Gateway* elements obtained very poor performance on the baselines proposed in the paper.
The *AND Gateway* is the less represented process elements in this dataset.
The *Further Specification* process element was the most difficult element to annotate.
## Additional Information
### Dataset Curators
- Patrizio Bellan (Fondazione Bruno Kessler, Trento, Italy and Free University of Bozen-Bolzano, Bolzano, Italy)
- Mauro Dragoni (Fondazione Bruno Kessler, Trento, Italy)
- Chiara Ghidini (Fondazione Bruno Kessler, Trento, Italy)
- Han van der Aa (University of Mannheim, Mannheim, Germany)
- Simone Ponzetto (University of Mannheim, Mannheim, Germany)
### Licensing Information
### Citation Information
```
@article{DBLP:journals/corr/abs-2203-04860,
author = {Patrizio Bellan and
Han van der Aa and
Mauro Dragoni and
Chiara Ghidini and
Simone Paolo Ponzetto},
title = {{PET:} {A} new Dataset for Process Extraction from Natural Language
Text},
journal = {CoRR},
volume = {abs/2203.04860},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2203.04860},
doi = {10.48550/arXiv.2203.04860},
eprinttype = {arXiv},
eprint = {2203.04860},
timestamp = {Wed, 16 Mar 2022 16:39:52 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2203-04860.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [Patrizio Bellan](https://pdi.fbk.eu/bellan/) for adding this dataset.
#### <a name="updates"></a>Update
- v1.0.0: Added token classification task
- v1.0.1: Added extraction relation task
## <a name="annotationguidelines"></a>Annotation Guidelines
### Inception Schema
The inception schema can be found [here](https://pdi.fbk.eu/pet/inception-schema.json)
### Annotation Guidelines
The Annotation guidelines and procedures adopted to annotate the PET dataset can be downloaded [here](https://pdi.fbk.eu/pet/annotation-guidelines-for-process-description.pdf)
### Article
The Article can be downloeaded [here](https://doi.org/10.48550/arXiv.2203.04860)
### Python Interface
A python interface (beta version) to interact with the dataset can be found [here](https://pypi.org/project/petdatasetreader/)
### Benchmarks
A python benchmarking procedure to test approaches on the PET dataset will be released soon.
## <a name="loadingdata"></a>Loading data
### Token-classification task
```python
from datasets import load_dataset
modelhub_dataset = load_dataset("patriziobellan/PET", name='token-classification')
```
### Relations-extraction task
```python
from datasets import load_dataset
modelhub_dataset = load_dataset("patriziobellan/PET", name='relations-extraction')
``` |
huggingnft | null | null | null | false | 1 | false | huggingnft/hapeprime | 2022-04-16T17:59:08.000Z | null | false | c4a3428883440ffabcba3afe9ed7ee94ffd13abb | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/hapeprime",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/hapeprime/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/hapeprime
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/hapeprime).
Model is available [here](https://huggingface.co/huggingnft/hapeprime).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/hapeprime")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk},
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
ajanco | null | null | null | false | 1 | false | ajanco/pesp | 2022-07-01T16:18:15.000Z | null | false | 61faf004bd4a26daa27ec3127bc55e5c60829cbe | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language:ru",
"license:afl-3.0",
"multilinguality:monolingual",
"source_datasets:original",
"task_categories:other"
] | https://huggingface.co/datasets/ajanco/pesp/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- machine-generated
language:
- ru
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: 'The Pages of Early Soviet Performance (PESP) uses machine learning to
generate multiple datasets of early-Soviet illustrated periodicals related to the
performing arts. By using computer vision techniques and training a YOLO (You Only
Look Once) real-time object detection model, we are producing textual and image
data that will facilitate new avenues of research about Soviet culture during the
first decades after the October Revolution (1917-1932).
Our starting point is Princeton University Library''s Digital PUL (DPUL) where ten
titles - totaling 526 issues and approximately 26,000 pages - of Soviet performance
journals have been digitized and can be freely viewed online. Journals are a diverse
and complex genre: taken together, this collection contains hundreds of thousands
of articles, poems, editorial commentary, advertisements as well as images, illustrations
and graphic art. Today, researchers can browse the journals and view and download
high-quality page images on DPUL.'
size_categories: []
source_datasets:
- original
task_categories:
- other
task_ids: []
---
# Pages of Early Soviet Performance (PESP)
This dataset was created as part of the [Pages of Early Soviet Performance](https://cdh.princeton.edu/projects/pages-early-soviet-performance/) project at Princeton and provides text and image research data from a previously scanned [collection of illustrated periodicals](https://dpul.princeton.edu/slavic/catalog?f%5Breadonly_collections_ssim%5D%5B%5D=Russian+Illustrated+Periodicals) held by Princeton University's Slavic Collections. The project was a partnership with ITMO University in Saint Petersburg. Our work focused on document segmentation and the prediction of image, text, title, and mixedtext regions in the document images. The mixedtext category refers to segments where the typeface and text layout are mixed with other visual elements such as graphics, photographs, and illustrations. This category identifies sections that present problems for OCR and also highlights the experimental use of text, images, and other elements in the documents.
For each of the ten journals of interest in Princeton's digital collections (DPUL), we started with the IIIF manifest URI. With these manifests, we downloaded each of the 24,000 document images. The URI for each of the images is included in the dataset and a full list is available in `IIIF_URIs.json`.
## Authors
Natalia Ermolaev, Thomas Keenan, Katherine Reischl, Andrew Janco, Quinn Dombrowski, Antonina Puchkovskaia, Alexander Jacobson, Anastasiia Mamonova, Michael Galperin and Vladislav Tretyak
## Journal manifests
- [Эрмитаж](https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest)
- [Вестник искусств](https://figgy.princeton.edu/concern/scanned_resources/ad256b35-9ad0-4f75-bf83-3bad1a7c6018/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/ad256b35-9ad0-4f75-bf83-3bad1a7c6018/manifest)
- [Советский театр](https://figgy.princeton.edu/concern/scanned_resources/f33993bb-a041-40a1-b11f-f660da825583/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/f33993bb-a041-40a1-b11f-f660da825583/manifest)
- [Рабис](https://figgy.princeton.edu/concern/scanned_resources/01f4236f-0a2f-473c-946f-d9bbec12f8ea/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/01f4236f-0a2f-473c-946f-d9bbec12f8ea/manifest)
- [Даёшь](https://figgy.princeton.edu/concern/scanned_resources/e036a5da-97a8-4041-ad62-a57af44359e2/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/e036a5da-97a8-4041-ad62-a57af44359e2/manifest)
- [Персимфанс](https://figgy.princeton.edu/concern/scanned_resources/af43d19a-3659-4dd0-a0fc-4c74ce521ad6/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/af43d19a-3659-4dd0-a0fc-4c74ce521ad6/manifest)
- [Тридцать дней](https://figgy.princeton.edu/concern/scanned_resources/d2d488af-2980-4554-a9ef-aacbaf463ec8/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/d2d488af-2980-4554-a9ef-aacbaf463ec8/manifest)
- [За пролетарское искусство](https://figgy.princeton.edu/concern/scanned_resources/38f89d57-8e64-4033-97d6-b925c407584a/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/38f89d57-8e64-4033-97d6-b925c407584a/manifest)
- [Бригада художников](https://figgy.princeton.edu/concern/scanned_resources/66d00a87-5ea9-439a-a909-95d697401a2b/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/66d00a87-5ea9-439a-a909-95d697401a2b/manifest)
- [Зрелища](https://figgy.princeton.edu/concern/scanned_resources/1af8b322-a0b1-46af-8541-5c3054af8098/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/1af8b322-a0b1-46af-8541-5c3054af8098/manifest)
## Model
Using [makesense.ai](https://www.makesense.ai/) and a custom active learning application called ["Mayakovsky"](https://github.com/CDH-ITMO-Periodicals-Project/mayakovsky) we generated training data for a [YOLOv5 model](https://docs.ultralytics.com/tutorials/train-custom-datasets/). The model was fine-tuned on the new labels and predictions were generated for all images in the collection.
## OCR
Using the model's predictions for image, title, text and mixedtext segments, we cropped the image using the bounding boxes and ran OCR on each document segment using Tesseract, Google Vision, and ABBYY FineReader. Given that the output of these various OCR engines can be difficult to compare, the document segments give a common denominator for comparison of OCR outputs. Having three variations of the extracted text can be useful for experiments with OCR post-correction.
## Dataset
The dataset contains an entry for each image with the following fields:
- filename: the image name (ex. 'Советский театр_1932 No. 4_16') with journal name, year, issue, page.
- dpul: the URL for the image's journal in Digital Princeton University Library
- journal: the journal name
- year: the year of the journal issue
- issue: the issue for the image
- URI: the IIIF URI used to fetch the image from Princeton's IIIF server
- yolo: the raw model prediction (ex '3 0.1655 0.501396 0.311'), in Yolo's normalized xywh format (object-class x y width height). The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3.
- yolo_predictions: a list with a dictionary for each of the model's predictions (see the conversion sketch after this list), with fields for:
- label: the predicted label
- x: the x-value location of the center point of the prediction
- y: the y-value location of the center point of the prediction
- w: the total width of the prediction's bounding box
- h: the total height of the prediction's bounding box
- abbyy_text: the text extracted from the predicted document segment using ABBYY FineReader. Note that due to costs, only about 800 images have this data
- tesseract_text: the text extracted from the predicted document segment using Tesseract.
- vision_text: the text extracted from the predicted document segment using Google Vision.
- vision_labels: entities recognized by Google Vision in image blocks and separated by | (ex. Boating|Boat|Printmaking)
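These normalized values can be converted back to pixel coordinates once the page image size is known. The helper below is a hypothetical sketch, not part of the dataset:
```python
def to_pixel_box(prediction, image_width, image_height):
    """Convert a normalized, center-based (x, y, w, h) prediction to (left, top, right, bottom) pixels."""
    x = prediction["x"] * image_width
    y = prediction["y"] * image_height
    w = prediction["w"] * image_width
    h = prediction["h"] * image_height
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)
```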
## Usage
```python
from datasets import load_dataset
dataset = load_dataset('ajanco/pesp')
for item in dataset['train']:
    for prediction in item['yolo_predictions']:
        print(prediction)
``` |
null | null | @inproceedings{sharma2018conceptual,
title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle = {Proceedings of ACL},
year = {2018},
} | Google's Conceptual Captions dataset has more than 3 million images, paired with natural-language captions.
In contrast with the curated style of the MS-COCO images, Conceptual Captions images and their raw descriptions are harvested from the web,
and therefore represent a wider variety of styles. The raw descriptions are harvested from the Alt-text HTML attribute associated with web images.
The authors developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness,
informativeness, fluency, and learnability of the resulting captions. | false | 589 | false | conceptual_captions | 2022-11-03T16:32:04.000Z | conceptual-captions | false | e1a96a49d0b314b3a9f4d71672d4dac97d6e146a | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:image-to-text",
"task_ids:image-captioning"
] | https://huggingface.co/datasets/conceptual_captions/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: conceptual-captions
pretty_name: Conceptual Captions
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: caption
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 623230370
num_examples: 3318333
- name: validation
num_bytes: 2846024
num_examples: 15840
download_size: 0
dataset_size: 626076394
- config_name: unlabeled
features:
- name: image_url
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 584520156
num_examples: 3318333
- name: validation
num_bytes: 2698726
num_examples: 15840
download_size: 567211172
dataset_size: 587218882
- config_name: labeled
features:
- name: image_url
dtype: string
- name: caption
dtype: string
- name: labels
sequence: string
- name: MIDs
sequence: string
- name: confidence_scores
sequence: float64
splits:
- name: train
num_bytes: 1199330856
num_examples: 2007090
download_size: 1282463277
dataset_size: 1199330856
---
# Dataset Card for Conceptual Captions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Conceptual Captions homepage](https://ai.google.com/research/ConceptualCaptions/)
- **Repository:** [Conceptual Captions repository](https://github.com/google-research-datasets/conceptual-captions)
- **Paper:** [Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning](https://www.aclweb.org/anthology/P18-1238/)
- **Leaderboard:** [Conceptual Captions leaderboard](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:conceptual-captions@google.com)
### Dataset Summary
Conceptual Captions is a dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
    # Download a single image, retrying on failure; returns None if every attempt fails.
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Fetch the images of a batch in parallel and store them in a new "image" column.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch
num_threads = 20
dset = load_dataset("conceptual_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for the Image Captioning task. The leaderboard for this task is available [here](https://ai.google.com/research/ConceptualCaptions/competition?active_tab=leaderboard). Official submission output captions are scored against the reference captions from the hidden test set using [this](https://github.com/tylin/coco-caption) implementation of the CIDEr (primary), ROUGE-L and SPICE metrics.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
#### `unlabeled`
Each instance in this configuration represents a single image with a caption:
```
{
'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
'caption': 'a very typical bus station'
}
```
#### `labeled`
Each instance in this configuration represents a single image with a caption and additional machine-generated image labels and confidence scores:
```
{
'image_url': 'https://thumb1.shutterstock.com/display_pic_with_logo/261388/223876810/stock-vector-christmas-tree-on-a-black-background-vector-223876810.jpg',
'caption': 'christmas tree on a black background .',
'labels': ['christmas tree', 'christmas decoration', 'font', 'text', 'graphic design', 'illustration','interior design', 'tree', 'christmas eve', 'ornament', 'fir', 'plant', 'pine', 'pine family', 'graphics'],
'MIDs': ['/m/025nd', '/m/05fc9mj', '/m/03gq5hm', '/m/07s6nbt', '/m/03c31', '/m/01kr8f', '/m/0h8nzzj', '/m/07j7r', '/m/014r1s', '/m/05ykl4', '/m/016x4z', '/m/05s2s', '/m/09t57', '/m/01tfm0', '/m/021sdg'],
'confidence_scores': [0.9818305373191833, 0.952756941318512, 0.9227379560470581, 0.8524878621101379, 0.7597672343254089, 0.7493422031402588, 0.7332468628883362, 0.6869218349456787, 0.6552258133888245, 0.6357356309890747, 0.5992692708969116, 0.585474967956543, 0.5222904086112976, 0.5113164782524109, 0.5036579966545105]
}
```
### Data Fields
#### `unlabeled`
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
#### `labeled`
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
- `labels`: A sequence of machine-generated labels obtained using the [Google Cloud Vision API](https://cloud.google.com/vision).
- `MIDs`: A sequence of machine-generated identifiers (MID) corresponding to the label's Google Knowledge Graph entry.
- `confidence_scores`: A sequence of confidence scores denoting how likely the corresponding labels are present in the image.
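As a quick illustration of these fields (a minimal sketch, not part of the official tooling), you could keep only the labels above a confidence threshold like this:
```python
from datasets import load_dataset
# Load the machine-labeled configuration; it exposes the fields documented above.
labeled = load_dataset("conceptual_captions", "labeled", split="train")
def keep_confident_labels(example, threshold=0.9):
    # Keep only the (label, MID, score) triples whose confidence passes the threshold.
    kept = [
        (label, mid, score)
        for label, mid, score in zip(
            example["labels"], example["MIDs"], example["confidence_scores"]
        )
        if score >= threshold
    ]
    example["labels"] = [label for label, _, _ in kept]
    example["MIDs"] = [mid for _, mid, _ in kept]
    example["confidence_scores"] = [score for _, _, score in kept]
    return example
labeled = labeled.map(keep_confident_labels)
```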
### Data Splits
#### `unlabeled`
The basic version of the dataset is split into Training and Validation splits. The Training split consists of 3,318,333 image-URL/caption pairs and the Validation split consists of 15,840 image-URL/caption pairs.
#### `labeled`
The labeled version of the dataset has a single Training split of 2,007,090 image-URL/caption pairs, a subset of the Training split of the `unlabeled` config.
## Dataset Creation
### Curation Rationale
From the paper:
> In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations, Conceptual Captions (Fig. 1), which has an order of magnitude more images than the COCO dataset. Conceptual Captions consists of about 3.3M ⟨image, description⟩ pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles.
### Source Data
#### Initial Data Collection and Normalization
From the homepage:
>For Conceptual Captions, we developed a fully automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. Because no human annotators are involved, the Conceptual Captions dataset generation process is highly scalable.
>
>To generate this dataset, we started with a Flume pipeline that processes billions of Internet webpages, extracting, filtering, and processing candidate image and caption pairs, and keeping those that pass through several filters.
>
>We first screen for certain properties like size, aspect ratio, adult content scores. These filters discard more than 65% of the candidates. Next, we use Alt-Texts for text-based filtering, removing captions with non-descriptive text (such as SEO tags or hashtags); we also discard texts with high sentiment polarity or adult content scores, resulting in just 3% of the incoming candidates passing through.
>
>In the next step, we filter out candidates for which none of the text tokens can be mapped to the visual content of the image. We use image classifiers (e.g., Google Cloud Vision APIs) to assign class labels to images and match these labels against the candidate text (allowing morphological transformations), discarding around 60% of the candidates that reach this stage.
>
>The candidates passing the above filters tend to be good Alt-text image descriptions. However, a large majority of these use proper names (for people, venues, locations, etc.), brands, dates, quotes, etc. This creates two distinct problems. First, some of these cannot be inferred based on the image pixels alone. This is problematic because unless the image has the necessary visual information it is not useful for training. Second, even if the proper names could be inferred from the image it is extremely difficult for a model to learn to perform both fine-grained classification and natural-language descriptions simultaneously. We posit that if automatic determination of names, locations, brands, etc. is needed, it should be done as a separate task that may leverage image meta-information (e.g. GPS info), or complementary techniques such as OCR.
>
>We address the above problems with the insight that proper names should be replaced by words that represent the same general notion, i.e., by their concept. For example, we remove locations (“Crowd at a concert in Los Angeles“ becomes “Crowd at a concert”), names (e.g., “Former Miss World Priyanka Chopra on the red carpet” becomes “actor on the red carpet”), proper noun modifiers (e.g., “Italian cuisine” becomes just “cuisine”) and noun phrases (e.g., “actor and actor” becomes “actors”). Around 20% of the samples are discarded during this transformation because it can leave sentences too short, or otherwise inconsistent.
>
>Finally, we perform another round of filtering to identify concepts with low-count. We cluster all resolved entities (e.g., “actor”, “dog”, “neighborhood”, etc.) and keep only the candidate types which have a count of over 100 mentions. This retains around 16K entity concepts such as: “person”, “actor”, “artist”, “player” and “illustration”. The less frequent ones that we dropped include “baguette”, “bridle”, “deadline”, “ministry” and “funnel”.
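The production pipeline itself is not public; purely as an illustration of this final step, the low-count filtering could be sketched as follows (entity extraction and clustering are assumed to have already happened):
```python
from collections import Counter
def frequent_concepts(resolved_entities, min_mentions=100):
    # resolved_entities: a (hypothetical) flat list of concept strings, one per mention,
    # e.g. ["actor", "dog", "actor", "neighborhood", ...].
    counts = Counter(resolved_entities)
    # Keep only concepts with more than `min_mentions` mentions; the paper reports ~16K survivors.
    return {concept for concept, count in counts.items() if count > min_mentions}
```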
#### Who are the source language producers?
Not specified.
### Annotations
#### Annotation process
Annotations are extracted jointly with the images using the automatic pipeline.
#### Who are the annotators?
Not specified.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Piyush Sharma, Nan Ding, Sebastian Goodman and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{sharma2018conceptual,
title = {Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning},
author = {Sharma, Piyush and Ding, Nan and Goodman, Sebastian and Soricut, Radu},
booktitle = {Proceedings of ACL},
year = {2018},
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) and [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
huggingnft | null | null | null | false | 1 | false | huggingnft/cryptoskulls | 2022-04-16T17:59:08.000Z | null | false | 9c3742c2e077f17b9e6544910cfc4e23ae81db9d | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/cryptoskulls",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/cryptoskulls/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/cryptoskulls
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cryptoskulls).
Model is available [here](https://huggingface.co/huggingnft/cryptoskulls).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptoskulls")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
jason9693 | null | null | null | false | 74 | false | jason9693/APEACH | 2022-07-05T04:18:07.000Z | apeach | false | e57e10cf1f793a854486c7dc040ac18d59595199 | [] | [
"arxiv:2202.12459",
"annotations_creators:crowdsourced",
"annotations_creators:crowd-generated",
"language_creators:found",
"language:ko",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_i... | https://huggingface.co/datasets/jason9693/APEACH/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
- crowd-generated
language_creators:
- found
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: apeach
pretty_name: 'APEACH'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- binary-classification
---
# Dataset for project: kor_hate_eval(APEACH)

## Sample Code
<a href="https://colab.research.google.com/drive/1djd0fuoMYIaf7VCHaLQIziJi4_yBJruP#scrollTo=VPR24ysr5Q7k"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="base"/></a>
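If you only need the data outside the notebook, a minimal loading sketch (field names as documented in this card; split keys can be checked from the printed dataset) is:
```python
from datasets import load_dataset
# `text` holds the comment and `class` the binary label (Default / Spoiled), as documented below.
dataset = load_dataset("jason9693/APEACH")
print(dataset)  # shows the split names and sizes (train = binarized BEEP!, eval = APEACH)
```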
## Dataset Description
Korean Hate Speech Evaluation Datasets: models are trained with [BEEP!](https://huggingface.co/datasets/kor_hate) and evaluated with [APEACH](https://github.com/jason9693/APEACH)
- **Repository: [Korean HateSpeech Evaluation Dataset](https://github.com/jason9693/APEACH)**
- **Paper: [APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets](https://arxiv.org/abs/2202.12459)**
- **Point of Contact: [Kichang Yang](mailto:ykcha9@gmail.com)**
### Languages
ko-KR
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
{'text': ['(현재 호텔주인 심정) 아18 난 마른하늘에 날벼락맞고 호텔망하게생겼는데 누군 계속 추모받네....',
'....한국적인 미인의 대표적인 분...너무나 곱고아름다운모습...그모습뒤의 슬픔을 미처 알지못했네요ㅠ'],
'class': ['Spoiled', 'Default']}
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"class": "ClassLabel(num_classes=2, names=['Default', 'Spoiled'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train (binarized BEEP!) | 7896 |
| valid (APEACH) | 3770 |
## Citation
```
@article{yang2022apeach,
title={APEACH: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets},
author={Yang, Kichang and Jang, Wonjun and Cho, Won Ik},
journal={arXiv preprint arXiv:2202.12459},
year={2022}
}
```
|
hysts | null | null | null | false | 1 | false | hysts/TADNE-sample-images | 2022-04-15T21:03:31.000Z | null | false | 37b01c5d75149551addd3ff5efc88725c78ac944 | [] | [] | https://huggingface.co/datasets/hysts/TADNE-sample-images/resolve/main/README.md | # TADNE sample images
Images generated by the [TADNE model](https://huggingface.co/hysts/TADNE).
## Note
- `prediction_results/anime-face-detector`
- https://github.com/hysts/anime-face-detector
- YOLOv3 + HRNetV2
- `prediction_results/deepdanbooru`
- https://github.com/KichangKim/DeepDanbooru
- `model-resnet_custom_v3.h5`
- `prediction_results/deepdanbooru/intermediate_features`
- Output by the following model
- 4096-dim
```python
import huggingface_hub
import tensorflow as tf
# TOKEN is assumed to be a Hugging Face access token with read access to the weights.
def create_model() -> tf.keras.Model:
    # Download the DeepDanbooru weights, drop the classifier head, and add global average
    # pooling so the model outputs the 4096-dim intermediate features described above.
    path = huggingface_hub.hf_hub_download('hysts/DeepDanbooru',
                                           'model-resnet_custom_v3.h5',
                                           use_auth_token=TOKEN)
    model = tf.keras.models.load_model(path)
    model = tf.keras.Model(model.input, model.layers[-4].output)
    layer = tf.keras.layers.GlobalAveragePooling2D()
    model = tf.keras.Sequential([model, layer])
    return model
```
|
jason9693 | null | null | null | false | 1 | false | jason9693/autotrain-data-kor_hate_eval | 2022-04-14T15:44:07.000Z | null | false | 716a41b2ec2e921f24b9b1df564a07f8643989a4 | [] | [
"task_categories:text-classification"
] | https://huggingface.co/datasets/jason9693/autotrain-data-kor_hate_eval/resolve/main/README.md | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: kor_hate_eval
## Dataset Description
This dataset has been automatically processed by AutoTrain for project kor_hate_eval.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "(\ud604\uc7ac \ud638\ud154\uc8fc\uc778 \uc2ec\uc815) \uc54418 \ub09c \ub9c8\ub978\ud558\ub298\uc5d0 \ub0a0\ubcbc\ub77d\ub9de\uace0 \ud638\ud154\ub9dd\ud558\uac8c\uc0dd\uacbc\ub294\ub370 \ub204\uad70 \uacc4\uc18d \ucd94\ubaa8\ubc1b\ub124....",
"target": 1
},
{
"text": "....\ud55c\uad6d\uc801\uc778 \ubbf8\uc778\uc758 \ub300\ud45c\uc801\uc778 \ubd84...\ub108\ubb34\ub098 \uacf1\uace0\uc544\ub984\ub2e4\uc6b4\ubaa8\uc2b5...\uadf8\ubaa8\uc2b5\ub4a4\uc758 \uc2ac\ud514\uc744 \ubbf8\ucc98 \uc54c\uc9c0\ubabb\ud588\ub124\uc694\u3160",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['Default', 'Spoiled'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7896 |
| valid | 3770 |
|
huggingnft | null | null | null | false | 1 | false | huggingnft/azuki | 2022-04-16T17:59:08.000Z | null | false | ab7ad9330ec63b9652e8f091c76dfe4c549ba606 | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/azuki",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/azuki/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/azuki
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/azuki).
Model is available [here](https://huggingface.co/huggingnft/azuki).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/azuki")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 1 | false | huggingnft/mutant-ape-yacht-club | 2022-04-16T17:59:08.000Z | null | false | 8495090e5604bf7070abe230cb59090e24ab25ae | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/mutant-ape-yacht-club",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/mutant-ape-yacht-club/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/mutant-ape-yacht-club
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/mutant-ape-yacht-club).
Model is available [here](https://huggingface.co/huggingnft/mutant-ape-yacht-club).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/mutant-ape-yacht-club")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
XiangPan | null | null | null | false | 1 | false | XiangPan/waimai_10k | 2022-04-14T22:38:31.000Z | null | false | dbb125403842b8924d864f09f1c0eb357bd57435 | [] | [] | https://huggingface.co/datasets/XiangPan/waimai_10k/resolve/main/README.md | # Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
XiangPan | null | null | null | false | 1 | false | XiangPan/ChnSentiCorp_htl_8k | 2022-04-14T22:46:21.000Z | null | false | b51fb34fb5ec42c55374863c3cbbf30f2c0f6f55 | [] | [
"license:other"
] | https://huggingface.co/datasets/XiangPan/ChnSentiCorp_htl_8k/resolve/main/README.md | ---
license: other
---
|
bullmount | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | false | 1 | false | bullmount/squad-it-exp | 2022-04-17T18:30:50.000Z | null | false | 966a1e77847f3e22de5fb961665331767082d571 | [] | [] | https://huggingface.co/datasets/bullmount/squad-it-exp/resolve/main/README.md | [Needs More Information]
# Dataset Card for squad_it_exp
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
SQuAD-it-exp is a dataset derived from the SQuAD-it dataset originally created by Croce et al. in 2018.<br/>
SQuAD-it-exp has been enriched by adding new unanswerable questions in SQuAD v2 format.<br/>
The dataset contains nearly 90,000 pairs of questions/answers in Italian.
### Languages
The dataset is for the ITALIAN language
### Citation Information
```
@InProceedings{10.1007/978-3-030-03840-3_29,
author="Croce, Danilo and Zelenanska, Alexandra and Basili, Roberto",
editor="Ghidini, Chiara and Magnini, Bernardo and Passerini, Andrea and Traverso, Paolo",
title="Neural Learning for Question Answering in Italian",
booktitle="AI*IA 2018 -- Advances in Artificial Intelligence",
year="2018",
publisher="Springer International Publishing",
address="Cham",
pages="389--402",
isbn="978-3-030-03840-3"
}
``` |
mwong | null | null | null | false | 23 | false | mwong/fever-claim-related | 2022-10-25T10:06:56.000Z | fever | false | 4942ad98569e62b710c547f39f916724088ef520 | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"task_categories:text-classification",
"task_ids:fact-checking"
] | https://huggingface.co/datasets/mwong/fever-claim-related/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and an evidence sentence, predict whether the claim is related to the evidence. |
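A minimal loading sketch (assuming the data files resolve directly through `datasets`; split and column names should be checked from the printed features):
```python
from datasets import load_dataset
dataset = load_dataset("mwong/fever-claim-related")
print(dataset)                        # split names and sizes
first_split = next(iter(dataset))
print(dataset[first_split].features)  # claim / evidence / label columns
print(dataset[first_split][0])        # one claim-evidence pair
```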
mwong | null | null | null | false | 3 | false | mwong/climate-claim-related | 2022-10-25T10:06:59.000Z | climate-fever | false | 4c366c1882d27123f4aa640b824a29998f1c642d | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"task_categories:text-classification",
"task_ids:fact-checking"
] | https://huggingface.co/datasets/mwong/climate-claim-related/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: climate-fever
pretty_name: climate-fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and an evidence sentence, predict whether the claim is related to the evidence. |
rocca | null | null | null | false | 1 | false | rocca/clip-keyphrase-embeddings | 2022-04-15T08:44:51.000Z | null | false | 3d6d4da7cc2b491448d172eebf397560fdded10a | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/rocca/clip-keyphrase-embeddings/resolve/main/README.md | ---
license: apache-2.0
---
The reddit_keywords.tsv file contains about 170k single word embeddings (scraped from reddit, filtering from an initial set of ~700k based on a minimum occurrence threshold) in this format:
```tsv
temporary -0.276235,-0.181357,-0.325729,0.129826,0.016490,-0.230246,-0.039997,-0.990187,-0.014679,-0.044081,-0.120046,-0.250614,-0.303871,-0.264685,-0.010019,-0.158764,0.086107,-0.018172,0.003005,-0.383161,0.412182,0.104374,0.041335,-0.018206,0.085453,0.016297,-0.015680,0.047611,-0.267469,0.046825,-0.367247,-0.020667,-0.348124,0.055445,-0.303014,0.087954,0.077361,-0.052910,0.404438,-0.107339,-0.027286,-0.174772,0.287671,0.118175,0.224158,0.210142,0.071295,0.052860,0.235766,-0.140977,-0.355314,-0.421407,0.076506,-0.050502,0.334099,-0.090490,-0.109730,0.517465,0.057345,0.322140,0.217463,-0.218778,0.200798,0.140536,0.160337,-0.302322,-0.098611,-0.100849,-0.171952,-0.333828,0.143839,-0.010286,0.103448,0.046543,-0.094578,-0.083335,0.216615,-0.185091,0.028321,-0.251232,-0.021522,0.135202,-0.059559,0.513552,-0.156604,-0.426751,0.029338,-0.086346,-0.001045,-0.210324,-0.196247,-0.127054,-1.732658,0.172654,0.064660,0.051606,0.393296,0.132444,0.068706,-0.264383,0.083144,0.357062,0.501775,0.099174,-0.179929,-0.031447,0.077417,0.141482,-0.302417,0.160296,0.484913,0.070273,0.117609,-0.024784,0.086234,-0.164586,-0.211837,0.243161,0.118945,0.051511,0.225772,-0.207831,-0.132836,0.096240,-0.443813,-0.347750,0.192331,0.119417,-0.067559,-0.208074,-0.117854,0.078054,0.401030,6.348532,-0.012304,-0.099742,-0.065778,-0.299336,-0.164993,-0.089712,0.153861,0.244722,0.138961,0.231054,-0.296617,-0.129511,-0.021327,-0.005316,-0.187050,-0.073289,0.019646,0.458080,-0.027326,0.283158,0.137897,-0.196312,0.023471,0.342747,0.109227,-0.137838,-0.008336,-0.212090,-0.277437,-0.088123,-0.150103,0.030977,0.094198,-0.086804,0.260256,0.036756,0.118120,0.409172,-0.174826,0.454344,-0.333416,0.069056,-0.143509,-0.263730,0.016844,-0.069509,0.240573,0.104100,-0.138059,-0.037173,-0.189750,0.015344,0.034381,-0.243249,-0.052328,-0.111057,0.015412,-0.114713,-0.321371,-0.207981,0.037036,0.103251,-0.011858,-0.289237,0.111561,-0.170033,-0.178935,-0.072297,-0.042672,0.190604,0.174237,-0.095280,0.302311,0.024456,0.038216,-0.223006,0.372462,0.323767,0.078378,-0.297173,-0.195620,0.417219,-0.187052,-0.542408,-0.134892,-0.226160,-0.530608,-0.161821,0.120570,0.010190,0.011004,0.218169,0.322732,0.095584,0.424685,0.293537,-0.191970,0.038989,0.042194,-0.388086,0.496116,0.204738,-0.145585,0.463766,-0.227611,0.127603,-0.074332,-0.199442,-0.055274,-0.042825,-0.120296,0.017672,-0.450518,-0.314901,-0.045003,0.031523,-0.079665,-0.315374,0.305340,0.004655,-0.083071,0.191413,0.043845,-0.213311,0.129284,-0.218377,-0.282955,-0.066901,-0.068339,0.002564,-0.146045,0.056669,0.186583,-0.048750,-0.072946,-0.071184,-0.202749,-0.217035,0.276314,-0.282127,0.128067,0.097095,-0.246900,0.232340,-0.238046,-0.304384,0.067498,0.018847,-0.058201,-0.283596,-0.215553,0.035647,0.096342,-0.175125,0.026618,-0.319932,0.423662,-0.063089,0.251738,0.073425,-0.242309,-0.272967,-0.218592,-0.050702,0.091938,0.026258,0.141810,0.014719,-0.415617,0.102258,0.323665,0.213101,-0.219119,-0.074313,-0.075735,0.031039,-0.085159,0.187972,6.345324,0.043324,-0.220423,0.052132,-0.001249,-0.114997,0.001450,0.004655,0.365987,0.536724,0.394376,0.003819,0.262951,-0.065768,-0.087903,0.027754,-0.069572,-2.503358,0.097163,0.222208,0.032130,0.004387,0.129158,-0.238117,0.168215,-0.196026,-0.092511,-0.095957,0.519996,0.053166,-0.138281,-0.071842,-0.024337,-0.182440,0.207966,0.262904,0.325529,-0.087270,0.199483,-0.098656,0.097615,0.014249,-0.074579,0.351518,0.094744,0.148318,-0.173189,0.033593,0.027609,-0.045624,0.188491,0.203499,0.229421,0.050809,-0.222414,-0.016397,0.086318,0.116249,-0.242203,0.120892,0.042388,0
.372276,-0.049954,-0.338517,-0.180879,0.083117,-0.284963,-0.178325,0.079176,0.019744,-0.023706,0.391955,-0.189259,-0.373736,0.149015,0.502598,-0.498027,-0.154271,-0.093499,-0.015292,-0.554516,0.355195,0.013390,0.475157,-0.366012,-0.138618,-0.045420,0.528353,0.134862,0.025135,0.141193,-0.075705,-0.265913,-0.227393,0.319143,-0.135606,-0.055334,-0.265537,0.124943,-0.176613,0.301410,0.243831,-0.190008,0.130851,0.057539,0.044628,0.205449,0.315888,-0.097760,-0.251490,-0.039288,-0.009690,-0.013857,0.292198,-0.114490,0.058920,0.032257,0.197568,-0.117429,-0.049549,-0.274646,-0.097156,-0.057420,0.261883,0.105485,-0.131978,-0.083086,0.492079,0.056150,0.163082,0.052169,-0.258462,0.164738,-0.121904,-0.349110,-0.399021,0.109116,0.108278,-0.102895,0.075380,0.120979,0.164346,-0.173332,0.038970,0.239190,0.404884,0.202795,0.021855,0.014958,0.220877,0.214221,-0.309071,0.157248,-0.182312,-0.069097,-0.271037,0.178052,-0.173829,0.410394,-0.023872,-0.118251,0.140042,-0.055087,0.269867,0.401690,0.251227,0.097262,0.225146,0.180279,-0.679833,0.014100,0.017635,-0.020673,0.288165,-0.162649,0.272822,0.118945,-0.178165,0.105399,0.076920,0.289865,0.479189,-0.379978,-0.074296,0.221087,0.110328,-0.434901,-0.009920,-0.329799,-0.326210,0.121444,-0.399424,0.131924,0.035093,-0.330143,-0.332781,-0.375134,-0.429944,-0.028793,-0.084496
permanent -0.125035,-0.234378,0.011184,0.196125,-0.178078,-0.278433,-0.169808,-0.477378,-0.091331,0.051704,0.052124,-0.342429,0.236901,-0.503706,-0.054427,0.378874,0.356929,0.098530,0.213484,-0.350122,0.476689,0.349297,-0.421352,0.131538,-0.037294,0.242601,0.110521,0.297674,-0.003884,-0.164057,-0.181568,-0.114656,-0.022335,-0.058460,-0.392774,0.592076,-0.037568,-0.093719,0.273190,0.031433,-0.276135,-0.129429,0.202552,0.247301,0.162464,0.331153,0.150925,0.103975,0.040481,-0.308759,-0.468749,-0.118056,-0.177642,0.071796,0.019445,-0.051476,0.051152,0.208523,0.207935,0.215263,0.240936,-0.260006,0.273524,-0.102152,0.086342,-0.583079,0.104273,0.052269,-0.079865,-0.353752,-0.042390,0.052536,0.373398,-0.083875,-0.085006,-0.094790,0.209163,0.116218,-0.000282,0.063966,-0.142604,0.170597,-0.014974,0.339414,-0.459107,-0.563759,0.073553,0.011647,0.132144,0.024776,-0.104373,-0.136440,-1.464302,0.560471,0.167517,0.387043,0.013425,0.354265,-0.273501,-0.138256,0.346923,0.277063,0.132669,-0.100053,-0.031133,-0.137729,-0.038392,0.127757,0.201051,0.122387,-0.091108,0.112959,-0.076981,-0.091213,0.259445,-0.250712,-0.086296,0.077766,-0.400991,-0.061569,0.295548,-0.546704,-0.181826,-0.145557,-0.003189,-0.065816,0.313023,-0.340320,-0.232408,0.108998,0.259111,0.151180,-0.166929,6.414488,0.501402,-0.091578,-0.057641,-0.482665,-0.142667,-0.264874,0.361437,0.394330,-0.229426,-0.091375,-0.243107,0.303489,-0.005123,-0.055163,0.015856,0.069838,0.031935,0.278514,0.166143,0.474343,0.105431,-0.076213,0.039309,-0.111546,0.012941,-0.164336,-0.017733,-0.281277,-0.086701,-0.025275,0.286478,-0.012244,0.419024,-0.218707,0.303495,-0.144674,-0.015870,0.211324,-0.125048,0.148710,-0.164560,0.090908,-0.009281,-0.350103,0.044986,0.231121,0.168193,-0.172223,-0.155072,-0.071494,-0.171293,0.057206,0.509076,-0.468795,-0.048402,-0.062685,-0.073230,-0.009878,-0.075013,-0.291539,0.082641,0.120893,-0.036523,-0.371523,-0.089427,-0.135797,0.039259,-0.154754,-0.454964,0.118403,0.345686,0.308087,0.189306,-0.186566,-0.052662,0.092485,0.443999,0.371476,0.544698,0.163462,0.211605,0.028551,-0.331050,-0.118373,-0.130023,-0.238063,-0.194845,-0.147683,0.269614,0.094254,0.080196,-0.016950,0.226205,0.251216,0.029845,0.241027,0.037793,0.250348,0.178878,-0.370625,0.021588,0.053517,0.089717,0.034888,-0.127252,-0.095143,-0.048264,-0.038292,-0.114234,0.081980,-0.344073,-0.199645,0.133945,0.046776,-0.164213,-0.125509,-0.078354,0.015241,0.525474,0.109172,-0.063231,-0.204491,-0.021912,-0.035685,-0.036702,-0.021009,-0.296292,-0.110856,-0.017419,-0.346117,0.123624,-0.022428,0.178189,-0.263868,-0.301899,-0.151516,0.189889,-0.468874,0.149208,-0.343711,-0.091530,0.159136,-0.205789,0.289440,0.010746,-0.458909,-0.003668,-0.154943,0.190147,-0.072661,-0.098433,-0.260481,-0.013620,0.239844,0.175296,0.013196,0.417082,0.388816,0.610390,0.218911,-0.285315,-0.397401,-0.407900,0.112988,-0.276133,-0.189056,0.077117,-0.106753,-0.315161,0.237132,0.145833,0.157616,-0.081629,-0.078093,0.011940,0.147423,-0.169398,0.207446,6.412528,0.098466,-0.073285,0.456039,-0.219336,0.225516,-0.126300,-0.085544,0.067576,0.480005,0.323118,-0.062557,0.029795,0.159936,0.207522,0.061824,0.081886,-2.257158,-0.030088,0.212595,0.034217,0.134851,0.060425,0.273878,0.141417,-0.372521,-0.256853,-0.449594,0.111124,-0.035848,0.246770,0.013314,0.095374,0.004721,0.067856,0.068077,0.466395,-0.224380,0.224336,-0.260102,-0.119188,-0.059211,-0.037218,0.257120,0.285553,-0.044297,0.090169,0.235599,0.122039,-0.496953,-0.158066,-0.082501,-0.067778,-0.129525,0.083023,0.095774,-0.055067,0.059558,-0.207958,-0.003060,-0.12745
1,0.016735,0.315940,0.089926,0.034565,0.117776,-0.771444,0.075713,0.224506,-0.118574,0.011695,-0.278467,0.103532,-0.355093,0.072645,0.554889,-0.227020,-0.206997,-0.082026,0.034534,-0.053694,-0.206349,0.212462,0.297205,-0.001969,-0.115018,-0.185391,0.439835,0.206251,0.275428,0.382456,0.143425,-0.285471,-0.118383,-0.042975,-0.127418,-0.086245,-0.013727,-0.001033,0.160316,0.266595,-0.023524,-0.233376,-0.004141,-0.194787,0.362530,0.329915,0.154557,-0.059378,-0.262286,-0.268349,0.006456,0.362081,-0.316577,-0.308787,-0.025319,-0.024902,0.180332,-0.309457,0.196798,-0.184453,0.050009,-0.142146,-0.352558,-0.272899,0.163149,-0.057184,0.129606,0.054385,0.049695,-0.017398,-0.508930,-0.402540,0.189464,-0.468858,-0.395760,0.130050,-0.095393,-0.081481,-0.148284,0.425590,0.208711,-0.175284,-0.026740,0.072050,0.488482,0.123898,0.185273,0.119057,-0.220553,0.085528,0.105237,0.326145,0.150759,0.081640,-0.403034,0.053875,-0.100664,0.156698,-0.125445,0.020690,0.407489,0.089345,0.201447,0.087196,0.215535,-0.358267,0.663917,0.102616,-0.911252,0.242669,0.106915,0.214653,0.051177,0.009364,-0.139282,-0.122035,0.074249,0.219813,-0.034759,0.089622,0.741162,0.248126,-0.458298,0.276372,-0.191041,-0.380611,0.273156,-0.160504,0.010753,-0.103899,0.379443,0.501672,-0.108174,-0.292099,0.073850,-0.201425,-0.711728,0.321330,0.047050
``` |
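A minimal sketch for loading the file above and querying it by cosine similarity (the separator between the word and the vector is assumed to be a tab or other whitespace, as the `.tsv` extension and sample suggest):
```python
import numpy as np
def load_keyword_embeddings(path="reddit_keywords.tsv"):
    # Each line: "<word>\t<comma-separated floats>"; split once on whitespace to be safe.
    words, vectors = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, values = line.strip().split(maxsplit=1)
            words.append(word)
            vectors.append(np.array([float(v) for v in values.split(",")], dtype=np.float32))
    matrix = np.vstack(vectors)
    matrix /= np.linalg.norm(matrix, axis=1, keepdims=True)  # unit-normalise for cosine similarity
    return words, matrix
def nearest_keywords(query_embedding, words, matrix, k=5):
    # query_embedding: a CLIP text/image embedding with the same dimensionality as the rows above.
    query = np.asarray(query_embedding, dtype=np.float32)
    query /= np.linalg.norm(query)
    scores = matrix @ query
    top = np.argsort(-scores)[:k]
    return [(words[i], float(scores[i])) for i in top]
```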
Peihao | null | null | null | false | 1 | false | Peihao/test-dateset | 2022-10-25T10:08:29.000Z | c4 | false | cac1d6c26bc0c5266661019473ac0ffb33bcf9bc | [] | [
"arxiv:1910.10683",
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"license:odc-by",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeli... | https://huggingface.co/datasets/Peihao/test-dateset/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: c4
pretty_name: C4
---
# Dataset Card for C4
## Table of Contents
- [Dataset Card for C4](#dataset-card-for-c4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683
### Dataset Summary
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org".
This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4
It comes in four variants:
- `en`: 305GB in JSON format
- `en.noblocklist`: 380GB in JSON format
- `en.noclean`: 2.3TB in JSON format
- `realnewslike`: 15GB in JSON format
The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.
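For example, a hedged loading sketch (config names assumed to match the variant names above, served from the AllenAI mirror) is:
```python
from datasets import load_dataset
# Stream the `en` variant instead of downloading ~305GB of JSON up front.
c4_en = load_dataset("allenai/c4", "en", split="train", streaming=True)
print(next(iter(c4_en)))  # {'url': ..., 'text': ..., 'timestamp': ...}
```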
### Supported Tasks and Leaderboards
C4 is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
An example from the `en` config is:
```
{
'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/',
'text': 'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.',
'timestamp': '2019-04-25T12:57:54Z'
}
```
### Data Fields
The data have several fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
### Data Splits
| name | train |validation|
|----------------|--------:|---------:|
| en |364868892| 364608|
| en.noblocklist |393391519| 393226|
| en.noclean | ?| ?|
| realnewslike | 13799838| 13863|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in [c4.py](https://github.com/tensorflow/datasets/blob/5952d3d60d60e1727786fa7a9a23d24bb463d4d6/tensorflow_datasets/text/c4.py) by Tensorflow Datasets.
The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by [langdetect](https://github.com/Mimino666/langdetect) was discarded.
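As an illustration only (the actual filtering lives in the TensorFlow Datasets `c4.py` linked above), that language check could be approximated with the `langdetect` package:
```python
from langdetect import detect_langs  # pip install langdetect
def is_probably_english(text, threshold=0.99):
    # Keep a page only if langdetect assigns English a probability of at least `threshold`.
    try:
        return any(lang.lang == "en" and lang.prob >= threshold for lang in detect_langs(text))
    except Exception:
        return False  # empty or undetectable text is discarded
```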
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
|
null | null | @inproceedings{changpinyo2021cc12m,
title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
booktitle = {CVPR},
year = {2021},
} | Conceptual 12M is a large-scale dataset of 12 million
image-text pairs specifically meant to be used for vision-and-language pre-training.
Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M. | false | 25 | false | conceptual_12m | 2022-11-03T16:31:22.000Z | cc12m | false | 2fd749bd49b36c4243cda8d800ed74753d442f5a | [] | [
"arxiv:2102.08981",
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:image-to-text",
"task_ids:image-captioning"
] | https://huggingface.co/datasets/conceptual_12m/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: cc12m
pretty_name: Conceptual 12M
dataset_info:
features:
- name: image_url
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2794168030
num_examples: 12423374
download_size: 2707204412
dataset_size: 2794168030
---
# Dataset Card for Conceptual 12M
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Conceptual 12M repository](https://github.com/google-research-datasets/conceptual-12m)
- **Paper:** [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:conceptual-captions@google.com)
### Dataset Summary
Conceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training.
Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
return batch
num_threads = 20
dset = load_dataset("conceptual_12m")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for the Image Captioning task.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
Each instance represents a single image with a caption:
```
{
'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
'caption': 'a very typical bus station'
}
```
### Data Fields
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
### Data Splits
There is only a training split, with a total of 12,423,374 rows.
## Dataset Creation
### Curation Rationale
Conceptual 12M shares the same pipeline with Conceptual Captions (CC3M), but relaxes some processing steps.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> To arrive at CC12M, we keep
the image-text filtering intact, and relax the unimodal filters only. First, for image-based filtering, we set the maximum ratio of larger to smaller dimension to 2.5 instead of 2.
We still keep only JPEG images with size greater than
400 pixels, and still exclude images that trigger pornography detectors. Second, in text-based filtering, we allow text
between 3 and 256 words in the alt-text. We still discard
candidates with no noun or no determiner, but permit ones
without prepositions. We discard the heuristics regarding
high unique-word ratio covering various POS tags and word
capitalization. We set the maximum fraction of word repetition allowed to 0.2. Given a larger pool of text due to the
above relaxations, the threshold for counting a word type as
rare is increased from 5 to 20.
> The main motivation for CC3M to
perform text transformation is that a majority of candidate
captions contain ultrafine-grained entities such as proper
names (people, venues, locations, etc.), making it extremely
difficult to learn as part of the image captioning task. In
contrast, we are not restricted by the end task of image caption generation. Our intuition is that relatively more difficult pre-training data would lead to better transferability.
We thus do not perform hypernymization or digit substitution. [...] The only exception to the “keep alt-texts as
raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy
of the individuals in these images. For this step, we use the
Google Cloud Natural Language APIs to detect all named
entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M
are transformed in this fashion.
#### Who are the source language producers?
Not specified.
### Annotations
#### Annotation process
Annotations are extracted jointly with the images using the automatic pipeline.
#### Who are the annotators?
Not specified.
### Personal and Sensitive Information
From the paper:
> The only exception to the “keep alt-texts as
raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy
of the individuals in these images. For this step, we use the
Google Cloud Natural Language APIs to detect all named
entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M
are transformed in this fashion.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Soravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{changpinyo2021cc12m,
title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
booktitle = {CVPR},
year = {2021},
}
```
### Contributions
Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset. |
yumingh | null | null | null | false | 1 | false | yumingh/course_project | 2022-04-15T16:33:27.000Z | null | false | d9553ef2399b048492e0ea67f5fa73d2bd24e55a | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/yumingh/course_project/resolve/main/README.md | ---
license: afl-3.0
---
|
ajanco | null | null | null | false | 1 | false | ajanco/deep | 2022-04-18T20:41:12.000Z | null | false | 15fe2e2ea5adada9197ba37af304eaecca32fd8b | [] | [
"license:mit"
] | https://huggingface.co/datasets/ajanco/deep/resolve/main/README.md | ---
license: mit
---
|
student | null | null | null | false | 2 | false | student/FFHQ | 2022-04-16T06:24:36.000Z | null | false | 35d54d53495778c09cabd7f86019cac79e578aed | [] | [] | https://huggingface.co/datasets/student/FFHQ/resolve/main/README.md | FFHQ 70000张png图片
Link: https://pan.baidu.com/s/1XDfTKWOhtwAAQQJ0KBU4RQ
Extraction code: bowj
## Flickr-Faces-HQ Dataset (FFHQ)






Flickr-Faces-HQ (FFHQ) is a high-quality image dataset of human faces, originally created as a benchmark for generative adversarial networks (GAN):
> **A Style-Based Generator Architecture for Generative Adversarial Networks**<br>
> Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA)<br>
> http://stylegan.xyz/paper
The dataset consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background. It also has good coverage of accessories such as eyeglasses, sunglasses, hats, etc. The images were crawled from [Flickr](https://www.flickr.com/), thus inheriting all the biases of that website, and automatically aligned and cropped using [dlib](http://dlib.net/). Only images under permissive licenses were collected. Various automatic filters were used to prune the set, and finally [Amazon Mechanical Turk](https://www.mturk.com/) was used to remove the occasional statues, paintings, or photos of photos.
For business inquiries, please contact [researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com)
For press and other inquiries, please contact Hector Marinez at [hmarinez@nvidia.com](mailto:hmarinez@nvidia.com)
## Licenses
The individual images were published in Flickr by their respective authors under either [Creative Commons BY 2.0](https://creativecommons.org/licenses/by/2.0/), [Creative Commons BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/), [Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/), [Public Domain CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/), or [U.S. Government Works](http://www.usa.gov/copyright.shtml) license. All of these licenses allow **free use, redistribution, and adaptation for non-commercial purposes**. However, some of them require giving **appropriate credit** to the original author, as well as **indicating any changes** that were made to the images. The license and original author of each image are indicated in the metadata.
* [https://creativecommons.org/licenses/by/2.0/](https://creativecommons.org/licenses/by/2.0/)
* [https://creativecommons.org/licenses/by-nc/2.0/](https://creativecommons.org/licenses/by-nc/2.0/)
* [https://creativecommons.org/publicdomain/mark/1.0/](https://creativecommons.org/publicdomain/mark/1.0/)
* [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/)
* [http://www.usa.gov/copyright.shtml](http://www.usa.gov/copyright.shtml)
The dataset itself (including JSON metadata, download script, and documentation) is made available under [Creative Commons BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license by NVIDIA Corporation. You can **use, redistribute, and adapt it for non-commercial purposes**, as long as you (a) give appropriate credit by **citing our paper**, (b) **indicate any changes** that you've made, and (c) distribute any derivative works **under the same license**.
* [https://creativecommons.org/licenses/by-nc-sa/4.0/](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Overview
All data is hosted on Google Drive:
| Path | Size | Files | Format | Description
| :--- | :--: | ----: | :----: | :----------
| [ffhq-dataset](https://drive.google.com/open?id=1u2xu7bSrWxrbUxk-dT-UvEJq8IjdmNTP) | 2.56 TB | 210,014 | | Main folder
| ├ [ffhq-dataset-v1.json](https://drive.google.com/open?id=1IB0BFbN_eRZx9UkJqLHSgJiQhqX-PrI6) | 254 MB | 1 | JSON | Metadata including copyright info, URLs, etc.
| ├ [images1024x1024](https://drive.google.com/open?id=1u3Hbfn3Q6jsTlte3BY85CGwId77H-OOu) | 89.1 GB | 70,000 | PNG | Aligned and cropped images at 1024×1024
| ├ [thumbnails128x128](https://drive.google.com/open?id=1uJkWCpLUM-BnXW3H_IgVMdfENeNDFNmC) | 1.95 GB | 70,000 | PNG | Thumbnails at 128×128
| ├ [in-the-wild-images](https://drive.google.com/open?id=1YyuocbwILsHAjTusSUG-_zL343jlVBhf) | 955 GB | 70,000 | PNG | Original images from Flickr
| ├ [tfrecords](https://drive.google.com/open?id=1LTBpJ0W_WLjqza3zdayligS8Dh1V1gA6) | 273 GB | 9 | tfrecords | Multi-resolution data for [StyleGAN](http://stylegan.xyz/code) and [ProGAN](https://github.com/tkarras/progressive_growing_of_gans)
| └ [zips](https://drive.google.com/open?id=1WocxvZ4GEZ1DI8dOz30aSj2zT6pkATYS) | 1.28 TB | 4 | ZIP | Contents of each folder as a ZIP archive.
High-level statistics:

For use cases that require separate training and validation sets, we have appointed the first 60,000 images to be used for training and the remaining 10,000 for validation. In the [StyleGAN paper](http://stylegan.xyz/paper), however, we used all 70,000 images for training.
We have explicitly made sure that there are no duplicate images in the dataset itself. However, please note that the `in-the-wild` folder may contain multiple copies of the same image in cases where we extracted several different faces from the same image.
## Download script
You can either grab the data directly from Google Drive or use the provided [download script](./download_ffhq.py). The script makes things considerably easier by automatically downloading all the requested files, verifying their checksums, retrying each file several times on error, and employing multiple concurrent connections to maximize bandwidth.
```
> python download_ffhq.py -h
usage: download_ffhq.py [-h] [-j] [-s] [-i] [-t] [-w] [-r] [-a]
[--num_threads NUM] [--status_delay SEC]
[--timing_window LEN] [--chunk_size KB]
[--num_attempts NUM]
Download Flickr-Face-HQ (FFHQ) dataset to current working directory.
optional arguments:
-h, --help show this help message and exit
-j, --json download metadata as JSON (254 MB)
-s, --stats print statistics about the dataset
-i, --images download 1024x1024 images as PNG (89.1 GB)
-t, --thumbs download 128x128 thumbnails as PNG (1.95 GB)
-w, --wilds download in-the-wild images as PNG (955 GB)
-r, --tfrecords download multi-resolution TFRecords (273 GB)
-a, --align recreate 1024x1024 images from in-the-wild images
--num_threads NUM number of concurrent download threads (default: 32)
--status_delay SEC time between download status prints (default: 0.2)
--timing_window LEN samples for estimating download eta (default: 50)
--chunk_size KB chunk size for each download thread (default: 128)
--num_attempts NUM number of download attempts per file (default: 10)
```
```
> python ..\download_ffhq.py --json --images
Downloading JSON metadata...
\ 100.00% done 1/1 files 0.25/0.25 GB 43.21 MB/s ETA: done
Parsing JSON metadata...
Downloading 70000 files...
| 100.00% done 70000/70000 files 89.19 GB/89.19 GB 59.87 MB/s ETA: done
```
The script also serves as a reference implementation of the automated scheme that we used to align and crop the images. Once you have downloaded the in-the-wild images with `python download_ffhq.py --wilds`, you can run `python download_ffhq.py --align` to reproduce exact replicas of the aligned 1024×1024 images using the facial landmark locations included in the metadata.
## Metadata
The `ffhq-dataset-v1.json` file contains the following information for each image in a machine-readable format:
```
{
"0": { # Image index
"category": "training", # Training or validation
"metadata": { # Info about the original Flickr photo:
"photo_url": "https://www.flickr.com/photos/...", # - Flickr URL
"photo_title": "DSCF0899.JPG", # - File name
"author": "Jeremy Frumkin", # - Author
"country": "", # - Country where the photo was taken
"license": "Attribution-NonCommercial License", # - License name
"license_url": "https://creativecommons.org/...", # - License detail URL
"date_uploaded": "2007-08-16", # - Date when the photo was uploaded to Flickr
"date_crawled": "2018-10-10" # - Date when the photo was crawled from Flickr
},
"image": { # Info about the aligned 1024x1024 image:
"file_url": "https://drive.google.com/...", # - Google Drive URL
"file_path": "images1024x1024/00000.png", # - Google Drive path
"file_size": 1488194, # - Size of the PNG file in bytes
"file_md5": "ddeaeea6ce59569643715759d537fd1b", # - MD5 checksum of the PNG file
"pixel_size": [1024, 1024], # - Image dimensions
"pixel_md5": "47238b44dfb87644460cbdcc4607e289", # - MD5 checksum of the raw pixel data
"face_landmarks": [...] # - 68 face landmarks reported by dlib
},
"thumbnail": { # Info about the 128x128 thumbnail:
"file_url": "https://drive.google.com/...", # - Google Drive URL
"file_path": "thumbnails128x128/00000.png", # - Google Drive path
"file_size": 29050, # - Size of the PNG file in bytes
"file_md5": "bd3e40b2ba20f76b55dc282907b89cd1", # - MD5 checksum of the PNG file
"pixel_size": [128, 128], # - Image dimensions
"pixel_md5": "38d7e93eb9a796d0e65f8c64de8ba161" # - MD5 checksum of the raw pixel data
},
"in_the_wild": { # Info about the in-the-wild image:
"file_url": "https://drive.google.com/...", # - Google Drive URL
"file_path": "in-the-wild-images/00000.png", # - Google Drive path
"file_size": 3991569, # - Size of the PNG file in bytes
"file_md5": "1dc0287e73e485efb0516a80ce9d42b4", # - MD5 checksum of the PNG file
"pixel_size": [2016, 1512], # - Image dimensions
"pixel_md5": "86b3470c42e33235d76b979161fb2327", # - MD5 checksum of the raw pixel data
"face_rect": [667, 410, 1438, 1181], # - Axis-aligned rectangle of the face region
"face_landmarks": [...], # - 68 face landmarks reported by dlib
"face_quad": [...] # - Aligned quad of the face region
}
},
...
}
```
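As a minimal sketch (not part of the original README), the metadata file can be read with the standard `json` module once it has been downloaded to the working directory; the field names below follow the structure shown above.
```python
import json

with open("ffhq-dataset-v1.json", "r") as f:
    metadata = json.load(f)

entry = metadata["0"]  # metadata for image index 0
print(entry["category"])                       # "training" or "validation"
print(entry["metadata"]["license"])            # license of the original Flickr photo
print(entry["image"]["file_path"])             # e.g. "images1024x1024/00000.png"
print(len(entry["image"]["face_landmarks"]))   # 68 dlib face landmarks
```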
## Acknowledgements
We thank Jaakko Lehtinen, David Luebke, and Tuomas Kynkäänniemi for in-depth discussions and helpful comments; Janne Hellsten, Tero Kuosmanen, and Pekka Jänis for compute infrastructure and help with the code release.
We also thank Vahid Kazemi and Josephine Sullivan for their work on automatic face detection and alignment that enabled us to collect the data in the first place:
> **One Millisecond Face Alignment with an Ensemble of Regression Trees**<br>
> Vahid Kazemi, Josephine Sullivan<br>
> Proc. CVPR 2014<br>
> https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Kazemi_One_Millisecond_Face_2014_CVPR_paper.pdf
|
huseinzol05 | null | null | null | false | 1 | false | huseinzol05/Malay-TTS-Yasmin | 2022-04-25T06:21:15.000Z | null | false | c31370fdaed32ac5f64fed5ee6d7dd6397f5e47a | [] | [] | https://huggingface.co/datasets/huseinzol05/Malay-TTS-Yasmin/resolve/main/README.md | # Malay-TTS-Yasmin
All notebooks and code related at https://github.com/huseinzol05/malaya-speech/tree/master/data/azure-tts
## Attributes
### Wiki and News
- 24000 sample rate, super clean.
- narrator `ms-MY-YasminNeural`.
- approximately 99.4 hours.
- Texts from Malay Wikipedia and News.
- Sentences between 2 words and 20 words.
### Parliament
- 24000 sample rate, super clean.
- narrator `ms-MY-YasminNeural`.
- approximately 142 hours.
- Texts from Malaysia Malay Parliament.
- Sentences between 2 words and 25 words.
## how-to
### Wiki and News
1. Download [populated-text.json](populated-text.json) and [tts-malay-yasmin.tar.gz](tts-malay-yasmin.tar.gz).
2. To get wav and transcript,
```python
import json
import soundfile as sf
with open('populated-text.json') as fopen:
texts = json.load(fopen)
index = 0
text = texts[index]
y, sr = sf.read(f'female/{index}.wav')
```
### Parliament
1. Download [populated-parliament.json](populated-parliament.json) and [tts-malay-yasmin-parliament.tar.gz](tts-malay-yasmin-parliament.tar.gz).
2. To get wav and transcript,
```python
import json
import soundfile as sf
with open('populated-parliament.json') as fopen:
texts = json.load(fopen)
index = 0
text = texts[index]
y, sr = sf.read(f'female-parliament/{index}.wav')
``` |
dl4phys | null | null | null | false | 1 | false | dl4phys/top_tagging | 2022-04-18T07:43:02.000Z | null | false | 61ff60cb76c1fabef90bc8013587f0fb6a4fa142 | [] | [
"arxiv:1902.09914",
"license:cc-by-4.0"
] | https://huggingface.co/datasets/dl4phys/top_tagging/resolve/main/README.md | ---
license: cc-by-4.0
---
# Dataset Card for Top Quark Tagging
## Table of Contents
- [Dataset Card for Top Quark Tagging](#dataset-card-for-top-quark-tagging)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/2603256
- **Paper:** https://arxiv.org/abs/1902.09914
- **Point of Contact:** [Gregor Kasieczka](gregor.kasieczka@uni-hamburg.de)
### Dataset Summary
Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are produced with Pythia8 with its default tune for a center-of-mass energy of 14 TeV. Multiple interactions and pile-up are ignored. The leading 200 jet constituent four-momenta \\( (E, p_x, p_y, p_z) \\) are stored, with zero-padding applied to jets with fewer than 200 constituents.
### Supported Tasks and Leaderboards
- `tabular-classification`: The dataset can be used to train a model for tabular binary classification, which consists in predicting whether an event is produced from a top signal or quark-gluon background. Success on this task is typically measured by achieving a *high* [accuracy](https://huggingface.co/metrics/accuracy) and AUC score.
## Dataset Structure
### Data Instances
Each instance in the dataset consists of the four-momenta of the leading 200 jet constituents, sorted by \\(p_T\\). For jets with fewer than 200 constituents, zero-padding is applied. The four-momenta of the top-quark are also provided, along with a label in the `is_signal_new` column to indicate whether the event stems from a top-quark (1) or QCD background (0). An example instance looks as follows:
```
{'E_0': 474.0711364746094,
'PX_0': -250.34703063964844,
'PY_0': -223.65196228027344,
'PZ_0': -334.73809814453125,
...
'E_199': 0.0,
'PX_199': 0.0,
'PY_199': 0.0,
'PZ_199': 0.0,
'truthE': 0.0,
'truthPX': 0.0,
'truthPY': 0.0,
'truthPZ': 0.0,
'ttv': 0,
'is_signal_new': 0}
```
### Data Fields
The fields in the dataset have the following meaning:
- `E_i`: the energy of jet constituent \\(i\\).
- `PX_i`: the \\(x\\) component of the jet constituent's momentum
- `PY_i`: the \\(y\\) component of the jet constituent's momentum
- `PZ_i`: the \\(z\\) component of the jet constituent's momentum
- `truthE`: the energy of the top-quark
- `truthPX`: the \\(x\\) component of the top quark's momentum
- `truthPY`: the \\(y\\) component of the top quark's momentum
- `truthPZ`: the \\(z\\) component of the top quark's momentum
- `ttv`: a flag that indicates which split (train, validation, or test) a jet belongs to. This is redundant, since each split is provided as a separate dataset
- `is_signal_new`: the label for each jet. A 1 indicates a top-quark, while a 0 indicates QCD background.
### Data Splits
| | train | validation | test |
|------------------|--------:|-----------:|-------:|
| Number of events | 1211000 | 403000 | 404000 |
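As a rough sketch of how the fields can be used (not part of the original card), the transverse momentum of the leading constituent is \\( p_T = \sqrt{p_x^2 + p_y^2} \\). Loading the dataset directly by its Hub identifier, and the assumption that four-momenta are given in GeV, are both assumptions here.
```python
import math
from datasets import load_dataset

# Assumed to load directly from the Hub; adjust if a local copy is used instead.
jets = load_dataset("dl4phys/top_tagging", split="train")

jet = jets[0]
# Transverse momentum of the leading constituent from its four-momentum.
pt_leading = math.sqrt(jet["PX_0"] ** 2 + jet["PY_0"] ** 2)
print(pt_leading, jet["is_signal_new"])
```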
### Licensing Information
This dataset is released under the [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) license.
### Citation Information
```
@dataset{kasieczka_gregor_2019_2603256,
author = {Kasieczka, Gregor and
Plehn, Tilman and
Thompson, Jennifer and
Russel, Michael},
title = {Top Quark Tagging Reference Dataset},
month = mar,
year = 2019,
publisher = {Zenodo},
version = {v0 (2018\_03\_27)},
doi = {10.5281/zenodo.2603256},
url = {https://doi.org/10.5281/zenodo.2603256}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
|
dl4phys | null | null | null | false | 1 | false | dl4phys/top_tagging_nsubjettiness | 2022-04-16T16:27:05.000Z | null | false | bb8e41b794e8da834d39efba3c090c9a1d30cbaa | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/dl4phys/top_tagging_nsubjettiness/resolve/main/README.md | ---
license: cc-by-4.0
---
|
dl4phys | null | null | null | false | 1 | false | dl4phys/top_tagging_images | 2022-04-17T10:33:58.000Z | null | false | 69e53d567fb37a42b0756132dee8b7c11cbddc55 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/dl4phys/top_tagging_images/resolve/main/README.md | ---
license: cc-by-4.0
---
|
surrey-nlp | null | This is the dataset repository for PLOD Dataset accepted to be published at LREC 2022.
The dataset can help build sequence labelling models for the task Abbreviation Detection. | false | 6 | false | surrey-nlp/PLOD-filtered | 2022-07-30T12:14:27.000Z | plod-filtered | false | 1248106ce21f11d2a7702ba022a7e64289ae147c | [] | [
"arxiv:2204.12061",
"annotations_creators:Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan",
"language_creators:found",
"language:en",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:t... | https://huggingface.co/datasets/surrey-nlp/PLOD-filtered/resolve/main/README.md | ---
annotations_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
language_creators:
- found
language:
- en
license: "cc-by-sa-4.0"
multilinguality:
- monolingual
paperswithcode_id: plod-filtered
pretty_name: 'PLOD: An Abbreviation Detection Dataset'
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- abbreviation-detection
---
# PLOD: An Abbreviation Detection Dataset
This is the repository for PLOD Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task Abbreviation Detection.
### Dataset
We provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here.
1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
3. The [SDU Shared Task](https://sites.google.com/view/sdu-aaai22/home) data we use for zero-shot testing is [available here](https://huggingface.co/datasets/surrey-nlp/SDU-test).
# Dataset Card for PLOD-filtered
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection
- **Paper:** https://arxiv.org/abs/2204.12061
- **Leaderboard:** https://paperswithcode.com/sota/abbreviationdetection-on-plod-filtered
- **Point of Contact:** [Diptesh Kanojia](mailto:d.kanojia@surrey.ac.uk)
### Dataset Summary
This PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.
### Supported Tasks and Leaderboards
This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point comprises an ID, a set of `tokens` present in the text, a set of `pos_tags` for the corresponding tokens obtained via Spacy NER, and a set of `ner_tags` which are limited to `AC` for `Acronym` and `LF` for `long-forms`.
An example from the dataset:
{'id': '1',
'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
### Data Fields
- id: the row identifier for the dataset point.
- tokens: The tokens contained in the text.
- pos_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.
- ner_tags: The tags for abbreviations and long-forms.
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Filtered | 112652 | 24140 | 24140|
| Unfiltered | 113860 | 24399 | 24399|
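As a minimal usage sketch (not part of the original card), the filtered variant can be loaded directly from the Hub and the first training example inspected; the exact feature types are assumptions based on the fields described above.
```python
from datasets import load_dataset

plod = load_dataset("surrey-nlp/PLOD-filtered")
example = plod["train"][0]

# Print each token next to its abbreviation/long-form tag id.
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(token, tag)
```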
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was extracted from PLOS journals online, then tokenized and normalized.
#### Who are the source language producers?
PLOS Journal
## Additional Information
### Dataset Curators
The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,
Diptesh Kanojia, Constantin Orasan.
### Licensing Information
CC-BY-SA 4.0
### Citation Information
[Needs More Information]
### Installation
We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
Please see the instructions at these websites to set up your own custom training with our dataset and reproduce the experiments using spaCy.
Alternatively, you can reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb), which uses the HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the model readme cards linked below. Before starting, please perform the following steps:
```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```
Now, you can use the notebook to reproduce the experiments.
### Model(s)
Our best performing models are hosted on the HuggingFace models repository
| Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description |
| --- | :---: | :---: | --- |
| [RoBERTa<sub>large</sub>](https://huggingface.co/roberta-large) | [RoBERTa<sub>large</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa<sub>large</sub> language model |
| [RoBERTa<sub>base</sub>](https://huggingface.co/roberta-base) | -soon- | [RoBERTa<sub>base</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa<sub>base</sub> language model |
| [AlBERT<sub>large-v2</sub>](https://huggingface.co/albert-large-v2) | [AlBERT<sub>large-v2</sub>-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the AlBERT<sub>large-v2</sub> language model |
On the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.<br/>
### Usage
You can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo.
| |
surrey-nlp | null | This is the dataset repository for PLOD Dataset accepted to be published at LREC 2022.
The dataset can help build sequence labelling models for the task Abbreviation Detection. | false | 2 | false | surrey-nlp/PLOD-unfiltered | 2022-10-24T17:44:59.000Z | plod-an-abbreviation-detection-dataset-for | false | 1ae1e145148c3744360ede5f91923d56226f1412 | [] | [
"arxiv:2204.12061",
"annotations_creators:Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan",
"language_creators:found",
"language:en",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:t... | https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered/resolve/main/README.md | ---
annotations_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: plod-an-abbreviation-detection-dataset-for
pretty_name: 'PLOD: An Abbreviation Detection Dataset'
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- abbreviation-detection
---
# PLOD: An Abbreviation Detection Dataset
This is the repository for PLOD Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task Abbreviation Detection.
### Dataset
We provide two variants of our dataset - Filtered and Unfiltered. They are described in our paper here.
1. The Filtered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
2. The Unfiltered version can be accessed via [Huggingface Datasets here](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) and a [CONLL format is present here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection).<br/>
3. The [SDU Shared Task](https://sites.google.com/view/sdu-aaai22/home) data we use for zero-shot testing is [available here](https://huggingface.co/datasets/surrey-nlp/SDU-test).
# Dataset Card for PLOD-unfiltered
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/surrey-nlp/PLOD-AbbreviationDetection
- **Paper:** https://arxiv.org/abs/2204.12061
- **Leaderboard:** https://paperswithcode.com/sota/abbreviationdetection-on-plod-an-abbreviation
- **Point of Contact:** [Diptesh Kanojia](mailto:d.kanojia@surrey.ac.uk)
### Dataset Summary
This PLOD Dataset is an English-language dataset of abbreviations and their long-forms tagged in text. The dataset has been collected for research from the PLOS journals indexing of abbreviations and long-forms in the text. This dataset was created to support the Natural Language Processing task of abbreviation detection and covers the scientific domain.
### Supported Tasks and Leaderboards
This dataset primarily supports the Abbreviation Detection Task. It has also been tested on a train+dev split provided by the Acronym Detection Shared Task organized as a part of the Scientific Document Understanding (SDU) workshop at AAAI 2022.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point comprises an ID, a set of `tokens` present in the text, a set of `pos_tags` for the corresponding tokens obtained via Spacy NER, and a set of `ner_tags` which are limited to `AC` for `Acronym` and `LF` for `long-forms`.
An example from the dataset:
{'id': '1',
'tokens': ['Study', '-', 'specific', 'risk', 'ratios', '(', 'RRs', ')', 'and', 'mean', 'BW', 'differences', 'were', 'calculated', 'using', 'linear', 'and', 'log', '-', 'binomial', 'regression', 'models', 'controlling', 'for', 'confounding', 'using', 'inverse', 'probability', 'of', 'treatment', 'weights', '(', 'IPTW', ')', 'truncated', 'at', 'the', '1st', 'and', '99th', 'percentiles', '.'],
'pos_tags': [8, 13, 0, 8, 8, 13, 12, 13, 5, 0, 12, 8, 3, 16, 16, 0, 5, 0, 13, 0, 8, 8, 16, 1, 8, 16, 0, 8, 1, 8, 8, 13, 12, 13, 16, 1, 6, 0, 5, 0, 8, 13],
'ner_tags': [0, 0, 0, 3, 4, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
### Data Fields
- id: the row identifier for the dataset point.
- tokens: The tokens contained in the text.
- pos_tags: the Part-of-Speech tags obtained for the corresponding token above from Spacy NER.
- ner_tags: The tags for abbreviations and long-forms.
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Filtered | 112652 | 24140 | 24140|
| Unfiltered | 113860 | 24399 | 24399|
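As a minimal sketch (not part of the original card), the integer `ner_tags` can be mapped back to their string names through the dataset features, assuming the column is stored as a sequence of `ClassLabel` values.
```python
from datasets import load_dataset

plod = load_dataset("surrey-nlp/PLOD-unfiltered", split="train")

# Assumes `ner_tags` is a Sequence(ClassLabel) feature.
tag_names = plod.features["ner_tags"].feature.names
example = plod[0]
print([tag_names[t] for t in example["ner_tags"]])
```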
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was extracted from PLOS journals online, then tokenized and normalized.
#### Who are the source language producers?
PLOS Journal
## Additional Information
### Dataset Curators
The dataset was initially created by Leonardo Zilio, Hadeel Saadany, Prashant Sharma,
Diptesh Kanojia, Constantin Orasan.
### Licensing Information
CC-BY-SA 4.0
### Citation Information
[Needs More Information]
### Installation
We use the custom NER pipeline in the [spaCy transformers](https://spacy.io/universe/project/spacy-transformers) library to train our models. This library supports training via any pre-trained language models available at the :rocket: [HuggingFace repository](https://huggingface.co/).<br/>
Please see the instructions at these websites to set up your own custom training with our dataset and reproduce the experiments using spaCy.
Alternatively, you can reproduce the experiments via the Python notebook we [provide here](https://github.com/surrey-nlp/PLOD-AbbreviationDetection/blob/main/nbs/fine_tuning_abbr_det.ipynb), which uses the HuggingFace Trainer class to perform the same experiments. The exact hyperparameters can be obtained from the model readme cards linked below. Before starting, please perform the following steps:
```bash
git clone https://github.com/surrey-nlp/PLOD-AbbreviationDetection
cd PLOD-AbbreviationDetection
pip install -r requirements.txt
```
Now, you can use the notebook to reproduce the experiments.
### Model(s)
Our best performing models are hosted on the HuggingFace models repository:
| Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description |
| --- | :---: | :---: | --- |
| [RoBERTa<sub>large</sub>](https://huggingface.co/roberta-large) | [RoBERTa<sub>large</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa<sub>large</sub> language model |
| [RoBERTa<sub>base</sub>](https://huggingface.co/roberta-base) | -soon- | [RoBERTa<sub>base</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa<sub>base</sub> language model |
| [AlBERT<sub>large-v2</sub>](https://huggingface.co/albert-large-v2) | [AlBERT<sub>large-v2</sub>-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the AlBERT<sub>large-v2</sub> language model |
On the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.<br/>
### Usage
You can use the HuggingFace Model link above to find the instructions for using this model in Python locally using the notebook provided in the Git repo.
| |
necm77 | null | null | null | false | 9 | false | necm77/negotiation_data | 2022-04-19T20:50:08.000Z | null | false | 8158095eeaed506a0f8bd650eaf665d1cb30e33d | [] | [] | https://huggingface.co/datasets/necm77/negotiation_data/resolve/main/README.md | |
Paercky | null | null | null | false | 1 | false | Paercky/Tweets | 2022-04-16T21:33:23.000Z | null | false | cc19b6f9f276ad7fc1d5d76870cdc605a2805e8b | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Paercky/Tweets/resolve/main/README.md | ---
license: afl-3.0
---
|
Paercky | null | null | null | false | 2 | false | Paercky/autotrain-data-Tweets | 2022-10-25T10:08:35.000Z | null | false | 1bb2f5816caf50002da4e7bd5ec845fec22eb4cd | [] | [
"language:en",
"task_categories:text-classification"
] | https://huggingface.co/datasets/Paercky/autotrain-data-Tweets/resolve/main/README.md | ---
language:
- en
task_categories:
- text-classification
---
# AutoTrain Dataset for project: Tweets
## Dataset Description
This dataset has been automatically processed by AutoTrain for project Tweets.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "So the mask mandate goes away the day after #Furnal2022 ends, and you know what will happen after th[...]",
"target": 0
},
{
"text": "@EwanMacKenna Also does anyone know whether Margaret Buttimer of Bandon is still in prison for the '[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['1', '2', '3'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1679 |
| valid | 420 |
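As a minimal sketch (not part of the original card), the splits can be loaded from the Hub and the integer `target` converted back to its class name, assuming the column is stored as a `ClassLabel` as shown in the fields above.
```python
from datasets import load_dataset

ds = load_dataset("Paercky/autotrain-data-Tweets")
example = ds["train"][0]

# Assumes `target` is a ClassLabel feature.
label_name = ds["train"].features["target"].int2str(example["target"])
print(example["text"][:80], "->", label_name)
```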
|
huseinzol05 | null | null | null | false | 2 | false | huseinzol05/Malay-TTS-Osman | 2022-04-17T05:39:21.000Z | null | false | a4d324be68761bd614dd3be85ccaac497001fabb | [] | [] | https://huggingface.co/datasets/huseinzol05/Malay-TTS-Osman/resolve/main/README.md | # Malay-TTS-Osman
All notebooks and code related at https://github.com/huseinzol05/malaya-speech/tree/master/data/azure-tts
## Attributes
### Wiki and News
- 24000 sample rate, super clean.
- narrator `ms-MY-OsmanNeural`.
- approximately 94.5 hours.
- Texts from Malay Wikipedia and News.
- Sentences between 2 words and 20 words.
### Parliament
- 24000 sample rate, super clean.
- narrator `ms-MY-OsmanNeural`.
- approximately 133.2 hours.
- Texts from Malaysia Malay Parliament.
- Sentences between 2 words and 25 words.
## how-to
### Wiki and News
1. Download [populated-text.json](populated-text.json) and [tts-malay-osman.tar.gz](tts-malay-osman.tar.gz).
2. To get wav and transcript,
```python
import json
import soundfile as sf
with open('populated-text.json') as fopen:
texts = json.load(fopen)
index = 0
text = texts[index]
y, sr = sf.read(f'male/{index}.wav')
```
### Parliament
1. Download [populated-parliament.json](populated-parliament.json) and [tts-malay-osman-parliament.tar.gz](tts-malay-osman-parliament.tar.gz).
2. To get wav and transcript,
```python
import json
import soundfile as sf
with open('populated-parliament.json') as fopen:
texts = json.load(fopen)
index = 0
text = texts[index]
y, sr = sf.read(f'male-parliament/{index}.wav')
``` |
enimai | null | null | null | false | 2 | false | enimai/MuST-C-ru | 2022-04-17T05:23:38.000Z | null | false | 53997f64314d8360a2b2d23b7445ba47d4c66b9a | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/enimai/MuST-C-ru/resolve/main/README.md | ---
license: afl-3.0
---
|
laion | null | null | null | false | 1 | false | laion/laion5B-watermark-safety-ordered | 2022-05-22T02:38:15.000Z | null | false | 428e9a44c5ce16d4c36bd24548985fe7dbedd6a9 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion5B-watermark-safety-ordered/resolve/main/README.md | ---
license: cc-by-4.0
---
https://github.com/rom1504/laion-prepro/blob/main/laion5B/usage_guide/watermark_safety_usage.py |
Divyanshu | null | @misc{https://doi.org/10.48550/arxiv.2204.08776,
doi = {10.48550/ARXIV.2204.08776},
url = {https://arxiv.org/abs/2204.08776},
author = {Aggarwal, Divyanshu and Gupta, Vivek and Kunchukuttan, Anoop},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {IndicXNLI: Evaluating Multilingual Inference for Indian Languages},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
} | IndicXNLI is a translated version of XNLI to 11 Indic Languages. As with XNLI, the goal is
to predict textual entailment (does sentence A imply/contradict/neither sentence
B) and is a classification task (given two sentences, predict one of three
labels). | false | 1,436 | false | Divyanshu/indicxnli | 2022-10-06T15:26:00.000Z | null | false | 7092c27872e919f31d0496fb8b9c47bd2cba3f6c | [] | [
"arxiv:2204.08776",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc0-1.0",
"... | https://huggingface.co/datasets/Divyanshu/indicxnli/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: IndicXNLI
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for "IndicXNLI"
## Table of Contents
- [Dataset Card for "IndicXNLI"](#dataset-card-for-indicxnli)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
- **Homepage:** <https://github.com/divyanshuaggarwal/IndicXNLI>
- **Paper:** [IndicXNLI: Evaluating Multilingual Inference for Indian Languages](https://arxiv.org/abs/2204.08776)
- **Point of Contact:** [Divyanshu Aggarwal](mailto:divyanshuggrwl@gmail.com)
### Dataset Summary
INDICXNLI is similar to the existing
XNLI dataset in shape/form, but focuses on the Indic language family. INDICXNLI includes NLI
data for eleven major Indic languages:
Assamese (‘as’), Gujarati (‘gu’), Kannada (‘kn’),
Malayalam (‘ml’), Marathi (‘mr’), Odia (‘or’),
Punjabi (‘pa’), Tamil (‘ta’), Telugu (‘te’), Hindi
(‘hi’), and Bengali (‘bn’).
### Supported Tasks and Leaderboards
**Tasks:** Natural Language Inference
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One example from the `hi` dataset is given below in JSON format.
```python
{'premise': 'अवधारणात्मक रूप से क्रीम स्किमिंग के दो बुनियादी आयाम हैं-उत्पाद और भूगोल।',
'hypothesis': 'उत्पाद और भूगोल क्रीम स्किमिंग का काम करते हैं।',
'label': 1 (neutral) }
```
### Data Fields
- `premise (string)`: Premise Sentence
- `hypothesis (string)`: Hypothesis Sentence
- `label (integer)`: Integer label `0` if hypothesis `entails` the premise, `2` if hypothesis `negates` the premise and `1` otherwise.
### Data Splits
<!-- Below is the dataset split given for `hi` dataset.
```python
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 392702
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 5010
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 2490
})
})
``` -->
Language | ISO 639-1 Code |Train | Test | Dev |
--------------|----------------|-------|-----|------|
Assamese | as | 392,702 | 5,010 | 2,490 |
Bengali | bn | 392,702 | 5,010 | 2,490 |
Gujarati | gu | 392,702 | 5,010 | 2,490 |
Hindi | hi | 392,702 | 5,010 | 2,490 |
Kannada | kn | 392,702 | 5,010 | 2,490 |
Malayalam | ml |392,702 | 5,010 | 2,490 |
Marathi | mr |392,702 | 5,010 | 2,490 |
Oriya | or | 392,702 | 5,010 | 2,490 |
Punjabi | pa | 392,702 | 5,010 | 2,490 |
Tamil | ta | 392,702 | 5,010 | 2,490 |
Telugu | te | 392,702 | 5,010 | 2,490 |
<!-- The dataset split remains same across all languages. -->
## Dataset usage
Code snippet for loading the dataset with the `datasets` library.
```python
from datasets import load_dataset
dataset = load_dataset("Divyanshu/indicxnli")
```
## Dataset Creation
Machine translation of the English XNLI dataset into the 11 listed Indic languages.
### Curation Rationale
[More information needed]
### Source Data
[XNLI dataset](https://cims.nyu.edu/~sbowman/xnli/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
#### Human Verification Process
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
## Considerations for Using the Data
### Social Impact of Dataset
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
### Discussion of Biases
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
### Other Known Limitations
[Detailed in the paper](https://arxiv.org/abs/2204.08776)
### Dataset Curators
Divyanshu Aggarwal, Vivek Gupta, Anoop Kunchukuttan
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@misc{https://doi.org/10.48550/arxiv.2204.08776,
doi = {10.48550/ARXIV.2204.08776},
url = {https://arxiv.org/abs/2204.08776},
author = {Aggarwal, Divyanshu and Gupta, Vivek and Kunchukuttan, Anoop},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {IndicXNLI: Evaluating Multilingual Inference for Indian Languages},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!-- ### Contributions -->
|
surafelkindu | null | null | null | false | 2 | false | surafelkindu/Amharic_corpus | 2022-04-17T18:19:47.000Z | null | false | 539315f30d0cfd1aa5765a65a4dcd6d93d168d20 | [] | [
"license:mit"
] | https://huggingface.co/datasets/surafelkindu/Amharic_corpus/resolve/main/README.md | ---
license: mit
---
ዛጎል ዜና- መንግስት አምስት ሺህ የሚጠጉ እስረኞችን “ተመራቂዎች” በሚል መፍታቱን ይፋ ባደረገበት ቀን በተመሳሳይ አምቦ ተማሪዎች ተቃውሞ ማሰማታቸው ተሰማ። ተማሪዎቹ የአስቸኳይ አዋጁን በመጣስ ” መረራ ይፈታ” እያሉ ተቃውሞ መጀመራቸው ነው የተሰማው። ከትምህርት ቤት ወደ ትምህርት ቤት የሰፋው ተቃውሞ ብህይወት ላይ አደጋ ባያስከትልም በንብረት ላይ ግን ጉዳት አድርሷል። መኪና ሲቃጠል ያዩ የአይን ምስክሮች ተቃውሞውን በጀመሩት ላይም ሆነ ዘግይተው በተቀላቀሉት ላይ እንደ ቀደሞው ያለ የሃይል እርምጃ አልተወሰደም። የኦሮሚያ ሚዲያ ኔት ወርክ እንዳለው ደግሞ በርካታ ሰዎች ታስረዋል።
ለወትሮው ህገ መንግስቱን በሃይል ለመናድ የተነሱ፣ የነውጥ ሃይሎች፣ አተራማሾች፣ የጥፋት ሃይል ተላላኪዎች በሚል ተጠርጥረው በቁጥጥር ስር ከዋሉት መካከል 4035 የሚሆኑት ሲፈቱ እስረኞቹ “ስድስት ኮርስ ወስደው ተመረቁ” ነው የተባለው።
የኦሮሚያ ማረሚያ ቤቶች አስተዳደር ኮሚሽነር ፀሃይ በላይን ጠቅሶ ፋና እንደዘገበው ጦላይ ተሃድሶ ማዕከል ከገቡ 5 ሺህ 600 ሰልጣኞች መካከል 4035 ያህሉ በስድስት ዋና ዋና ጉዳዮች ሥልጠና ወስደው ተመርቀዋል። ኮርሶቹም በፍፁም፣ አይደገምም፣ የቀለም አብዮት፣ የኢትዮጰያ ህገ–መንግስት እና የኢትዮጵያ ህዳሴ የሚሉ ርዕሰ ጉዳዮችን የተካተቱባቸው ነው።
አበምርቃቱ ላይ ጠቅላይ ሚኒስትር ሃይለማርያም ተገኝተው “ ሽኝት” አደርጉላቸው ተብሏል። በርካታ ቃል ተገብቶላቸዋል። መስመርም ተሰምሮላቸዋል። “በደምና በአጥንት የተጻፈውን ሕገመንግስት፣ ዋጋ የተከፈለበትን ህገመንግስት” በማለት አቶ ሃይለማርያም በሃይል ለመናድ መሞከር አይቻልም በለዋል። “ ልክ እናንተ አይደገምም እንዳላችሁት፣ እኛም አይደገም እንላለን” ብለዋል። የፋና ዘገባ እንዲህ ይነበባል።
አዲስ አበባ ፣ ታህሳስ 12 ፣ 2009 (ኤፍ ቢ ሲ) በሃገሪቱ የተለያዩ አካባቢዎች በተፈጠረው ሁከት ውስጥ ተሳትፈው በማሰልጠኛ ጣቢያዎች የተሃድሶ ስልጠና ሲወስዱ የነበሩ ዜጎች ወደ መጡበት እየተመለሱ ነው። በአዋሽ፣ አላጌና ብር ሸለቆ ማዕከላት የተሃድሶ ስልጠና የወሰዱ ዜጎች ናቸው ወደ አካባቢያቸው እየተመለሱ ያሉት። በጦላይ ለአንድ ወር የተሃድሶ ስልጠና የወሰዱ 4 ሺህ 35 ዜጎችም ሥልጠናቸውን አጠናቀው ነገ ወደ መጡበት አካባቢ ይመለሳሉ ተብሏል።
በጦላይ የተሃድሶ ማዕከል የተገኙት ጠቅላይ ሚኒስትር ኃይለማርያም ደሳለኝ በዚሁ ጊዜ ባስተላለፉት መልዕክት ሰልጣኞች ወደ መደበኛ ህይወታቸው እንዲመለሱ መንግሥት ድጋፍ ያደርጋል ብለዋል። ሠራተኞች ወደ ሥራ ገበታቸው እንዲመለሱ የሚደረግ ሲሆን ተማሪዎች ደግሞ ትምህርታቸው እንዲቀጥሉ ይደረጋልም ነው ያሉት ጠቅላይ ሚኒስትር ኃይለማርያም።
ሥራ አጥ የሆኑ ወጣቶችም በራሳቸው መንገድ ሥራ እንዲፈጥሩ ድጋፍ እንደሚደረግላቸው ጠቅላይ ሚኒስትሩ ገልጸዋል። ሠላም፣ ልማትና ዴሞክራሲ የማይነጣጡ የአንድ አገር ህልውና መሰረት መሆናቸውን ወጣቱ ተገንዝቦ እነዚህን እሴቶች የመጠበቅ ኃላፊነቱን እንዲወጣ ጠይቀዋል። ወጣቱ ጥያቄ እንኳ ቢኖረው ሕገ-መንግሥቱ በሚፈቅደው መሰረት የማቅረብና መልስ የማግኘት መብት እንዳለው ገልጸዋል። ባለፉት ወራት እንደታየው ጥያቄውን በአመጽና ግርግር መጠየቁ ዋጋ እንዳስከፈለ ለማሳያነት በማንሳት።
እንዲህ ዓይነት ሁኔታ እንዳይደገም መንግሥትም የራሱን ስህተት ለማረም ጥልቅ ተሃድሶ እያደረገ መሆኑን ገልጸው ወጣቱም የራሱን ስህተት በማረም ከመንግሥት ጋር በመሆን ሠላሙን እንዲጠብቅ መልዕክት አስተላልፈዋል። የኦሮሚያ ክልል ርዕሰ መስተዳደር አቶ ለማ መገርሳ በበኩላቸው በክልሉ የሰፈነውን ሠላም ለማስቀጠል ከሁሉም የህብረተሰብ ክፍል ጋር በቅንጅት ሥራዎች ይሰራሉ ብለዋል።
ከወራት በፊት በተፈጠረው ሁከትና ግርግር ህይወት የጠፋ መሆኑን ገልጸው ለዘመናት የተለፋባቸው የህዝብ ኃብቶችም መውደማቸው አግባብ አለመሆኑን ተናግረዋል። ክልሉ ሊለወጥና ሊለማ የሚችለው የክልሉ ወጣቶች ለሠላም በጋራ ዘብ ሲቆሙ እንደሆነም አስምረውበታል።
አሁን ወደ |
KevinZ | null | @article{talmor2020olmpics,
title={oLMpics-on what language model pre-training captures},
author={Talmor, Alon and Elazar, Yanai and Goldberg, Yoav and Berant, Jonathan},
journal={Transactions of the Association for Computational Linguistics},
volume={8},
pages={743--758},
year={2020},
publisher={MIT Press}
} | This is a set of eight datasets from the paper "oLMpics - On what Language Model Pre-training Captures"
by Alon Talmor et al. | false | 2 | false | KevinZ/oLMpics | 2022-04-19T18:08:06.000Z | null | false | 7cffe68258589932209921818b9f9e56324850e3 | [] | [] | https://huggingface.co/datasets/KevinZ/oLMpics/resolve/main/README.md | oLMpics README
|
student | null | null | null | false | 2 | false | student/birds_400 | 2022-04-18T03:15:55.000Z | null | false | 976943672aec93d411e31de606fc103e4aa6073b | [] | [] | https://huggingface.co/datasets/student/birds_400/resolve/main/README.md | BIRDS 400 - species image classification
58,388 training images, 2,000 test images and 2,000 validation images, 224 X 224 X 3 in jpg format
A dataset of 400 bird species: 58,388 training images, 2,000 test images (5 per species) and 2,000 validation images (5 per species). This is a very high-quality dataset; there is only one bird in each image and the bird typically occupies at least 50% of the pixels, so even a moderately complex model can reach training and test accuracies in the 90% range.
All images are 224 X 224 X 3 color images in jpg format. The dataset includes a train set, a test set and a validation set, each containing 400 sub-directories, one per bird species. This structure is convenient if you use the Keras ImageDataGenerator: flow_from_directory creates the train, test and valid data generators. The dataset also includes a Bird Species.csv file with three columns: the filepaths column holds the path of each image file, the labels column holds the class name associated with the image, and the third column holds the bird species. If you read the csv file with pandas, you get a dataframe that can then be split into traindf, testdf and validdf dataframes to create your own train/test/validation division of the data.
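For illustration, here is a minimal sketch of the generator setup described above. The directory names (`train`, `test`, `valid`) and the csv filename are assumptions taken from the description and may differ in the actual download.
```python
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# One sub-directory per species is assumed under train/, test/ and valid/.
datagen = ImageDataGenerator(rescale=1.0 / 255)
train_gen = datagen.flow_from_directory("train", target_size=(224, 224), batch_size=32)
valid_gen = datagen.flow_from_directory("valid", target_size=(224, 224), batch_size=32)
test_gen = datagen.flow_from_directory("test", target_size=(224, 224), batch_size=32, shuffle=False)

# The csv described above can also be loaded and split manually.
df = pd.read_csv("Bird Species.csv")  # assumed filename; columns include filepaths and labels
```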
Note: the test and validation images in the dataset were hand-picked as the "best" images, so your model will likely obtain its highest accuracy scores on these sets rather than on test and validation sets you create yourself. The latter, however, give a more accurate picture of model performance on unseen images.
The images were collected from internet searches by species name. After the image files for a species were downloaded, they were checked for duplicates with a python duplicate-image-detector program developed by the dataset author, and all detected duplicates were removed so that no image is shared between the training, test and validation sets.
The images were then cropped so that the bird occupies at least 50% of the pixels and resized to 224 x 224 x 3 in jpg format. The cropping ensures that, when processed by a CNN, the images contain enough information to build a highly accurate classifier; even a moderately robust model should achieve training, validation and test accuracies in the high 90% range. Because the dataset is large, training with an image size of 150 x 150 x 3 is suggested to reduce training time. Within each species, files are numbered sequentially starting from one, so the test images are named 1.jpg to 5.jpg, and likewise for the validation images. Training images use zero-padded sequential numbering, e.g. 001.jpg, 002.jpg … 010.jpg, 011.jpg … 099.jpg, 100.jpg, 102.jpg; the zero padding preserves file order when used with python file functions and with Keras flow_from_directory.
The training set is not balanced: each species has a different number of files, but every species has at least 120 training images. This imbalance did not affect the author's kernel classifier, which reached over 98% accuracy on the test set.
One notable imbalance in the dataset is the ratio of male to female species images: roughly 85% of the images are of males and 15% of females. Typical males are far more diversely colored while females of a species are usually plainer, so male and female images may look completely different. Almost all test and validation images are of males, and as a result a classifier may not perform as well on images of female birds. |
student | null | null | null | false | 2 | false | student/CUB_birds_200_2011 | 2022-04-18T03:21:03.000Z | null | false | 254dd05ce1dd064b434436fb491836b3b489fd9b | [] | [] | https://huggingface.co/datasets/student/CUB_birds_200_2011/resolve/main/README.md | Introduction to the CUB200-2011 dataset:
This fine-grained dataset was introduced by Caltech in 2010 and is currently the benchmark image dataset for research on fine-grained classification and recognition.
The dataset contains 11,788 bird images covering 200 bird sub-categories; the training set has 5,994 images and the test set 5,794. Each image is provided with a class label, the bounding box of the bird, key part annotations, and bird attribute information.
The downloaded dataset contains the following files:
bounding_boxes.txt, classes.txt, image_class_labels.txt, images.txt, train_test_split.txt.
Here, bounding_boxes.txt gives the bounding-box information of the bird in each image; classes.txt lists the bird categories (200 classes in total); image_class_labels.txt maps each image label to its category label; images.txt maps each image label to its image path; and train_test_split.txt defines the split into training and test sets.
This post mainly splits the originally downloaded CUB200-2011 dataset into a training set and a test set according to the train_test_split.txt and images.txt files. Under the PyTorch deep-learning framework it is convenient to read the dataset with ImageFolder and DataLoader. The related python code is as follows:
(1) Code for splitting CUB200-2011 into training and test sets
# *_* coding: utf-8 *_*
# author --liming--
"""
Read images.txt to get the path of every image.
Read train_test_split.txt to get the train/test flag of every image: 1 = training set, 0 = test set.
"""
import os
import shutil
import config
import time

time_start = time.time()

# file paths
path_images = config.path + 'images.txt'
path_split = config.path + 'train_test_split.txt'
train_save_path = config.path + 'dataset/train/'
test_save_path = config.path + 'dataset/test/'

# read images.txt
images = []
with open(path_images, 'r') as f:
    for line in f:
        images.append(list(line.strip('\n').split(',')))

# read train_test_split.txt
split = []
with open(path_split, 'r') as f_:
    for line in f_:
        split.append(list(line.strip('\n').split(',')))

# split the dataset
num = len(images)  # total number of images
for k in range(num):
    image_path = images[k][0].split(' ')[1]   # e.g. 001.Black_footed_Albatross/xxx.jpg
    file_name = image_path.split('/')[0]      # species folder name
    if int(split[k][0][-1]) == 1:             # goes to the training set
        save_path = train_save_path
    else:                                     # goes to the test set
        save_path = test_save_path
    # create the species folder if it does not exist yet
    if not os.path.isdir(save_path + file_name):
        os.makedirs(save_path + file_name)
    shutil.copy(config.path + 'images/' + image_path,
                save_path + file_name + '/' + image_path.split('/')[1])
    print('%s done!' % image_path.split('/')[1])

time_end = time.time()
print('CUB200 train/test split finished, took %s seconds!' % (time_end - time_start))
config.py
# *_* coding: utf-8 *_*
# author --liming--
path = '/media/lm/C3F680DFF08EB695/细粒度数据集/birds/CUB200/CUB_200_2011/'
# Point the PyTorch loaders at the train/test folders created by the split script above.
ROOT_TRAIN = path + 'dataset/train/'
ROOT_TEST = path + 'dataset/test/'
BATCH_SIZE = 16
(2) Reading the dataset with PyTorch
# *_* coding: utf-8 *_*
# author --liming--
"""
Read the split dataset so that it can be consumed by PyTorch.
"""
import torch
import torchvision
import config
from torchvision import transforms

data_transform = transforms.Compose([
    transforms.Resize((224, 224)),  # fixed 224 x 224 so that batches can be stacked
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

def train_data_load():
    # training set
    root_train = config.ROOT_TRAIN
    train_dataset = torchvision.datasets.ImageFolder(root_train,
                                                     transform=data_transform)
    CLASS = train_dataset.class_to_idx
    print('Mapping between training labels and folder names:', CLASS)
    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=config.BATCH_SIZE,
                                               shuffle=True)
    return CLASS, train_loader

def test_data_load():
    # test set
    root_test = config.ROOT_TEST
    test_dataset = torchvision.datasets.ImageFolder(root_test,
                                                    transform=data_transform)
    CLASS = test_dataset.class_to_idx
    print('Mapping between test labels and folder names:', CLASS)
    test_loader = torch.utils.data.DataLoader(test_dataset,
                                              batch_size=config.BATCH_SIZE,
                                              shuffle=True)
    return CLASS, test_loader

if __name__ == '__main__':
    train_data_load()
    test_data_load() |
stepp1 | null | null | null | false | 10 | false | stepp1/tweet_emotion_intensity | 2022-04-18T20:49:56.000Z | null | false | 13e24921a5d4e04d6ab4bf21fada26c530e3db1f | [] | [] | https://huggingface.co/datasets/stepp1/tweet_emotion_intensity/resolve/main/README.md | # Tweet Emotion Intensity Dataset
## Papers:
* Emotion Intensities in Tweets. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the sixth joint conference on lexical and computational semantics (*Sem), August 2017, Vancouver, Canada.
* WASSA-2017 Shared Task on Emotion Intensity. Saif M. Mohammad and Felipe Bravo-Marquez. In Proceedings of the EMNLP 2017 Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media (WASSA), September 2017, Copenhagen, Denmark.
|
SocialGrep | null | null | Data from the humour subreddits /r/meirl and /r/me_irl, up to Apr 1 2022 | false | 2 | false | SocialGrep/the-reddit-irl-dataset | 2022-07-01T17:52:22.000Z | null | false | e078606bba4ff4735ffac758f8fb5e9d9045e4ba | [] | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original"
] | https://huggingface.co/datasets/SocialGrep/the-reddit-irl-dataset/resolve/main/README.md | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-irl-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-irl-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditirldataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditirldataset)
### Dataset Summary
Data from the humour subreddits /r/meirl and /r/me_irl, up to Apr 1 2022.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, they exist in two different files - even though many fields are shared. A short loading sketch follows below.
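A minimal loading sketch is given below. The configuration names (`posts`, `comments`) and the `train` split name are assumptions that mirror the two-file layout described here; verify them against the dataset repository before use.
```python
from datasets import load_dataset

# Assumed config names for the two files (posts and comments).
posts = load_dataset("SocialGrep/the-reddit-irl-dataset", "posts", split="train")
comments = load_dataset("SocialGrep/the-reddit-irl-dataset", "comments", split="train")

print(posts[0])     # one post data point
print(comments[0])  # one comment data point
```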
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Additional Information
### Licensing Information
CC-BY v4.0
|
Lexi | null | null | null | false | 2 | false | Lexi/spanextract | 2022-10-25T10:08:42.000Z | squad | false | 4544b33bcc8077384d02b422f91b5723b890f53c | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/Lexi/spanextract/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: squad
pretty_name: SQuAD
---
# Dataset Card for "squad"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 33.51 MB
- **Size of the generated dataset:** 85.75 MB
- **Total amount of disk used:** 119.27 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": 1,
"question": "Is this a test?",
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `int32` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|---| ---|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
wanyu | null | null | null | false | 2 | false | wanyu/IteraTeR_v2 | 2022-10-24T18:58:08.000Z | null | false | f549503ef55b76632d02538edb6987ea9f96f82c | [] | [
"arxiv:2204.03685",
"annotations_creators:crowdsourced",
"language_creators:found",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"source_datasets:original",
"task_categories:text2text-generation",
"language_bcp47:en-US",
"tags:conditional-text-generation",
"tags:text-editi... | https://huggingface.co/datasets/wanyu/IteraTeR_v2/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_v2
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision](https://arxiv.org/abs/2204.03685)
Authors: Wanyu Du*, Zae Myung Kim*, Vipul Raheja, Dhruv Kumar, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
Watch our system demonstration below!
[Watch the system demonstration video](https://www.youtube.com/watch?v=lK08tIpEoaE)
|
ciroy | null | null | null | false | 2 | false | ciroy/mlcommons-test | 2022-04-18T20:36:47.000Z | null | false | 6572d608a26a8f4fa2715ae788cb52b343e1e64d | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/ciroy/mlcommons-test/resolve/main/README.md | ---
license: cc-by-4.0
---
|
Bingsu | null | null | null | false | 868 | false | Bingsu/Cat_and_Dog | 2022-10-14T02:30:13.000Z | null | false | bdcf6d15638fb3534ed9a84fe8590c26a31cbdd5 | [] | [
"language:en",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:image-classification"
] | https://huggingface.co/datasets/Bingsu/Cat_and_Dog/resolve/main/README.md | ---
language:
- en
license:
- cc0-1.0
pretty_name: Cat and Dog
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
dataset_info:
features:
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
0: cat
1: dog
splits:
- name: train
num_bytes: 166451650.0
num_examples: 8000
- name: test
num_bytes: 42101650.0
num_examples: 2000
download_size: 227859268
dataset_size: 208553300.0
size_in_bytes: 436412568.0
---
## Dataset Description
- **Homepage:** [Cat and Dog](https://www.kaggle.com/datasets/tongpython/cat-and-dog)
- **Download Size** 217.30 MiB
- **Generated Size** 198.89 MiB
- **Total Size** 416.20 MiB
### Dataset Summary
A dataset from [kaggle](https://www.kaggle.com/datasets/tongpython/cat-and-dog) with duplicate data removed.
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
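For example, a small illustrative snippet of the recommended access pattern (it assumes `dataset` has already been loaded as shown in the loading example further below):
```python
# Index the split first, then the "image" column, so only this one file is decoded.
sample = dataset["train"][0]
image = sample["image"]   # a PIL.Image.Image
label = sample["labels"]  # 0 = cat, 1 = dog
```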
### Class Label Mappings:
```
{
"cat": 0,
"dog": 1,
}
```
### Data Splits
| | train | test |
|---------------|-------|-----:|
| # of examples | 8000 | 2000 |
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/Cat_and_Dog")
>>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'labels'],
num_rows: 8000
})
test: Dataset({
features: ['image', 'labels'],
num_rows: 2000
})
})
>>> dataset["train"].features
{'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=2, names=['cat', 'dog'], id=None)}
``` |
Bingsu | null | null | null | false | 2 | false | Bingsu/KSS_Dataset | 2022-07-02T00:10:10.000Z | null | false | 48fdfd7ab1dbc1a62e4e8a8b9f4c360259d51d3c | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:ko",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-to-speech"
] | https://huggingface.co/datasets/Bingsu/KSS_Dataset/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Korean Single Speaker Speech Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-to-speech
task_ids: []
---
## Dataset Description
- **Homepage:** [Korean Single Speaker Speech Dataset](https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset)
- **Repository:** [Kyubyong/kss](https://github.com/Kyubyong/kss)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
# Description of the original author
### KSS Dataset: Korean Single speaker Speech Dataset
KSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To the best of my knowledge, this is the first publicly available speech dataset for Korean.
### File Format
Each line in `transcript.v.1.3.txt` is delimited by `|` into six fields.
- A. Audio file path
- B. Original script
- C. Expanded script
- D. Decomposed script
- E. Audio duration (seconds)
- F. English translation
e.g.,
1/1_0470.wav|저는 보통 20분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|저는 보통 이십 분 정도 낮잠을 잡니다.|4.1|I usually take a nap for 20 minutes.
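As an illustration, a minimal parsing sketch for the transcript file is given below; `parse_transcript` is a hypothetical helper, not part of the original release, and assumes exactly the six `|`-delimited fields listed above.
```python
def parse_transcript(path):
    """Parse transcript.v.1.3.txt into dicts with the six fields described above."""
    entries = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            audio, original, expanded, decomposed, duration, english = line.rstrip("\n").split("|")
            entries.append({
                "audio_path": audio,
                "original_script": original,
                "expanded_script": expanded,
                "decomposed_script": decomposed,
                "duration": float(duration),
                "english_translation": english,
            })
    return entries
```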
### Specification
- Audio File Type: wav
- Total Running Time: 12+ hours
- Sample Rate: 44,100 KHZ
- Number of Audio Files: 12,853
- Sources
- |1| [Kyubyong Park, 500 Basic Korean Verbs, Tuttle Publishing, 2015.](https://www.amazon.com/500-Basic-Korean-Verbs-Comprehensive/dp/0804846057/ref=sr_1_1?s=books&ie=UTF8&qid=1522911616&sr=1-1&keywords=kyubyong+park)|
- |2| [Kyubyong Park, 500 Basic Korean Adjectives 2nd Ed., Youkrak, 2015.](http://www.hanbooks.com/500bakoad.html)|
- |3| [Kyubyong Park, Essential Korean Vocabulary, Tuttle Publishing, 2015.](https://www.amazon.com/Essential-Korean-Vocabulary-Phrases-Fluently/dp/0804843252/ref=sr_1_3?s=books&ie=UTF8&qid=1522911806&sr=1-3&keywords=kyubyong+park)|
- |4| [Kyubyong Park, Tuttle Learner's Korean-English Dictionary, Tuttle Publishing, 2012.](https://www.amazon.com/Tuttle-Learners-Korean-English-Dictionary-Essential/dp/0804841500/ref=sr_1_8?s=books&ie=UTF8&qid=1522911806&sr=1-8&keywords=kyubyong+park)|
### License
NC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you can freely use this.
### Citation
If you want to cite KSS Dataset, please refer to this:
Kyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset, 2018
### Reference
Check out [this](https://github.com/Kyubyong/kss) for a project using this KSS Dataset.
### Contact
You can contact me at kbpark.linguist@gmail.com.
April, 2018.
Kyubyong Park
### Dataset Summary
12,853 Korean audio files with transcription.
### Supported Tasks and Leaderboards
text-to-speech
### Languages
korean
## Dataset Structure
### Data Instances
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/KSS_Dataset")
>>> dataset["train"].features
{'audio': Audio(sampling_rate=44100, mono=True, decode=True, id=None),
'original_script': Value(dtype='string', id=None),
'expanded_script': Value(dtype='string', id=None),
'decomposed_script': Value(dtype='string', id=None),
'duration': Value(dtype='float32', id=None),
'english_translation': Value(dtype='string', id=None)}
```
```python
>>> dataset["train"][0]
{'audio': {'path': None,
'array': array([ 0.00000000e+00, 3.05175781e-05, -4.57763672e-05, ...,
0.00000000e+00, -3.05175781e-05, -3.05175781e-05]),
'sampling_rate': 44100},
'original_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'expanded_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'decomposed_script': '그는 괜찮은 척하려고 애쓰는 것 같았다.',
'duration': 3.5,
'english_translation': 'He seemed to be pretending to be okay.'}
```
### Data Splits
| | train |
|---------------|------:|
| # of examples | 12853 | |
hnchen | null | null | null | false | 2 | false | hnchen/Session-search | 2022-05-07T09:21:10.000Z | null | false | 547fc965adf45751ee1fb0881b0c921e66f250ab | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/hnchen/Session-search/resolve/main/README.md | ---
license: afl-3.0
---
|
samwell | null | null | null | false | 2 | false | samwell/en_twi | 2022-04-19T10:20:26.000Z | null | false | ec3653e28f1d3429a0977a7fcac32f308de33e02 | [] | [
"license:mit"
] | https://huggingface.co/datasets/samwell/en_twi/resolve/main/README.md | ---
license: mit
---
|
google | null | null | null | false | 2,578 | false | google/fleurs | 2022-11-07T18:07:23.000Z | null | false | ab9a4b3b06dd4c34e86e048ad3ad84f7b149af5e | [] | [
"arxiv:2205.12446",
"arxiv:2106.03193",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:afr",
"language:amh",
"language:ara",
"language:asm",
... | https://huggingface.co/datasets/google/fleurs/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- cat
- ceb
- cmn
- ces
- cym
- dan
- deu
- ell
- eng
- spa
- est
- fas
- ful
- fin
- tgl
- fra
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ind
- ibo
- isl
- ita
- jpn
- jav
- kat
- kam
- kea
- kaz
- khm
- kan
- kor
- ckb
- kir
- ltz
- lug
- lin
- lao
- lit
- luo
- lav
- mri
- mkd
- mal
- mon
- mar
- msa
- mlt
- mya
- nob
- npi
- nld
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- pus
- por
- ron
- rus
- bul
- snd
- slk
- slv
- sna
- som
- srp
- swe
- swh
- tam
- tel
- tgk
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zul
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
(XTREME-S) benchmark is a benchmark designed to evaluate speech representations
across languages, tasks, domains and data regimes. It covers 102 languages from
10+ language families, 3 different domains and 4 task families: speech recognition,
translation, classification and retrieval.'
tags:
- speech-recognition
---
# FLEURS
## Dataset Description
- **Fine-Tuning script:** [pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- **Paper:** [FLEURS: Few-shot Learning Evaluation of
Universal Representations of Speech](https://arxiv.org/abs/2205.12446)
- **Total amount of disk used:** ca. 350 GB
Fleurs is the speech version of the [FLoRes machine translation benchmark](https://arxiv.org/abs/2106.03193).
We use 2009 n-way parallel sentences from the FLoRes dev and devtest publicly available sets, in 102 languages.
Training sets have around 10 hours of supervision. Speakers of the train sets are different from the speakers of the dev/test sets. Multilingual fine-tuning is
used and the "unit error rate" (characters, signs) of all languages is averaged. Languages and results are also grouped into seven geographical areas:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
## Supported Tasks
### 1. Speech Recognition (ASR)
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/fleurs", "af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/fleurs", "all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
### 2. Language Identification
LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test split for LangID by merging all of them.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/fleurs", "all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
### 3. Retrieval
Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/fleurs", "af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/fleurs", "all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
## Dataset Structure
We show detailed information for the example configuration `af_za` of the dataset.
All other configurations have the same structure.
### Data Instances
**af_za**
- Size of downloaded dataset files: 1.47 GB
- Size of the generated dataset: 1 MB
- Total amount of disk used: 1.47 GB
An example of a data instance of the config `af_za` looks as follows:
```
{'id': 91,
'num_samples': 385920,
'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,
-1.1205673e-04, -8.4638596e-05, -1.2731552e-04], dtype=float32),
'sampling_rate': 16000},
'raw_transcription': 'Dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
'transcription': 'dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
'gender': 0,
'lang_id': 0,
'language': 'Afrikaans',
'lang_group_id': 3}
```
### Data Fields
The data fields are the same among all splits.
- **id** (int): ID of audio sample
- **num_samples** (int): Number of float values
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including loaded audio array, sampling rate and path ot audio
- **raw_transcription** (str): The non-normalized transcription of the audio file
- **transcription** (str): Transcription of the audio file
- **gender** (int): Class id of gender
- **lang_id** (int): Class id of language
- **lang_group_id** (int): Class id of language group
### Data Splits
Every config has a `"train"` split containing *ca.* 1000 examples, and `"validation"` and `"test"` splits each containing *ca.* 400 examples.
## Dataset Creation
We collect between one and three recordings for each sentence (2.3 on average), and build new train-dev-test splits with 1509, 150 and 350 sentences for
train, dev and test respectively.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in a lot more languages of the world. One of the goals is to give equal access to technologies like speech recognition or speech translation to everyone, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages.
### Other Known Limitations
The dataset has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
You can access the FLEURS paper at https://arxiv.org/abs/2205.12446.
Please cite the paper when referencing the FLEURS corpus as:
```
@article{fleurs2022arxiv,
title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
journal={arXiv preprint arXiv:2205.12446},
url = {https://arxiv.org/abs/2205.12446},
  year = {2022},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@aconneau](https://github.com/aconneau) for adding this dataset.
|
mteb | null | null | null | false | 395 | false | mteb/sprintduplicatequestions-pairclassification | 2022-09-27T19:15:57.000Z | null | false | 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea | [] | [
"language:en"
] | https://huggingface.co/datasets/mteb/sprintduplicatequestions-pairclassification/resolve/main/README.md | ---
language:
- en
--- |
bigscience-historical-texts | null | null | null | false | 2 | false | bigscience-historical-texts/Open_Medieval_French | 2022-04-22T13:01:01.000Z | null | false | 11843e744cba376f3d2da042cc771c8ed1e890e1 | [] | [] | https://huggingface.co/datasets/bigscience-historical-texts/Open_Medieval_French/resolve/main/README.md | # Open Medieval French
Source: [https://github.com/OpenMedFr/texts](https://github.com/OpenMedFr/texts) |
mteb | null | null | null | false | 327 | false | mteb/askubuntudupquestions-reranking | 2022-09-27T19:11:08.000Z | null | false | 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c | [] | [
"language:en"
] | https://huggingface.co/datasets/mteb/askubuntudupquestions-reranking/resolve/main/README.md | ---
language:
- en
--- |
taln-ls2n | null | @inproceedings{bougouin-etal-2013-topicrank,
title = "{T}opic{R}ank: Graph-Based Topic Ranking for Keyphrase Extraction",
author = "Bougouin, Adrien and
Boudin, Florian and
Daille, B{\'e}atrice",
booktitle = "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
month = oct,
year = "2013",
address = "Nagoya, Japan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I13-1062",
pages = "543--551",
} | Wikinews-fr-100 benchmark dataset for keyphrase extraction and generation. | false | 3 | false | taln-ls2n/wikinews-fr-100 | 2022-09-23T07:38:18.000Z | null | false | 40abf43ceb2f64198e063e981f46d58ef07c304d | [] | [
"annotations_creators:unknown",
"language_creators:unknown",
"language:fr",
"license:cc-by-4.0",
"multilinguality:monolingual",
"task_categories:text-generation",
"task_ids:keyphrase-generation",
"task_ids:keyphrase-extraction",
"size_categories:n<1K"
] | https://huggingface.co/datasets/taln-ls2n/wikinews-fr-100/resolve/main/README.md | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
license:
- cc-by-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- n<1K
pretty_name: Wikinews-fr-100
---
# Wikinews-fr-100 Benchmark Dataset for Keyphrase Generation
## About
Wikinews-fr-100 is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 100 news articles in French collected from [wikinews](https://fr.wikinews.org/wiki/Accueil).
Keyphrases were annotated by readers (students in computer science) in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the dataset can be found in the original paper [(Bougouin et al., 2013)][bougouin-2013].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Present reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
## Content and statistics
The dataset contains the following test split:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Test | 100 | 306.9 | 9.64 | 95.91 | 1.40 | 0.85 | 1.84 |
The following data fields are available (a short usage sketch follows the list):
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
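A short usage sketch based on the fields above (the `test` split name follows the statistics table; treat the exact call as a sketch rather than a guarantee):
```python
from datasets import load_dataset

dataset = load_dataset("taln-ls2n/wikinews-fr-100", split="test")

sample = dataset[0]
text = sample["title"] + " " + sample["abstract"]  # concatenation used for PRMU matching
print(sample["keyphrases"])  # reference keyphrases
print(sample["prmu"])        # P/R/M/U category for each keyphrase
```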
## References
- (Bougouin et al., 2013) Adrien Bougouin, Florian Boudin, and Béatrice Daille. 2013.
[TopicRank: Graph-Based Topic Ranking for Keyphrase Extraction][bougouin-2013].
In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 543–551, Nagoya, Japan. Asian Federation of Natural Language Processing.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[bougouin-2013]: https://aclanthology.org/I13-1062/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ |
mteb | null | null | null | false | 315 | false | mteb/scidocs-reranking | 2022-09-27T19:11:31.000Z | null | false | 56a6d0140cf6356659e2a7c1413286a774468d44 | [] | [
"language:en"
] | https://huggingface.co/datasets/mteb/scidocs-reranking/resolve/main/README.md | ---
language:
- en
--- |
mteb | null | null | null | false | 306 | false | mteb/stackoverflowdupquestions-reranking | 2022-09-27T19:13:01.000Z | null | false | ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9 | [] | [
"language:en"
] | https://huggingface.co/datasets/mteb/stackoverflowdupquestions-reranking/resolve/main/README.md | ---
language:
- en
--- |
JEFFREY-VERDIERE | null | null | null | false | 2 | false | JEFFREY-VERDIERE/train_X_PJT_NLP | 2022-04-19T13:34:19.000Z | null | false | 3d3a799e2c93779d80e316fb5f2991332ef55fe3 | [] | [] | https://huggingface.co/datasets/JEFFREY-VERDIERE/train_X_PJT_NLP/resolve/main/README.md | |
taln-ls2n | null | @inproceedings{boudin-2013-taln,
title = "{TALN} Archives : a digital archive of {F}rench research articles in Natural Language Processing ({TALN} Archives : une archive num{\'e}rique francophone des articles de recherche en Traitement Automatique de la Langue) [in {F}rench]",
author = "Boudin, Florian",
booktitle = "Proceedings of TALN 2013 (Volume 2: Short Papers)",
month = jun,
year = "2013",
address = "Les Sables d{'}Olonne, France",
publisher = "ATALA",
url = "https://aclanthology.org/F13-2001",
pages = "507--514",
} | TALN Archives benchmark dataset for keyphrase extraction and generation. | false | 4 | false | taln-ls2n/taln-archives | 2022-09-23T07:58:07.000Z | null | false | 986fd2f2865cc142a9177e27e11b5585bd0c885a | [] | [
"annotations_creators:unknown",
"language_creators:unknown",
"language:fr",
"language:en",
"license:cc-by-4.0",
"multilinguality:multilingual",
"task_categories:text-generation",
"task_ids:keyphrase-generation",
"task_ids:keyphrase-extraction",
"size_categories:1K<n<10K"
] | https://huggingface.co/datasets/taln-ls2n/taln-archives/resolve/main/README.md | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 1K<n<10K
pretty_name: TALN-Archives
---
# TALN-Archives Benchmark Dataset for Keyphrase Generation
## About
TALN-Archives is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 1207 abstracts of scientific papers in French collected from the [TALN Archives](http://talnarchives.atala.org/).
Keyphrases were annotated by authors in an uncontrolled setting (that is, not limited to thesaurus entries).
English translations of title/abstract/keyphrases are also available for a subset of the documents (456 fully- and 719 partially-translated documents), making it possible to experiment with cross-lingual / multilingual keyphrase generation.
Details about the dataset can be found in the original paper [(Boudin, 2013)][boudin-2013].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. <u>P</u>resent reference keyphrases are also ordered by their order of appearance in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Snowball stemmer implementation for French provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
## Content and statistics
The dataset contains the following test split:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Test | 1207 | 138.3 | 4.12 | 53.83 | 12.32 | 21.69 | 12.16 |
The following data fields are available (a short usage sketch follows the list):
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **translation**: translations of title, abstract and keyphrases in English if available.
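A short usage sketch focused on the cross-lingual annotations (field names follow the list above; the inner structure of `translation` is an assumption, so inspect it before relying on specific keys):
```python
from datasets import load_dataset

dataset = load_dataset("taln-ls2n/taln-archives", split="test")

sample = dataset[0]
print(sample["title"])       # French title
print(sample["keyphrases"])  # author-assigned keyphrases
# `translation` holds the English title/abstract/keyphrases when available;
# its exact structure is assumed here, so print it to check.
print(sample["translation"])
```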
## References
- (Boudin, 2013) Florian Boudin. 2013.
[TALN Archives : a digital archive of French research articles in Natural Language Processing (TALN Archives : une archive numérique francophone des articles de recherche en Traitement Automatique de la Langue) [in French]][boudin-2013].
In Proceedings of TALN 2013 (Volume 2: Short Papers), pages 507–514, Les Sables d’Olonne, France. ATALA.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[boudin-2013]: https://aclanthology.org/F13-2001/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ |
mteb | null | null | null | false | 419 | false | mteb/sickr-sts | 2022-09-27T19:13:22.000Z | null | false | 20a6d6f312dd54037fe07a32d58e5e168867909d | [] | [
"language:en"
] | https://huggingface.co/datasets/mteb/sickr-sts/resolve/main/README.md | ---
language:
- en
--- |
mnazari | null | null | null | false | 1 | false | mnazari/urmi-assyrian-voice | 2022-04-20T21:04:42.000Z | null | false | 1311b5392e225619ed0fd951f85437f755d6f2e9 | [] | [
"license:cc0-1.0",
"annotations_creators:Geoffrey Khan",
"annotations_creators:Matthew Nazari",
"language:aii",
"size_category:n<1K",
"task_categories:automatic-speech-recognition"
] | https://huggingface.co/datasets/mnazari/urmi-assyrian-voice/resolve/main/README.md | ---
license: cc0-1.0
pretty_name: Assyrian
annotations_creators:
- Geoffrey Khan
- Matthew Nazari
language: aii
size_category: n<1K
task_categories:
- automatic-speech-recognition
---
# Dataset Card for urmi_assyrian_voice
## Dataset Description
The Urmi Assyrian Voice dataset is parsed from the research and fieldwork of [Geoffrey Khan](https://cambridge.academia.edu/GeoffreyKhan), which is made public through the [North-Eastern Neo-Aramaic Database Project](https://nena.ames.cam.ac.uk/dialects/225/audio). Annotation corrections as well as parsing were performed by Matthew Nazari.
## Dataset Summary
This dataset contains labelled audio examples of the Urmi dialect of North-Eastern Neo-Aramaic. The dataset consists of only one female speaker in her late seventies. Note that you will need to normalize the utterances for machine learning tasks (clean punctuation, remove accents, etc.); a minimal normalization sketch follows below. |
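The snippet below is only an illustrative assumption about what cleaning punctuation and removing accents might look like, not an official preprocessing recipe for this dataset.
```python
import re
import unicodedata

def normalize_utterance(text: str) -> str:
    # Strip combining marks (accents/diacritics) via Unicode decomposition.
    text = unicodedata.normalize("NFD", text)
    text = "".join(ch for ch in text if unicodedata.category(ch) != "Mn")
    # Remove punctuation and collapse whitespace.
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()
```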