author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggan | null | null | null | false | 7 | false | huggan/few-shot-anime-face | 2022-04-12T14:08:09.000Z | null | false | 07ca20da8baf5a0e04029236a7d9de706e05966b | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-anime-face/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 541 | false | huggan/pokemon | 2022-04-01T11:50:45.000Z | null | false | 649a061a8b9fc03aad2d3abd56c2e9ce42da42fd | [] | [] | https://huggingface.co/datasets/huggan/pokemon/resolve/main/README.md | Source: https://www.kaggle.com/datasets/djilax/pkmn-image-dataset |
huggan | null | null | null | false | 6 | false | huggan/few-shot-art-painting | 2022-04-12T14:06:24.000Z | null | false | 623cf5299032a13f955fef4259db0a794b42c8d0 | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-art-painting/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/few-shot-fauvism-still-life | 2022-04-12T14:07:31.000Z | null | false | ab6960d72dde5d5880a24e3580dc4af97f61436b | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-fauvism-still-life/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/few-shot-flat-colored-patterns | 2022-04-12T14:07:41.000Z | null | false | 9d26da16edb06b659c3a2ede3660cefcd23168af | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-flat-colored-patterns/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/few-shot-moongate | 2022-04-12T14:07:11.000Z | null | false | a56f84f9de3496b3d492d960611c54546f6b89dc | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-moongate/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 61 | false | huggan/few-shot-pokemon | 2022-04-12T14:06:36.000Z | null | false | d5aca3bdb21bff3e20c0e78b614fa114477118fc | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-pokemon/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/few-shot-shells | 2022-04-12T14:07:59.000Z | null | false | 592999df611c39c3cac8774c53f3c59f819a3eef | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-shells/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
huggan | null | null | null | false | 1 | false | huggan/few-shot-skulls | 2022-04-12T14:03:56.000Z | null | false | fef3bf060bf60fc11be5d4d651c6a5634d5eaf56 | [] | [
"arxiv:2101.04775"
] | https://huggingface.co/datasets/huggan/few-shot-skulls/resolve/main/README.md | # Citation
```
@article{DBLP:journals/corr/abs-2101-04775,
author = {Bingchen Liu and
Yizhe Zhu and
Kunpeng Song and
Ahmed Elgammal},
title = {Towards Faster and Stabilized {GAN} Training for High-fidelity Few-shot
Image Synthesis},
journal = {CoRR},
volume = {abs/2101.04775},
year = {2021},
url = {https://arxiv.org/abs/2101.04775},
eprinttype = {arXiv},
eprint = {2101.04775},
timestamp = {Fri, 22 Jan 2021 15:16:00 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-04775.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
hackathon-pln-es | null | null | null | false | 1 | false | hackathon-pln-es/es_tweets_laboral | 2022-10-25T10:03:39.000Z | null | false | 0689c984ee2d9fb5ffd7c91f0cfeb7bbaa43f2f9 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:es",
"license:unknown",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:intent-classification"
] | https://huggingface.co/datasets/hackathon-pln-es/es_tweets_laboral/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- unknown
multilinguality:
- monolingual
pretty_name: "Tweets en español denuncia laboral"
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
---
# Dataset Card for [es_tweets_laboral]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
Dataset created by @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
Labeled by @DanielaGarciaQuezada
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
null | null | @InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
} | MetaShift is a dataset of datasets for evaluating distribution shifts and training conflicts.
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes.
It was created for understanding the performance of a machine learning model across diverse data distributions. | false | 12 | false | metashift | 2022-11-03T15:51:00.000Z | metashift | false | 0514cb74c928187916271ea7104ac1a1a138d36e | [] | [
"arxiv:2202.06523",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:image-classification",
"task_categories:other",
"task_ids:multi... | https://huggingface.co/datasets/metashift/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: MetaShift
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
- other
task_ids:
- multi-label-image-classification
paperswithcode_id: metashift
tags:
- domain-generalization
dataset_info:
features:
- name: image_id
dtype: string
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: cat
1: dog
2: bus
3: truck
4: elephant
5: horse
6: bowl
7: cup
- name: context
dtype: string
config_name: metashift
splits:
- name: train
num_bytes: 16333509
num_examples: 86808
download_size: 21878013674
dataset_size: 16333509
---
# Dataset Card for MetaShift
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [MetaShift homepage](https://metashift.readthedocs.io/)
- **Repository:** [MetaShift repository](https://github.com/Weixin-Liang/MetaShift)
- **Paper:** [MetaShift paper](https://arxiv.org/abs/2202.06523v1)
- **Point of Contact:** [Weixin Liang](mailto:wxliang@stanford.edu)
### Dataset Summary
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.
The authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.
The key idea is to cluster images using their metadata, which provides context for each image.
For example: cats with cars or cats in a bathroom.
The main advantage is that the dataset contains many more coherent sets of data compared to other benchmarks.
Two important benefits of MetaShift:
- Contains orders of magnitude more natural data shifts than previously available.
- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.
### Dataset Usage
The dataset has the following configuration parameters:
- selected_classes: `list[string]`, optional, list of the classes to generate the MetaShift dataset for. If `None`, the list is equal to `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`.
- attributes_dataset: `bool`, default `False`, if `True`, the script generates the MetaShift-Attributes dataset. Refer [MetaShift-Attributes Dataset](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) for more information.
- attributes: `list[string]`, optional, list of attributes classes included in the Attributes dataset. If `None` and `attributes_dataset` is `True`, it's equal to `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`. You can find the full attribute ontology in the above link.
- with_image_metadata: `bool`, default `False`, whether to include image metadata. If set to `True`, this will give additional metadata about each image. See [Scene Graph](https://cs.stanford.edu/people/dorarad/gqa/download.html) for more information.
- image_subset_size_threshold: `int`, default `25`, the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.
- min_local_groups: `int`, default `5`, the minimum number of local groups required to be considered an object class.
Consider the following examples to get an idea of how you can use the configuration parameters:
1. To generate the MetaShift Dataset:
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'])
```
The full object vocabulary and its hierarchy can be seen [here](https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/class_hierarchy.json).
The default classes are `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`
2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes):
```python
load_dataset("metashift", attributes_dataset = True, attributes=["dog(smiling)", "cat(resting)"])
```
The default attributes are `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`
3. To generate the dataset with additional image metadata information:
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'], with_image_metadata=True)
```
4. Further, you can specify your own configuration different from those used in the papers as follows:
```python
load_dataset("metashift", image_subset_size_threshold=20, min_local_groups=3)
```
### Dataset Meta-Graphs
From the MetaShift GitHub repo:
> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.
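The overlap coefficient used for the edge weights has a simple closed form, |A ∩ B| / min(|A|, |B|). A minimal sketch of that computation (the function name and the image-ID sets below are illustrative, not taken from the MetaShift codebase):

```python
def overlap_coefficient(a, b):
    """Szymkiewicz-Simpson overlap coefficient between two sets of image IDs."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Hypothetical context-based subsets of the "cat" class
cat_with_sofa = {"2411520", "2401643", "2365745", "2365746"}
cat_with_bed = {"2365745", "2365746", "2365747"}
print(overlap_coefficient(cat_with_sofa, cat_with_bed))  # 2 shared / min(4, 3) = 0.666...
```

A high coefficient means one subset is nearly contained in the other, which is why intra-community edges in the meta-graph cluster visually similar contexts together.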
The following are the meta-graphs for the default classes; they were generated using the `generate_full_MetaShift.py` file.
<p align='center'>
<img width='75%' src='https://i.imgur.com/wrpezCK.jpg' alt="Cat Meta-graph" /> </br>
<b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FhuAwfT.jpg' alt="Dog Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FFCcN6L.jpg' alt="Bus Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Bus” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/rx5b5Vo.jpg' alt="Elephant Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Elephant" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/6f6U3S8.jpg' alt="Horse Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Horse" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/x9zhQD7.jpg' alt="Truck Meta-graph"/> </br>
<b>Figure: Meta-graph for the Truck class. </b>
</p>
### Supported Tasks and Leaderboards
From the paper:
> MetaShift supports evaluation on both :
> - domain generalization and subpopulation shifts settings,
> - assessing training conflicts.
### Languages
All the classes and subsets use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the MetaShift dataset is provided below:
```
{
'image_id': '2411520',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7F99115B8D90>,
'label': 2,
'context': 'fence'
}
```
A sample from the MetaShift-Attributes dataset is provided below:
```
{
'image_id': '2401643',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FED371CE350>
'label': 0
}
```
The format of the dataset with image metadata included by passing `with_image_metadata=True` to `load_dataset` is provided below:
```
{
'image_id': '2365745',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FEBCD39E4D0>
'label': 0,
'context': 'ground',
'width': 500,
'height': 333,
'location': None,
'weather': None,
'objects':
{
'object_id': ['2676428', '3215330', '1962110', '2615742', '3246028', '3232887', '3215329', '1889633', '3882667', '3882663', '1935409', '3882668', '3882669'],
'name': ['wall', 'trailer', 'floor', 'building', 'walkway', 'head', 'tire', 'ground', 'dock', 'paint', 'tail', 'cat', 'wall'],
'x': [194, 12, 0, 5, 3, 404, 27, 438, 2, 142, 324, 328, 224],
'y': [1, 7, 93, 10, 100, 46, 215, 139, 90, 172, 157, 45, 246],
'w': [305, 477, 499, 492, 468, 52, 283, 30, 487, 352, 50, 122, 274],
'h': [150, 310, 72, 112, 53, 59, 117, 23, 240, 72, 107, 214, 85],
'attributes': [['wood', 'green'], [], ['broken', 'wood'], [], [], [], ['black'], [], [], [], ['thick'], ['small'], ['blue']],
'relations': [{'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['to the left of'], 'object': ['3882669']}, {'name': ['to the right of'], 'object': ['3882668']}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['perched on', 'to the left of'], 'object': ['3882667', '1889633']}, {'name': ['to the right of'], 'object': ['3215329']}]
}
}
```
### Data Fields
- `image_id`: unique numeric ID of the image in the base Visual Genome dataset.
- `image`: a `PIL.Image.Image` object containing the image.
- `label`: an integer classification label.
- `context`: the context in which the label is seen. A given label can have multiple contexts.
Image Metadata format can be seen [here](https://cs.stanford.edu/people/dorarad/gqa/download.html) and a sample above has been provided for reference.
### Data Splits
All the data is contained in the training set.
## Dataset Creation
### Curation Rationale
From the paper:
> We present MetaShift as an important resource for studying the behavior of
ML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate
its performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say “cat”, we pull out all cat images and proceed with generating candidate subsets, constructing meta-graphs and then quantifying distances of distribution shifts.
#### Who are the source language producers?
[More Information Needed]
### Annotations
The MetaShift dataset uses Visual Genome as its base, therefore the annotation process is the same as for the Visual Genome dataset.
#### Annotation process
From the Visual Genome paper:
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.
#### Who are the annotators?
From the Visual Genome paper:
> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the
base dataset of our MetaShift. Potential concerns include minority groups being under-represented
in certain classes (e.g., women with snowboard), or annotation bias where people in images are
by default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,
quantifying, and mitigating biases in general computer vision datasets can help with addressing this
potential negative societal impact.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
From the paper:
> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108,077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).
### Citation Information
```bibtex
@InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. |
jglaser | null | null | null | false | 1 | false | jglaser/pdb_protein_ligand_complexes | 2022-10-13T15:09:57.000Z | null | false | 2eec8352d97326bcba1de4687668e2602b22c110 | [] | [
"tags:proteins",
"tags:molecules",
"tags:chemistry",
"tags:SMILES",
"tags:complex structures"
] | https://huggingface.co/datasets/jglaser/pdb_protein_ligand_complexes/resolve/main/README.md | ---
tags:
- proteins
- molecules
- chemistry
- SMILES
- complex structures
---
## How to use the data sets
This dataset contains about 36,000 unique pairs of protein sequences and ligand SMILES, and the coordinates
of their complexes from the PDB.
SMILES are assumed to be tokenized by the regex from P. Schwaller.
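A minimal sketch of that tokenization. The regex below is reproduced from memory of Schwaller et al.'s Molecular Transformer code, so verify it against the original source before relying on it:

```python
import re

# Atom-level SMILES regex attributed to P. Schwaller (assumed form, not
# copied from this dataset's preprocessing scripts).
SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles):
    """Split a SMILES string into tokens; bracket atoms like [NH3+] stay whole."""
    tokens = SMILES_REGEX.findall(smiles)
    assert "".join(tokens) == smiles, "regex dropped characters"
    return tokens

print(tokenize_smiles("[NH3+]CC(=O)O"))  # protonated glycine; bracket atom kept intact
```

The round-trip assertion is a cheap guard: if the regex silently skips a character, the joined tokens no longer reproduce the input.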
## Ligand selection criteria
Only ligands
- that have at least 3 atoms,
- a molecular weight >= 100 Da,
- and which are not among the 280 most common ligands in the PDB (this includes common additives like PEG, ADP, ...)
are considered.
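A sketch of how such a filter might look. The ligand IDs, atom counts, weights, and the small exclusion set below are illustrative; the actual 280-ligand exclusion list lives in the dataset's preprocessing scripts:

```python
# Illustrative subset of the ~280 excluded common ligands/additives
COMMON_LIGANDS = {"PEG", "ADP", "GOL", "EDO", "SO4"}

def keep_ligand(lig_id, n_atoms, mol_weight_da):
    """Apply the three selection criteria listed above."""
    return (
        n_atoms >= 3
        and mol_weight_da >= 100.0
        and lig_id not in COMMON_LIGANDS
    )

print(keep_ligand("VTY", 14, 151.2))  # True: passes all three criteria (values illustrative)
print(keep_ligand("PEG", 17, 150.2))  # False: excluded as a common additive
print(keep_ligand("CL", 1, 35.5))     # False: too few atoms and too light
```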
### Use the already preprocessed data
Load a test/train split using
```
import pandas as pd
train = pd.read_pickle('data/pdb_train.p')
test = pd.read_pickle('data/pdb_test.p')
```
Receptor features contain protein frames and side chain angles in OpenFold/AlphaFold format.
Ligand tokens which do not correspond to atoms have `nan` as their coordinates.
Documentation by example:
```
>>> import pandas as pd
>>> test = pd.read_pickle('data/pdb_test.p')
>>> test.head(5)
pdb_id lig_id ... ligand_xyz_2d ligand_bonds
0 7k38 VTY ... [(-2.031355975502858, -1.6316778784387098, 0.0... [(0, 1), (1, 4), (4, 5), (5, 10), (10, 9), (9,...
1 6prt OWA ... [(4.883261310160714, -0.37850716807626705, 0.0... [(11, 18), (18, 20), (20, 8), (8, 7), (7, 2), ...
2 4lxx FNF ... [(8.529427756002057, 2.2434809270065372, 0.0),... [(51, 49), (49, 48), (48, 46), (46, 53), (53, ...
3 4lxx FON ... [(-10.939694946697701, -1.1876214529096956, 0.... [(13, 1), (1, 0), (0, 3), (3, 4), (4, 7), (7, ...
4 7bp1 CAQ ... [(-1.9485571585149868, -1.499999999999999, 0.0... [(4, 3), (3, 1), (1, 0), (0, 7), (7, 9), (7, 6...
[5 rows x 8 columns]
>>> test.columns
Index(['pdb_id', 'lig_id', 'seq', 'smiles', 'receptor_features', 'ligand_xyz',
'ligand_xyz_2d', 'ligand_bonds'],
dtype='object')
>>> test.iloc[0]['receptor_features']
{'rigidgroups_gt_frames': array([[[[-5.3122622e-01, 2.0922849e-01, -8.2098854e-01,
1.7295000e+01],
[-7.1005428e-01, -6.3858479e-01, 2.9670244e-01,
-9.1399997e-01],
[-4.6219218e-01, 7.4056256e-01, 4.8779655e-01,
3.3284000e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.0000000e+00]],
...
[[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
-3.5030000e+00],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
2.6764999e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.5136000e+01],
[ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00,
1.0000000e+00]]]], dtype=float32), 'torsion_angles_sin_cos': array([[[-1.90855725e-09, 3.58859784e-02],
[ 1.55730803e-01, 9.87799530e-01],
[ 6.05505241e-01, -7.95841312e-01],
...,
[-2.92459433e-01, -9.56277928e-01],
[ 9.96634814e-01, -8.19697779e-02],
[ 0.00000000e+00, 0.00000000e+00]],
...
[[ 2.96455977e-04, -9.99999953e-01],
[-8.15660990e-01, 5.78530158e-01],
[-3.17915569e-01, 9.48119024e-01],
...,
[ 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00]]])}
>>> test.iloc[0]['receptor_features'].keys()
dict_keys(['rigidgroups_gt_frames', 'torsion_angles_sin_cos'])
>>> test.iloc[0]['ligand_xyz']
[(22.289, 11.985, 9.225), (21.426, 11.623, 7.959), (nan, nan, nan), (nan, nan, nan), (21.797, 11.427, 6.574), (20.556, 11.56, 5.792), (nan, nan, nan), (20.507, 11.113, 4.552), (nan, nan, nan), (19.581, 10.97, 6.639), (20.107, 10.946, 7.954), (nan, nan, nan), (nan, nan, nan), (19.645, 10.364, 8.804)]
```
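Because non-atom tokens carry `nan` coordinates, pairing SMILES tokens with `ligand_xyz` usually starts by masking those rows. A minimal sketch:

```python
import math

def real_atom_xyz(ligand_xyz):
    """Drop (nan, nan, nan) rows, i.e. tokens that are not atoms."""
    return [xyz for xyz in ligand_xyz if not any(math.isnan(c) for c in xyz)]

# First four entries of the ligand_xyz example above
ligand_xyz = [
    (22.289, 11.985, 9.225),
    (21.426, 11.623, 7.959),
    (float("nan"), float("nan"), float("nan")),
    (float("nan"), float("nan"), float("nan")),
]
print(real_atom_xyz(ligand_xyz))  # keeps only the first two triples
```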
### Manual update from PDB
```
# download the PDB archive into folder pdb/
sh rsync.sh 24 # number of parallel download processes
# extract sequences and coordinates in parallel
sbatch pdb.slurm
# or
mpirun -n 42 parse_complexes.py # desired number of tasks
```
|
hackathon-pln-es | null | null | null | false | 2 | false | hackathon-pln-es/biomed_squad_es_v2 | 2022-04-03T17:46:58.000Z | null | false | e3789d92458aeb34a189a1fff9863e6d248d891a | [] | [
"arxiv:1912.05200"
] | https://huggingface.co/datasets/hackathon-pln-es/biomed_squad_es_v2/resolve/main/README.md | # Dataset Card for biomed_squad_es_v2
This Dataset was created as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a subset of the [dev squad_es (v2) dataset](https://huggingface.co/datasets/squad_es) (automatic translation of the Stanford Question Answering Dataset v2 into Spanish) containing questions related to the biomedical domain.
License, distribution and usage conditions of the original Squad_es Dataset apply.
### Languages
Spanish
## Dataset Structure
### Data Fields
```
{'answers': {'answer_start': [343, 343, 343],
'text': ['diez veces su propio peso',
'diez veces su propio peso',
'diez veces su propio peso']},
'context': 'Casi todos los ctenóforos son depredadores, tomando presas que van desde larvas microscópicas y rotíferos a los adultos de pequeños crustáceos; Las excepciones son los juveniles de dos especies, que viven como parásitos en las salpas en las que los adultos de su especie se alimentan. En circunstancias favorables, los ctenóforos pueden comer diez veces su propio peso en un día. Sólo 100-150 especies han sido validadas, y posiblemente otras 25 no han sido completamente descritas y nombradas. Los ejemplos de libros de texto son cidipidos con cuerpos en forma de huevo y un par de tentáculos retráctiles bordeados con tentilla ("pequeños tentáculos") que están cubiertos con colúnculos, células pegajosas. El filo tiene una amplia gama de formas corporales, incluyendo los platyctenidos de mar profundo, en los que los adultos de la mayoría de las especies carecen de peines, y los beroides costeros, que carecen de tentáculos. Estas variaciones permiten a las diferentes especies construir grandes poblaciones en la misma área, porque se especializan en diferentes tipos de presas, que capturan por una amplia gama de métodos que utilizan las arañas.',
'id': '5725c337271a42140099d165',
'question': '¿Cuánta comida come un Ctenophora en un día?',
'title': 'Ctenophora'}
```
### Data Splits
Validation: 1137 examples
### Citation Information
```
@article{2016arXiv160605250R,
author = {Casimiro Pio , Carrino and Marta R. , Costa-jussa and Jose A. R. , Fonollosa},
title = "{Automatic Spanish Translation of the SQuAD Dataset for Multilingual
Question Answering}",
journal = {arXiv e-prints},
year = 2019,
eid = {arXiv:1912.05200v1},
pages = {arXiv:1912.05200v1},
archivePrefix = {arXiv},
eprint = {1912.05200v2},
}
```
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo) |
iluvvatar | null | null | null | false | 1 | false | iluvvatar/RuNNE | 2022-10-23T05:35:11.000Z | null | false | 3b520ea6e4735e727fcbd0f0ebe8a84e51b0ea42 | [] | [
"arxiv:2108.13112",
"language:ru",
"multilinguality:monolingual",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/iluvvatar/RuNNE/resolve/main/README.md | ---
language:
- ru
multilinguality:
- monolingual
pretty_name: RuNNE
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
---
# RuNNE dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
Part of NEREL dataset (https://arxiv.org/abs/2108.13112), a Russian dataset
for named entity recognition and relation extraction, used in RuNNE (2022)
competition (https://github.com/dialogue-evaluation/RuNNE).
Entities may be nested (see https://arxiv.org/abs/2108.13112).
Entity types list:
* AGE
* AWARD
* CITY
* COUNTRY
* CRIME
* DATE
* DISEASE
* DISTRICT
* EVENT
* FACILITY
* FAMILY
* IDEOLOGY
* LANGUAGE
* LAW
* LOCATION
* MONEY
* NATIONALITY
* NUMBER
* ORDINAL
* ORGANIZATION
* PENALTY
* PERCENT
* PERSON
* PRODUCT
* PROFESSION
* RELIGION
* STATE_OR_PROVINCE
* TIME
* WORK_OF_ART
## Dataset Structure
There are two "configs" or "subsets" of the dataset.
Using `load_dataset('MalakhovIlya/RuNNE', 'ent_types')['ent_types']`
you can download the list of entity types:

    Dataset({
        features: ['type'],
        num_rows: 29
    })
Using `load_dataset('MalakhovIlya/RuNNE', 'data')` or `load_dataset('MalakhovIlya/RuNNE')`
you can download the data itself (a `DatasetDict`).

The dataset consists of 3 splits: "train", "test" and "dev". Each split contains text documents. The "train" and "test" splits also contain annotated entities; "dev" doesn't.

Each entity is represented by a string of the following format: "\<start> \<stop> \<type>", where \<start> is the position of the entity's first symbol in the text, \<stop> is the position of its last symbol, and \<type> is one of the aforementioned entity types.
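As a minimal sketch (not part of the original dataset card), an annotation string in this format can be decoded as follows, assuming \<stop> is the inclusive position of the entity's last symbol as described above:

```python
def parse_entity(annotation: str, text: str):
    """Decode a "<start> <stop> <type>" annotation into a span and its surface form."""
    start_s, stop_s, ent_type = annotation.split(" ", 2)
    start, stop = int(start_s), int(stop_s)
    # <stop> is the position of the last symbol, so the slice end is stop + 1.
    return start, stop, ent_type, text[start:stop + 1]

text = "Moscow is the capital of Russia."
print(parse_entity("0 5 CITY", text))  # (0, 5, 'CITY', 'Moscow')
```

Nested entities simply yield overlapping `(start, stop)` spans, so no extra handling is needed at parse time.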
P.S.
Original NEREL dataset also contains relations, events and linked entities, but they were not added here yet ¯\\\_(ツ)_/¯
## Citation Information
    @article{Artemova2022runne,
        title={{RuNNE-2022 Shared Task: Recognizing Nested Named Entities}},
        author={Artemova, Ekaterina and Zmeev, Maksim and Loukachevitch, Natalia and Rozhkov, Igor and Batura, Tatiana and Braslavski, Pavel and Ivanov, Vladimir and Tutubalina, Elena},
        journal={Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference "Dialog"},
        year={2022}
    }
## Contacts
Malakhov Ilya
Telegram - https://t.me/noname_4710
|
DMetaSoul | null | null | null | false | 1 | false | DMetaSoul/chinese-semantic-textual-similarity | 2022-04-02T10:38:47.000Z | null | false | 64fd51e4bb4d4d41e59df46d597725468c716c97 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/DMetaSoul/chinese-semantic-textual-similarity/resolve/main/README.md | ---
license: apache-2.0
---
为了对 like-BERT 预训练模型进行 fine-tune 调优和评测以得到更好的文本表征模,对业界开源的语义相似(STS)、自然语言推理(NLI)、问题匹配(QMC)以及相关性等数据集进行了搜集整理,具体介绍如下:
| 类型 | 数据集 | 简介 | 规模 |
| -------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- |
| **通用领域** | [OCNLI](https://www.cluebenchmarks.com/introduce.html) | 原生中文自然语言推理数据集,是第一个非翻译的、使用原生汉语的大型中文自然语言推理数据集。OCNLI为中文语言理解基准测评(CLUE)的一部分。 | **Train**: 50437, **Dev**: 2950 |
| | [CMNLI](https://github.com/pluto-junzeng/CNSD) | 翻译自英文自然语言推理数据集 XNLI 和 MNLI,曾经是中文语言理解基准测评(CLUE)的一部分,现在被 OCNLI 取代。 | **Train**: 391783, **Dev**: 12241 |
| | [CSNLI](https://github.com/pluto-junzeng/CNSD) | 翻译自英文自然语言推理数据集 SNLI。 | **Train**: 545833, **Dev**: 9314, **Test**: 9176 |
| | [STS-B-Chinese](https://github.com/pluto-junzeng/CNSD) | 翻译自英文语义相似数据集 STSbenchmark。 | **Train**: 5231, **Dev**: 1458, **Test**: 1361 |
| | [PAWS-X](https://www.luge.ai/#/luge/dataDetail?id=16) | 释义(含义)匹配数据集,特点是具有高度重叠词汇,重点考察模型对句法结构的理解能力。 | **Train**: 49401, **Dev**: 2000, **Test**: 2000 |
| | [PKU-Paraphrase-Bank](https://github.com/pkucoli/PKU-Paraphrase-Bank/) | 中文语句复述数据集,也即一句话换种方式描述但语义保持一致。 | 共509832个语句对 |
| **问题匹配** | [LCQMC](https://www.luge.ai/#/luge/dataDetail?id=14) | 百度知道领域的中文问题匹配大规模数据集,该数据集从百度知道不同领域的用户问题中抽取构建数据。 | **Train**: 238766, **Dev**: 8802, **Test**: 12500 |
| | [BQCorpus](https://www.luge.ai/#/luge/dataDetail?id=15) | 银行金融领域的问题匹配数据,包括了从一年的线上银行系统日志里抽取的问题pair对,是目前最大的银行领域问题匹配数据。 | **Train**: 100000, **Dev**: 10000, **Test**: 10000 |
| | [AFQMC](https://www.cluebenchmarks.com/introduce.html) | 蚂蚁金服真实金融业务场景中的问题匹配数据集(已脱敏),是中文语言理解基准测评(CLUE)的一部分。 | **Train**: 34334, **Dev**: 4316 |
| | [DuQM](https://www.luge.ai/#/luge/dataDetail?id=27) | 问题匹配评测数据集(label没有公开),属于百度大规模阅读理解数据集(DuReader)的[一部分](https://github.com/baidu/DuReader/tree/master/DuQM)。 | 共50000个语句对 |
| **对话和搜索** | [BUSTM: OPPO-xiaobu](https://www.luge.ai/#/luge/dataDetail?id=28) | 通过对闲聊、智能客服、影音娱乐、信息查询等多领域真实用户交互语料进行用户信息脱敏、相似度筛选处理得到,该对话匹配(Dialogue Short Text Matching)数据集主要特点是文本较短、非常口语化、存在文本高度相似而语义不同的难例。 | **Train**: 167173, **Dev**: 10000 |
| | [QBQTC](https://github.com/CLUEbenchmark/QBQTC) | QQ浏览器搜索相关性数据集(QBQTC,QQ Browser Query Title Corpus),是QQ浏览器搜索引擎目前针对大搜场景构建的一个融合了相关性、权威性、内容质量、 时效性等维度标注的学习排序(LTR)数据集,广泛应用在搜索引擎业务场景中。(相关性的含义:0,相关程度差;1,有一定相关性;2,非常相关。) | **Train**: 180000, **Dev**: 20000, **Test**: 5000 |
*以上数据集主要收集整理自[CLUE](https://www.cluebenchmarks.com/introduce.html)(中文语言理解基准评测数据集)、[SimCLUE](https://github.com/CLUEbenchmark/SimCLUE) (整合许多开源文本相似数据集)、[百度千言](https://www.luge.ai/#/)的文本相似度等数据集。*
根据以上收集的数据集构建如下**评测 benchmark**:
| Name | Size | Type |
| ---------------------- | ----- | ------------- |
| **Chinese-STS-B-dev** | 1458 | label=0.0~1.0 |
| **Chinese-STS-B-test** | 1361 | label=0.0~1.0 |
| **afqmc-dev** | 4316 | label=0,1 |
| **lcqmc-dev** | 8802 | label=0,1 |
| **bqcorpus-dev** | 10000 | label=0,1 |
| **pawsx_dev** | 2000 | label=0,1 |
| **oppo-xiaobu-dev** | 10000 | label=0,1 |
*TODO:目前收集的数据集不论是数量还是多样性都需要进一步扩充以更真实的反映表征模型的效果*
|
copenlu | null | null | null | false | 58 | false | copenlu/fever_gold_evidence | 2022-07-10T04:28:30.000Z | fever | false | ab25011388c16beda08d9f7f57473b7e85125efb | [] | [
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extende... | https://huggingface.co/datasets/copenlu/fever_gold_evidence/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: ''
size_categories:
- 100K<n<1M
source_datasets:
- extended|fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
# Dataset Card for fever_gold_evidence
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/copenlu/fever-adversarial-attacks
- **Repository:** https://github.com/copenlu/fever-adversarial-attacks
- **Paper:** https://aclanthology.org/2020.emnlp-main.256/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Dataset for training classification-only fact checking with claims from the FEVER dataset.
This dataset is used in the paper "Generating Label Cohesive and Well-Formed Adversarial Claims", EMNLP 2020
The evidence is the gold evidence from the FEVER dataset for *REFUTE* and *SUPPORT* claims.
For *NEI* claims, we extract evidence sentences with the system in "Christopher Malon. 2018. Team Papelo: Transformer Networks at FEVER. In Proceedings of the
First Workshop on Fact Extraction and VERification (FEVER), pages 109–113."
More details can be found in https://github.com/copenlu/fever-adversarial-attacks
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{atanasova-etal-2020-generating,
title = "Generating Label Cohesive and Well-Formed Adversarial Claims",
author = "Atanasova, Pepa and
Wright, Dustin and
Augenstein, Isabelle",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.256",
doi = "10.18653/v1/2020.emnlp-main.256",
pages = "3168--3177",
abstract = "Adversarial attacks reveal important vulnerabilities and flaws of trained models. One potent type of attack are universal adversarial triggers, which are individual n-grams that, when appended to instances of a class under attack, can trick a model into predicting a target class. However, for inference tasks such as fact checking, these triggers often inadvertently invert the meaning of instances they are inserted in. In addition, such attacks produce semantically nonsensical inputs, as they simply concatenate triggers to existing samples. Here, we investigate how to generate adversarial attacks against fact checking systems that preserve the ground truth meaning and are semantically valid. We extend the HotFlip attack algorithm used for universal trigger generation by jointly minimizing the target class loss of a fact checking model and the entailment class loss of an auxiliary natural language inference model. We then train a conditional language model to generate semantically valid statements, which include the found universal triggers. We find that the generated attacks maintain the directionality and semantic validity of the claim better than previous work.",
}
``` |
marksverdhei | null | null | null | false | 1 | false | marksverdhei/wordnet-definitions-en-2021 | 2022-04-04T21:55:03.000Z | null | false | d267838191dbf769374ef1f8ce0c0a04a413b18a | [] | [] | https://huggingface.co/datasets/marksverdhei/wordnet-definitions-en-2021/resolve/main/README.md | # Wordnet definitions for English
Dataset by Princeton WordNet and the Open English WordNet team
https://github.com/globalwordnet/english-wordnet
This dataset contains every entry in WordNet that has both a definition and an example.

Be aware that the word "null" can be misinterpreted as a missing (NaN) value when loading the data with tools such as pandas, whose CSV reader treats the string "null" as a null value by default.
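As an illustration (not part of the original card): pandas includes "null" in its default `na_values`, so that behavior has to be disabled explicitly with `keep_default_na=False`:

```python
import pandas as pd
from io import StringIO

csv_data = "word,definition\nnull,having no legal force\n"

# Default behavior: the string "null" is parsed as a missing value.
naive = pd.read_csv(StringIO(csv_data))
print(pd.isna(naive.loc[0, "word"]))  # True

# Disabling the default NA list preserves the literal word.
safe = pd.read_csv(StringIO(csv_data), keep_default_na=False)
print(safe.loc[0, "word"])  # null
```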
hackathon-pln-es | null | null | null | false | 1 | false | hackathon-pln-es/spanish-poetry-dataset | 2022-04-03T03:34:26.000Z | null | false | 49cf0593a2baf2fd848d81470d7c439c3ab8d3ec | [] | [] | https://huggingface.co/datasets/hackathon-pln-es/spanish-poetry-dataset/resolve/main/README.md | This dataset was previously created in Kaggle by [Andrea Morales Garzón](https://huggingface.co/andreamorgar).
[Link Kaggle](https://www.kaggle.com/andreamorgar/spanish-poetry-dataset/version/1) |
hackathon-pln-es | null | null | null | false | 1 | false | hackathon-pln-es/spanish-to-quechua | 2022-10-25T10:03:46.000Z | null | false | aa48b3c7f4d0c1450f8f2df27ceb8a882b022600 | [] | [
"language:es",
"language:qu",
"task_categories:translation",
"task:translation"
] | https://huggingface.co/datasets/hackathon-pln-es/spanish-to-quechua/resolve/main/README.md | ---
language:
- es
- qu
task_categories:
- translation
task:
- translation
---
# Spanish to Quechua
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [team members](#team-members)
## Dataset Description
This dataset is a compilation of the websites and other datasets listed in the [dataset creation section](#dataset-creation). It contains translations from Spanish (es) to Quechua of Ayacucho (qu).
## Dataset Structure
### Data Fields
- es: The sentence in Spanish.
- qu: The sentence in Quechua of Ayacucho.
### Data Splits
- train: To train the model (102 747 sentences).
- Validation: To validate the model during training (12 844 sentences).
- test: To evaluate the model when the training is finished (12 843 sentences).
## Dataset Creation
### Source Data
This dataset has generated from:
- "Mundo Quechua" by "Ivan Acuña" - [available here](https://mundoquechua.blogspot.com/2006/07/frases-comunes-en-quechua.html)
- "Kuyakuykim (Te quiero): Apps con las que podrías aprender quechua" by "El comercio" - [available here](https://elcomercio.pe/tecnologia/actualidad/traductor-frases-romanticas-quechua-noticia-467022-noticia/)
- "Piropos y frases de amor en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2019/12/palabras-en-quechua-de-amor.html)
- "Corazón en quechua" by "Soy Quechua" - [available here](https://www.soyquechua.org/2020/05/corazon-en-quechua.html)
- "Oraciones en Español traducidas a Quechua" by "Tatoeba" - [available here](https://tatoeba.org/es/sentences/search?from=spa&query=&to=que)
- "AmericasNLP 2021 Shared Task on Open Machine Translation" by "americasnlp2021" - [available here](https://github.com/AmericasNLP/americasnlp2021/tree/main/data/quechua-spanish/parallel_data/es-quy)
### Data cleaning
- The dataset was manually cleaned during compilation, as some words of one language were related to several words of the other language.
## Considerations for Using the Data
This is a first version of the dataset, we expected improve it over time and especially to neutralize the biblical themes.
## Team members
- [Sara Benel](https://huggingface.co/sbenel)
- [Jose Vílchez](https://huggingface.co/JCarlos) |
aymen31 | null | null | null | false | 2 | false | aymen31/PlantVillage | 2022-04-03T04:41:23.000Z | null | false | c8d301967424c6c7a3632b863453ddcd1fa60cd3 | [] | [
"license:other"
] | https://huggingface.co/datasets/aymen31/PlantVillage/resolve/main/README.md | ---
license: other
---
|
abdulhady | null | null | null | false | 1 | false | abdulhady/ckb | 2022-04-03T10:52:39.000Z | null | false | c9c5f26698bc6a2dcf5ad6c6f71091b74718bdce | [] | [
"license:other"
] | https://huggingface.co/datasets/abdulhady/ckb/resolve/main/README.md | ---
license: other
---
|
johnowhitaker | null | null | null | false | 1 | false | johnowhitaker/colorbs | 2022-04-04T06:52:33.000Z | null | false | aa4acbaa7537aa9ae6dc5447dc82e59146ec083e | [] | [] | https://huggingface.co/datasets/johnowhitaker/colorbs/resolve/main/README.md | A synthetic dataset for GAN experiments.
Created with a CLOOB Conditioned Latent Diffusion model (https://github.com/JD-P/cloob-latent-diffusion)
For each color in a list of standard CSS color names, a set of images was generated using the following command:
```
python cfg_sample.py --autoencoder autoencoder_kl_32x32x4\
--checkpoint yfcc-latent-diffusion-f8-e2-s250k.ckpt\
--method plms\
--cond-scale 1.0\
--seed 34\
--steps 25\
-n 36\
"A glass orb with {color} spacetime fire burning inside"
```
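The per-color generation was presumably scripted; a minimal shell sketch of such a driver loop (the color list below is illustrative, not from the original repo) could look like:

```shell
# Illustrative driver: iterate over a few CSS color names and build the
# prompt used for each cfg_sample.py run. The real run used the full list
# of standard CSS color names.
for color in aliceblue coral darkseagreen; do
  prompt="A glass orb with ${color} spacetime fire burning inside"
  echo "sampling: ${prompt}"
  # python cfg_sample.py --autoencoder autoencoder_kl_32x32x4 \
  #   --checkpoint yfcc-latent-diffusion-f8-e2-s250k.ckpt \
  #   --method plms --cond-scale 1.0 --seed 34 --steps 25 -n 36 "${prompt}"
done
```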
|
fmmolina | null | null | null | false | 1 | false | fmmolina/eHealth-KD-Adaptation | 2022-04-11T07:16:13.000Z | null | false | 39816326bf8c3499e150a27e13336760e7c3d904 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/fmmolina/eHealth-KD-Adaptation/resolve/main/README.md | ---
license: afl-3.0
---
## Description
An adaptation of the [eHealth-KD Challenge 2020 dataset](https://knowledge-learning.github.io/ehealthkd-2020/), filtered for the NER task only. Some adaptations of the original dataset have been made:
- BIO annotations
- Error fixing
- Overlapping entities have been processed as a single entity
## Dataset loading
    datasets = load_dataset('json',
                            data_files={'train': ['@YOUR_PATH@/training_anns_bio.json'],
                                        'testing': ['@YOUR_PATH@/testing_anns_bio.json'],
                                        'validation': ['@YOUR_PATH@/development_anns_bio.json']})
hackathon-pln-es | null | null | null | false | 1 | false | hackathon-pln-es/readability-es-caes | 2022-10-20T19:10:45.000Z | null | false | 3a7f842dcf1cb81d626076f263f1c1ae00254ab4 | [] | [
"annotations_creators:other",
"language_creators:other",
"language:es",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:text-classification-other-readability"
] | https://huggingface.co/datasets/hackathon-pln-es/readability-es-caes/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- other
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: readability-es-caes
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-classification-other-readability
---
# Dataset Card for [readability-es-caes]
## Dataset Description
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- [CAES corpus](http://galvan.usc.es/caes/) (Martínez et al., 2019): the "Corpus de Aprendices del Español" is a collection of texts produced by Spanish L2 learners from Spanish learning centers and universities. These texts are produced by students of all levels (A1 to C1), with different backgrounds (11 native languages) and levels of experience.
### Languages
Spanish
## Dataset Structure
Texts are tokenized to create a paragraph-based dataset
### Data Fields
The dataset is formatted as a json lines and includes the following fields:
- **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- **Level:** standardized readability level: simple or complex.
- **Level-3:** standardized readability level: basic, intermediate or advanced.
- **Text:** original text formatted into sentences.
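As a hedged sketch (the file name and the exact key casing are assumptions, since they are not specified above), a JSON-lines file with these fields can be read with the standard library alone:

```python
import json

def read_jsonl(path):
    """Yield one record per line from a JSON-lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# A record shaped like the fields described above (values are illustrative):
record = json.loads(
    '{"category": "A1", "level": "simple", "level-3": "basic", "text": "Hola."}'
)
print(record["level"], record["text"])  # simple Hola.
```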
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
|
hackathon-pln-es | null | null | null | false | 3 | false | hackathon-pln-es/unam_tesis | 2022-10-25T10:03:47.000Z | null | false | 984190c2a4bcf10c66012ed7dc8ef626fe831d0f | [] | [
"annotations_creators:MajorIsaiah",
"annotations_creators:Ximyer",
"annotations_creators:clavel",
"annotations_creators:inoid",
"language_creators:crowdsourced",
"language:es",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:n=200",
"source_datasets:original",
"task_categor... | https://huggingface.co/datasets/hackathon-pln-es/unam_tesis/resolve/main/README.md | ---
annotations_creators:
- MajorIsaiah
- Ximyer
- clavel
- inoid
language_creators: [crowdsourced]
language: [es]
license: [apache-2.0]
multilinguality: [monolingual]
pretty_name: ''
size_categories:
- n=200
source_datasets: [original]
task_categories: [text-classification]
task_ids: [language-modeling]
---
# Dataset Card of "unam_tesis"
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
- [yiselclavel@gmail.com](mailto:yiselclavel@gmail.com)
- [isaac7isaias@gmail.com](mailto:isaac7isaias@gmail.com)
### Dataset Summary
El dataset unam_tesis cuenta con 1000 tesis de 5 carreras de la Universidad Nacional Autónoma de México (UNAM), 200 por carrera. Se pretende seguir incrementando este dataset con las demás carreras y más tesis.
### Supported Tasks and Leaderboards
text-classification
### Languages
Español (es)
## Dataset Structure
### Data Instances
Las instancias del dataset son de la siguiente forma:
El objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa a nivel preescolar y primaria en el municipio de Nicolás Romero a partir de la cancelación de clases presenciales ante la contingencia sanitaria del Covid-19 y el entorno familiar del estudiante. En México, la Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED) 2020, es un proyecto que propone el INEGI y realiza de manera especial para conocer las necesidades de la población estudiantil de 3 a 29 años de edad, saber qué está sucediendo con su entorno inmediato, las condiciones en las que desarrollan sus actividades académicas y el apoyo que realizan padres, tutores o cuidadores principales de las personas en edad formativa. La ECOVID-ED 2020 se llevó a cabo de manera especial con el objetivo de conocer el impacto de la cancelación provisional de clases presenciales en las instituciones educativas del país para evitar los contagios por la pandemia COVID-19 en la experiencia educativa de niños, niñas, adolescentes y jóvenes de 3 a 29 años, tanto en el ciclo escolar 2019-2020, como en ciclo 2020-2021. En este ámbito de investigación, el Instituto de Investigaciones sobre la Universidad y la Educación (IISUE) de la Universidad Nacional Autónoma de México público en 2020 la obra “Educación y Pandemia: Una visión académica” que se integran 34 trabajos que abordan la muy amplia temática de la educación y la universidad con reflexiones y ejercicios analíticos estrechamente relacionadas en el marco coyuntural de la pandemia COVID-19. 
La tesis se presenta en tres capítulos: En el capítulo uno se realizará una descripción del aprendizaje de los estudiantes a nivel preescolar y primaria del municipio de Nicolás Romero, Estado de México, que por motivo de la contingencia sanitaria contra el Covid-19 tuvieron que concluir su ciclo académico 2019-2020 y el actual ciclo 2020-2021 en su casa debido a la cancelación provisional de clases presenciales y bajo la tutoría de padres, familiar o ser cercano; así como las horas destinadas al estudio y las herramientas tecnológicas como teléfonos inteligentes, computadoras portátiles, computadoras de escritorio, televisión digital y tableta. En el capítulo dos, se presentarán las herramientas necesarias para la captación de la información mediante técnicas de investigación social, a través de las cuales se mencionará, la descripción, contexto y propuestas del mismo, considerando los diferentes tipos de cuestionarios, sus componentes y diseño, teniendo así de manera específica la diversidad de ellos, que llevarán como finalidad realizar el cuestionario en línea para la presente investigación. Posteriormente, se podrá destacar las fases del diseño de la investigación, que se realizarán mediante una prueba piloto tomando como muestra a distintos expertos en el tema. De esta manera se obtendrá la información relevante para estudiarla a profundidad. En el capítulo tres, se realizará el análisis apoyado de las herramientas estadísticas, las cuales ofrecen explorar la muestra de una manera relevante, se aplicará el método inferencial para expresar la información y predecir las condiciones asociadas al autoaprendizaje, la habilidad pedagógica de padres o tutores, la convivencia familiar, la carga académica y actividades escolares y condicionamiento tecnológico, con la finalidad de inferir en la población. Asimismo, se realizarán pruebas de hipótesis, tablas de contingencia y matriz de correlación.
Por consiguiente, los resultados obtenidos de las estadísticas se interpretarán para describir las condiciones asociadas y como impactan en la enseñanza de preescolar y primaria desde casa.|María de los Ángeles|Blancas Regalado|Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero |2022|Actuaría
| Carreras | Número de instancias |
|--------------|----------------------|
| Actuaría | 200 |
| Derecho| 200 |
| Economía| 200 |
| Psicología| 200 |
| Química Farmacéutico Biológica| 200 |
### Data Fields
El dataset está compuesto por los siguientes campos: "texto|titulo|carrera". <br/>
texto: Se refiere al texto de la introducción de la tesis. <br/>
titulo: Se refiere al título de la tesis. <br/>
carrera: Se refiere al nombre de la carrera a la que pertenece la tesis. <br/>
### Data Splits
El dataset tiene 2 particiones: entrenamiento (train) y prueba (test).
| Partición | Número de instancias |
|--------------|-------------------|
| Entrenamiento | 800 |
| Prueba | 200 |
## Dataset Creation
### Curation Rationale
La creación de este dataset ha sido motivada por la participación en el Hackathon 2022 de PLN en Español organizado por Somos NLP, con el objetivo de democratizar el NLP en español y promover su aplicación a buenas causas y, debido a que no existe un dataset de tesis en español.
### Source Data
#### Initial Data Collection and Normalization
El dataset original (dataset_tesis) fue creado a partir de un proceso de scraping donde se extrajeron tesis de la Universidad Nacional Autónoma de México en el siguiente link: https://tesiunam.dgb.unam.mx/F?func=find-b-0&local_base=TES01.
Se optó por realizar un scraper para conseguir la información. Se decidió usar la base de datos TESIUNAM, la cual es un catálogo en donde se pueden visualizar las tesis de los sustentantes que obtuvieron un grado en la UNAM, así como de las tesis de licenciatura de escuelas incorporadas a ella.
Para ello, en primer lugar se consultó la Oferta Académica (http://oferta.unam.mx/indice-alfabetico.html) de la Universidad, sitio de donde se extrajo cada una de las 131 licenciaturas en forma de lista. Después, se analizó cada uno de los casos presente en la base de datos, debido a que existen carreras con más de 10 tesis, otras con menos de 10, o con solo una o ninguna tesis disponible. Se usó Selenium para la interacción con un navegador Web (Edge) y está actualmente configurado para obtener las primeras 20 tesis, o menos, por carrera.
Este scraper obtiene de esta base de datos:
- Nombres del Autor
- Apellidos del Autor
- Título de la Tesis
- Año de la Tesis
- Carrera de la Tesis
A la vez, este scraper descarga cada una de las tesis en la carpeta Downloads del equipo local. En el csv formado por el scraper se añadió el "Resumen/Introduccion/Conclusion de la tesis", dependiendo cual primero estuviera disponible, ya que la complejidad recae en la diferencia de la estructura y formato de cada una de las tesis.
#### Who are the source language producers?
Los datos son creados por humanos de forma manual, en este caso por estudiantes de la UNAM y revisados por sus supervisores.
### Annotations
El dataset fue procesado para eliminar información innecesaria para los clasificadores. El dataset original cuenta con los siguientes campos: "texto|autor_nombre|autor_apellido|titulo|año|carrera".
#### Annotation process
Se extrajeron primeramente 200 tesis de 5 carreras de esta universidad: Actuaría, Derecho, Economía, Psicología y Química Farmacéutico Biológica. De estas se extrajo: introducción, nombre del autor, apellidos de autor, título de la tesis y la carrera. Los datos fueron revisados y limpiados por los autores.
Luego, el dataset fue procesado con las siguientes tareas de Procesamiento de Lenguaje Natural (dataset_tesis_procesado):
- convertir a minúsculas
- tokenización
- eliminar palabras que no son alfanuméricas
- eliminar palabras vacías
- stemming: eliminar plurales
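A modo de esbozo (no forma parte del procesamiento original; la lista de palabras vacías y la regla de stemming de plurales son supuestos ilustrativos), los pasos anteriores podrían implementarse así:

```python
import re

# Subconjunto ilustrativo de palabras vacías en español (supuesto).
STOPWORDS = {"de", "la", "el", "en", "y", "a", "los", "las", "del", "que"}

def preprocesar(texto: str) -> list[str]:
    texto = texto.lower()                               # convertir a minúsculas
    tokens = re.findall(r"\w+", texto)                  # tokenización simple
    tokens = [t for t in tokens if t.isalnum()]         # solo palabras alfanuméricas
    tokens = [t for t in tokens if t not in STOPWORDS]  # eliminar palabras vacías
    # Stemming mínimo: eliminar plurales terminados en "-es" o "-s".
    stems = []
    for t in tokens:
        if t.endswith("es") and len(t) > 4:
            stems.append(t[:-2])
        elif t.endswith("s") and len(t) > 3:
            stems.append(t[:-1])
        else:
            stems.append(t)
    return stems

print(preprocesar("Las tesis de los estudiantes"))  # ['tesi', 'estudiant']
```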
#### Who are the annotators?
Las anotaciones fueron hechas por humanos, en este caso los autores del dataset, usando código de máquina en el lenguaje Python.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
El presente conjunto de datos favorecerá la búsqueda e investigación relacionada con tesis en español, a partir de su categorización automática por un modelo entrenado con este dataset. Esta tarea favorece el cumplimiento del objetivo 4 de Desarrollo Sostenible de la ONU: Educación y Calidad (https://www.un.org/sustainabledevelopment/es/objetivos-de-desarrollo-sostenible/).
### Discussion of Biases
El texto tiene algunos errores en la codificación por lo que algunos caracteres como las tildes no se muestran correctamente. Las palabras con estos caracteres son eliminadas en el procesamiento hasta que se corrija el problema.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Team members (Hugging Face usernames):
[Isaac Isaías López López](https://huggingface.co/MajorIsaiah)
[Yisel Clavel Quintero](https://huggingface.co/clavel)
[Dionis López](https://huggingface.co/inoid)
[Ximena Yeraldin López López](https://huggingface.co/Ximyer)
### Licensing Information
Version 1.0.0 of the unam_tesis dataset is released under the <a href='http://www.apache.org/licenses/LICENSE-2.0'>Apache-2.0 License</a>.
### Citation Information
"This dataset was created as part of the Hackathon 2022 de PLN en Español, organized by Somos NLP and sponsored by Platzi, Paperspace and Hugging Face: https://huggingface.co/hackathon-pln-es."
To cite this dataset, please use the following citation format:
@inproceedings{Hackathon 2022 de PLN en Español,
title={UNAM's Theses with BETO fine-tuning classify},
author={López López, Isaac Isaías; Clavel Quintero, Yisel; López Ramos, Dionis & López López, Ximena Yeraldin},
booktitle={Hackathon 2022 de PLN en Español},
year={2022}
}
### Contributions
Gracias a [@yiselclavel](https://github.com/yiselclavel) y [@IsaacIsaias](https://github.com/IsaacIsaias) por agregar este dataset.
|
hackathon-pln-es | null | null | null | false | 1 | false | hackathon-pln-es/ITAMA-DataSet | 2022-04-04T03:32:20.000Z | null | false | e94ac1f1b72be4a83408f20a8d49ffd98e9724b1 | [] | [] | https://huggingface.co/datasets/hackathon-pln-es/ITAMA-DataSet/resolve/main/README.md | # Reddit data extraction
All thread titles from several Spanish-language Reddit communities were downloaded, covering March 2017 through January 2022:
| Community | No. of threads |
|----------------------------|-------------|
| AskRedditespanol | 28072 |
| BOLIVIA | 4935 |
| PERU | 20735 |
| argentina | 214986 |
| chile | 69077 |
| espanol | 39376 |
| mexico | 136984 |
| preguntaleareddit | 37300 |
| uruguay | 55693 |
| vzla | 42909 |
# Labels
Some of the threads were then manually labeled to mark AMA vs. non-AMA.
A total of 757 threads were labeled (AMA: 290, non-AMA: 458), following a query-by-committee strategy.
See the file `etiqueta_ama.csv` for details.
With these 757 threads, a label-spreading algorithm was run to identify the remaining AMA threads, which yielded a total of 3519 threads.
See the file `autoetiquetado_ama.csv` for details.
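As a rough numpy sketch of the label-spreading idea on a toy affinity graph (not the implementation actually used for this corpus):

```python
import numpy as np

def label_spreading(W, y, alpha=0.9, iters=50):
    """Minimal label spreading: W is a symmetric affinity matrix and
    y holds seed labels, with -1 meaning 'unlabeled'."""
    classes = sorted({c for c in y if c >= 0})
    Y0 = np.zeros((len(y), len(classes)))
    for i, c in enumerate(y):
        if c >= 0:
            Y0[i, classes.index(c)] = 1.0
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))      # symmetric normalization
    F = Y0.copy()
    for _ in range(iters):               # iterate F = a*S*F + (1-a)*Y0
        F = alpha * S @ F + (1 - alpha) * Y0
    return [classes[j] for j in F.argmax(axis=1)]

# Two tight clusters {0,1} and {2,3}; one seed label in each cluster.
W = np.array([[0, 1, 0.01, 0],
              [1, 0, 0, 0.01],
              [0.01, 0, 0, 1],
              [0, 0.01, 1, 0]], dtype=float)
labels = label_spreading(W, y=[1, -1, 0, -1])  # -> [1, 1, 0, 0]
```

The unlabeled nodes inherit the label of the cluster they sit in, which is the behavior exploited here to extend the 757 hand-labeled threads to the full set.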
The following list was used to identify the professions of the people who created the threads:
https://raw.githubusercontent.com/davoclavo/adigmatangadijolachanga/master/profesiones.txt
To cover every possibility, both the "-a" and "-o" endings of all professions were added.
Similar professions were then grouped to obtain a similar number of threads per profession, using the following dictionary:
```
sinonimos = {
'sexologo': 'psicologo',
'enfermero': 'medico',
'farmaceutico': 'medico',
'cirujano': 'medico',
'doctor': 'medico',
'radiologo': 'medico',
'dentista': 'odontologo',
'matron': 'medico',
'patologo': 'medico',
'educador': 'profesor',
'maestro': 'profesor',
'programador': 'ingeniero',
'informatico': 'ingeniero',
'juez': 'abogado',
'fiscal': 'abogado',
'oficial': 'abogado',
'astronomo': 'ciencias',
'fisico': 'ciencias',
'ecologo': 'ciencias',
'filosofo': 'ciencias',
'biologo': 'ciencias',
'zoologo': 'ciencias',
'quimico': 'ciencias',
'matematico': 'ciencias',
'meteorologo': 'ciencias',
'periodista': 'humanidades',
'dibujante': 'humanidades',
'fotografo': 'humanidades',
'traductor': 'humanidades',
'presidente': 'jefe',
'gerente': 'jefe'
}
```
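As an illustration, the ending expansion and grouping could be applied with a hypothetical helper like this (only an excerpt of the dictionary above is shown):

```python
# Hypothetical sketch of applying the grouping above: expand the
# feminine "-a" ending to "-o" when that form is a known key, then
# collapse synonyms. This dict is just an excerpt of the full one.
sinonimos = {"enfermero": "medico", "dentista": "odontologo",
             "programador": "ingeniero"}

def normalizar(profesion):
    p = profesion.lower()
    if p.endswith("a") and p[:-1] + "o" in sinonimos:
        p = p[:-1] + "o"          # e.g. "enfermera" -> "enfermero"
    return sinonimos.get(p, p)    # map to the grouped profession

normalizar("Enfermera")  # -> "medico"
```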
All comments from the AMA threads that mentioned any of these professions were downloaded; they were then grouped, keeping only comments that contained a question mark and had a reply from the thread author, forming question-answer pairs.
Finally, all professions with more than 200 question-answer pairs were kept, which together amount to around 3000 question-answer pairs.
See the file `qa_corpus_profesion.csv` for details. |
ManRo | null | null | null | false | 3 | false | ManRo/Sexism_Twitter_MeTwo | 2022-04-04T11:46:05.000Z | null | false | 66d3e93c84abc82d96ad84beb30bef404f0957ac | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ManRo/Sexism_Twitter_MeTwo/resolve/main/README.md | ---
license: apache-2.0
---
The dataset was built on 2022/03/29 to help improve the representation of the Spanish language in NLP tasks on the Hugging Face platform.
The dataset contains 2,471 tweets obtained from their tweet_id. The dataset considers the following columns:
- Column 1( Status_id): Corresponds to the unique identification number of the tweet in the social network.
- Column 2( text): Corresponds to the text (in Spanish) linked to the corresponding "Status_Id", which is used to perform the sexism analysis.
- Column 3 (Category): Corresponds to the classification that has been made when analyzing the text (in Spanish), considering three categories: (SEXIST,NON_SEXIST,DOUBTFUL)
The dataset has been built thanks to the previous work of F. Rodríguez-Sánchez, J. Carrillo-de-Albornoz and L. Plaza, from the MeTwo Machismo and Sexism Twitter Identification dataset.
For more information on the categorization process check: https://ieeexplore.ieee.org/document/9281090 |
pragnakalp | null | null | null | false | 1 | false | pragnakalp/squad_v2_french_translated | 2022-08-29T07:49:15.000Z | null | false | ad894516a8db0f6d292da5b7194b2729f47c02f9 | [] | [
"language:fr",
"license:apache-2.0",
"multilinguality:monolingual",
"multilinguality:translation"
] | https://huggingface.co/datasets/pragnakalp/squad_v2_french_translated/resolve/main/README.md | ---
language: fr
license: apache-2.0
multilinguality:
- monolingual
- translation
---
Using Google Translation, we have translated SQuAD 2.0 dataset into multiple languages.
Here is the translated dataset of SQuAD 2.0 in French language.
Shared by [Pragnakalp Techlabs](https://www.pragnakalp.com) |
ikekobby | null | null | null | false | 1 | false | ikekobby/40-percent-cleaned-preprocessed-fake-real-news | 2022-04-04T09:41:40.000Z | null | false | 8d5f91d054aafc2a98eacfc2715c031113cd1bc0 | [] | [] | https://huggingface.co/datasets/ikekobby/40-percent-cleaned-preprocessed-fake-real-news/resolve/main/README.md | Kaggle based dataset for text classification task. The data has been cleaned and processed for preparation into any model for classification based tasks. This is just 40% of the entire dataset. |
arch-raven | null | null | null | false | 1 | false | arch-raven/music-fingerprint-dataset | 2022-04-05T11:48:05.000Z | null | false | dae4dcc041f173bc7134be9d562d0f996693aa07 | [] | [
"arxiv:2010.11910"
] | https://huggingface.co/datasets/arch-raven/music-fingerprint-dataset/resolve/main/README.md | # Neural Audio Fingerprint Dataset
(c) 2021 by Sungkyun Chang
https://github.com/mimbres/neural-audio-fp
This dataset includes all music sources, background noise and impulse-responses
(IR) samples that have been used in the work ["Neural Audio Fingerprint for
High-specific Audio Retrieval based on Contrastive Learning"]
(https://arxiv.org/abs/2010.11910).
### Format:
16-bit PCM Mono WAV, Sampling rate 8000 Hz
### Description:
```
/
fingerprint_dataset_icassp2021/
├── aug
│ ├── bg <=== Pub/cafe etc. background noise mix
│ ├── ir <=== IR data for microphone and room reverb simulation
│ └── speech <=== English conversation, NOT USED IN THE PAPER RESULT
├── extras
│ └── fma_info <=== Meta data for music sources.
└── music
├── test-dummy-db-100k-full <== 100K songs of full-lengths
├── test-query-db-500-30s <== 500 songs (30s) and 2K synthesized queries
├── train-10k-30s <== 10K songs (30s) for training
└── val-query-db-500-30s <== 500 songs (30s) for validation/mini-search
```
### Data source:
• Background noise from AudioSet was retrieved using the keywords ['subway',
'metro', 'underground', 'not music'].
• Cochlear.ai pub-noise was recorded at the Starbucks branches in Seoul by
Jeongsoo Park.
• Random noise was generated by Donmoon Lee.
• Room/space IR data was collected from Aachen IR and OpenAIR 1.4 dataset.
• Portions of MIC IRs were from Vintage MIC (http://recordinghacks.com/), and
pre-processed with room/space IR data.
• Portions of MIC IRs were recorded by Donmoon Lee, Jeonsu Park and Hyungui Lim
using mobile devices in the anechoic chamber at Seoul National University.
• All music sources were taken from the Free Music Archive (FMA) data set,
and converted from `stereo 44Khz` to `mono 8Khz`.
• train-10k-30s contains all 8K songs from FMA_small. The remaining 2K songs
were from FMA_medium.
• val- and test- data were isolated from train-, and taken from FMA_medium.
• test-query-db-500-30s/query consists of the pre-synthesized 2,000 queries
of 10s each (SNR 0~10dB) and their corresponding 500 songs of 30s each.
• Additionally, query_fixed_SNR directory contains synthesized queries with
fixed SNR of 0dB and -3dB.
• dummy-db-100k was taken from FMA_full, and duplicates with other sets were
removed.
### License:
This dataset is distributed under the CC BY-SA 2.0 license separately from the
github source code, and licenses for composites from other datasets are
attached to each sub-directory.
|
hackathon-pln-es | null | null | null | false | 1 | false | hackathon-pln-es/readability-es-hackathon-pln-public | 2022-10-20T19:11:49.000Z | null | false | da11c85db69698b60179cacee5f6ce5dfdd75636 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:es",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:text-classification-other-readability"
] | https://huggingface.co/datasets/hackathon-pln-es/readability-es-hackathon-pln-public/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: readability-es-sentences
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-classification-other-readability
---
# Dataset Card for [readability-es-sentences]
## Dataset Description
Compilation of short Spanish articles for readability assessment.
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learn Spanish as a second language. These articles have been compiled from the following sources:
- **Coh-Metrix-Esp corpus (Quispesaravia, et al., 2016):** collection of 100 parallel texts with simple and complex variants in Spanish. These texts include children's and adult stories to fulfill each category.
- **[kwiziq](https://www.kwiziq.com/):** a language learner assistant
- **[hablacultura.com](https://hablacultura.com/):** Spanish resources for students and teachers. We have downloaded the available content from their websites.
### Languages
Spanish
## Dataset Structure
The dataset includes 1019 text entries between 80 and 8714 characters long. The vast majority (97%) are below 4,000 characters long.
### Data Fields
The dataset is formatted as JSON lines and includes the following fields:
- **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- **Level:** standardized readability level: complex or simple.
- **Level-3:** standardized readability level: basic, intermediate or advanced
- **Text:** original text formatted into sentences.
Not all the entries contain usable values for `category`, `level` and `level-3`, but all of them should contain at least one of `level`, `level-3`. When the corresponding information could not be derived, we use the special `"N/A"` value to indicate so.
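A hedged sketch of reading this JSON-lines layout, treating the `"N/A"` sentinel as a missing value (the field values here are invented for illustration):

```python
import json

# Toy stand-in for two lines of the JSON-lines file; field names follow
# the list above, and "N/A" marks a level that could not be derived.
sample = "\n".join([
    '{"category": "B1", "level": "simple", "level-3": "intermediate", "text": "Hola."}',
    '{"category": "N/A", "level": "complex", "level-3": "N/A", "text": "Un texto."}',
])

def parse_entries(raw):
    for line in raw.splitlines():
        entry = json.loads(line)
        # Turn the "N/A" sentinel into a proper missing value.
        yield {k: (None if v == "N/A" else v) for k, v in entry.items()}

entries = list(parse_entries(sample))  # entries[1]["category"] is None
```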
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
|
huggan | null | null | null | false | 1 | false | huggan/inat_butterflies | 2022-04-04T10:53:19.000Z | null | false | 2e1b744445b279b21a6d1aeacfb3dff8d2acf7fa | [] | [] | https://huggingface.co/datasets/huggan/inat_butterflies/resolve/main/README.md | This dataset contains images from iNaturalist of butterflies (superfamily Papilionoidea) with at least one fave. Check the descriptions - some images have a licence like CC-BY-NC and can't be used for commercial purposes.
The list of observations was exported from iNaturalist after a query similar to https://www.inaturalist.org/observations?place_id=any&popular&taxon_id=47224
The images were downloaded with img2dataset and uploaded to the huggingface hub by @johnowhitaker using this colab notebook: https://colab.research.google.com/drive/14qwFV_G4dh6evizzqHP08qDUAHtzfuiW?usp=sharing
The goal is to have a dataset of butterflies in different poses and settings, to use for GAN training and to compare with datasets built with museum collections of pinned specimens (which tend to be much cleaner and have more consistency of pose etc)
I'm not familiar with the nuances of Creative Commons licensing but you may wish to filter out images which are no-derivatives (CC-...-ND) when training a GAN or creating new images. |
huggingartists | null | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | This dataset is designed to generate lyrics with HuggingArtists. | false | 3 | false | huggingartists/olga-buzova | 2022-10-25T10:03:54.000Z | null | false | d73ccef8b255c317a226912071e92b272c55dc43 | [] | [
"language:en",
"tags:huggingartists",
"tags:lyrics",
"models:huggingartists/olga-buzova"
] | https://huggingface.co/datasets/huggingartists/olga-buzova/resolve/main/README.md | ---
language:
- en
tags:
- huggingartists
- lyrics
models:
- huggingartists/olga-buzova
---
# Dataset Card for "huggingartists/olga-buzova"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.164278 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/efacbc8bb2d22ab78e494539bba61b3e.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/olga-buzova">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ольга Бузова (Olga Buzova)</div>
<a href="https://genius.com/artists/olga-buzova">
<div style="text-align: center; font-size: 14px;">@olga-buzova</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/olga-buzova).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/olga-buzova")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
| 66 | - | - |
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/olga-buzova")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
Nart | null | null | null | false | 1 | false | Nart/abkhaz_text | 2022-11-01T10:53:17.000Z | null | false | 9fd68bd28031a1f936845bdde6eb3aeb59eeadc9 | [] | [
"language_creators:expert-generated",
"language:ab",
"license:cc0-1.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/Nart/abkhaz_text/resolve/main/README.md | ---
language_creators:
- expert-generated
language:
- ab
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Abkhaz monolingual corpus
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for "Abkhaz text"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
## Dataset Description
- **Point of Contact:** [Nart Tlisha](mailto:daniel.abzakh@gmail.com)
- **Size of the generated dataset:** 176 MB
### Dataset Summary
The Abkhaz language monolingual dataset is a collection of 1,470,480 sentences extracted from different sources. The dataset is available under the Creative Commons Universal Public Domain License. Part of it is also available as part of [Common Voice](https://commonvoice.mozilla.org/ab), another part is from the [Abkhaz National Corpus](https://clarino.uib.no/abnc)
## Dataset Creation
### Source Data
Here is a link to the source of a large part of the data on [github](https://github.com/danielinux7/Multilingual-Parallel-Corpus/blob/master/ebooks/reference.md)
## Considerations for Using the Data
### Other Known Limitations
The accuracy of the dataset is around 95% (grammatical and orthographical errors)
|
huggan | null | null | null | false | 1 | false | huggan/inat_butterflies_top10k | 2022-04-04T12:50:28.000Z | null | false | 49f91f486696456ead1685e46fbd63e6520f2537 | [] | [] | https://huggingface.co/datasets/huggan/inat_butterflies_top10k/resolve/main/README.md | Filtered version of https://huggingface.co/datasets/huggan/inat_butterflies
To pick the best images, CLIP was used to compare each image with a text description of a good image ("")
Notebook for the filtering: https://colab.research.google.com/drive/1OEqr1TtL4YJhdj_bebNWXRuG3f2YqtQE?usp=sharing
See the original dataset for sources and licence caveats (tl;dr check the image descriptions to make sure you aren't breaking a licence like CC-BY-NC-ND which some images have) |
damlab | null | null | null | false | 3 | false | damlab/human_hiv_ppi | 2022-04-04T14:38:49.000Z | null | false | 596623eb34923ccd0eb540ea1f737cd09c304e58 | [] | [
"license:mit"
] | https://huggingface.co/datasets/damlab/human_hiv_ppi/resolve/main/README.md | ---
license: mit
---
# Dataset Description
## Dataset Summary
This dataset was parsed from the Human-HIV Interaction dataset maintained by the NCBI.
It contains >16,000 pairs of interactions between HIV and human proteins.
Sequences of the interacting proteins were retrieved from the NCBI protein database and added to the dataset.
The raw data is available from the [NBCI FTP site](https://ftp.ncbi.nlm.nih.gov/gene/GeneRIF/hiv_interactions.gz) and the curation strategy is described in the [NAR Research paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4383939/) announcing the dataset.
## Dataset Structure
### Data Instances
Data Fields: hiv_protein_product, hiv_protein_name, interaction_type, human_protein_product, human_protein_name, reference_list, description, hiv_protein_sequence, human_protein_sequence
Data Splits: None
## Dataset Creation
Curation Rationale: This dataset was curated train models to recognize proteins that interact with HIV.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 4/4/2022 but the most recent update of the underlying NCBI database was 2016.
## Considerations for Using the Data
Discussion of Biases: This dataset of protein interactions was manually curated by experts utilizing published scientific literature.
This inherently biases the collection to well-studied proteins and known interactions.
The dataset does not contain _negative_ interactions.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA |
met | null | null | null | false | 1 | false | met/mm | 2022-04-04T18:42:01.000Z | null | false | 00712474bff3c7b433e6e4286a3ed2381850c05d | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/met/mm/resolve/main/README.md | ---
license: apache-2.0
---
|
huggan | null | null | null | false | 1 | false | huggan/smithsonian-butterfly-lowres | 2022-04-06T19:57:24.000Z | null | false | 484a5ad065c06cb4e04333ed4e4947a7e0373192 | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/huggan/smithsonian-butterfly-lowres/resolve/main/README.md | ---
license: cc0-1.0
---
Collection of pinned butterfly images from the Smithsonian https://www.si.edu/spotlight/buginfo/butterfly
Doesn't include metadata yet!
Url pattern: "https://ids.si.edu/ids/deliveryService?max_w=550&id=ark:/65665/m3c70e17cf30314fd4ad86afa7d1ebf49f"
Added sketch versions!
sketch_pidinet is generated by: https://github.com/zhuoinoulu/pidinet
sketch_pix2pix is generated by: https://github.com/mtli/PhotoSketch
|
met | null | null | null | false | 1 | false | met/Meti_ICT | 2022-04-05T11:56:09.000Z | null | false | 3b6940038258b4660e398ee7b29e3774e79fe0dd | [] | [
"license:ms-pl"
] | https://huggingface.co/datasets/met/Meti_ICT/resolve/main/README.md | ---
license: ms-pl
---
|
SocialGrep | null | null | A meta dataset of Reddit's own /r/datasets community. | false | 1 | false | SocialGrep/the-reddit-dataset-dataset | 2022-07-01T17:55:48.000Z | null | false | dd2d9cbe7ba3139d1f48096e3f19ce2eba4d27eb | [] | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original"
] | https://huggingface.co/datasets/SocialGrep/the-reddit-dataset-dataset/resolve/main/README.md | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-dataset-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-dataset-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditdatasetdataset)
### Dataset Summary
A meta dataset of Reddit's own /r/datasets community.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
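Because posts and comments share most of these fields, a single pass can treat records from both files uniformly; a hypothetical sketch on toy rows:

```python
# Toy rows standing in for one record from each file; field names are
# taken from the list above, values are invented.
rows = [
    {"type": "post", "id": "abc123", "subreddit.name": "datasets",
     "subreddit.nsfw": False, "score": 42, "title": "A new dataset"},
    {"type": "comment", "id": "def456", "subreddit.name": "datasets",
     "subreddit.nsfw": False, "score": 3, "body": "Nice find!",
     "sentiment": 0.8},
]

# Shared fields let one filter treat both record types uniformly.
sfw = [r for r in rows if not r["subreddit.nsfw"]]
posts = [r for r in sfw if r["type"] == "post"]
top = max(rows, key=lambda r: r["score"])  # the post, score 42
```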
## Additional Information
### Licensing Information
CC-BY v4.0
|
rafay | null | null | null | false | 1 | false | rafay/upside_down_detection_cifar100 | 2022-04-05T06:51:09.000Z | null | false | 21d357ddf012a439d4b98b5dcf3367da55cca87d | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/rafay/upside_down_detection_cifar100/resolve/main/README.md | ---
license: afl-3.0
---
|
jet-universe | null | null | null | false | 1 | false | jet-universe/jetclass | 2022-05-27T19:00:45.000Z | null | false | c50846883a030dd8930ee5788524902b10439b63 | [] | [
"arxiv:2202.03772",
"license:mit"
] | https://huggingface.co/datasets/jet-universe/jetclass/resolve/main/README.md | ---
license: mit
---
# Dataset Card for JetClass
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/jet-universe/particle_transformer
- **Paper:** https://arxiv.org/abs/2202.03772
- **Leaderboard:**
- **Point of Contact:** [Huilin Qu](mailto:huilin.qu@cern.ch)
### Dataset Summary
JetClass is a large and comprehensive dataset to advance deep learning for jet tagging. The dataset consists of 100 million jets for training, with 10 different types of jets. The jets in this dataset generally fall into two categories:
* The background jets are initiated by light quarks or gluons (q/g) and are ubiquitously produced at the
LHC.
* The signal jets are those arising either from the top quarks (t), or from the W, Z or Higgs (H) bosons. For top quarks and Higgs bosons, we further consider their different decay modes as separate types, because the resulting jets have rather distinct characteristics and are often tagged individually.
Jets in this dataset are simulated with standard Monte Carlo event generators used by LHC experiments. The production and decay of the top quarks and the W, Z and Higgs bosons are generated with MADGRAPH5_aMC@NLO. We use PYTHIA to evolve the produced particles, i.e., performing parton showering and hadronization, and produce the final outgoing particles. To be close to realistic jets reconstructed at the ATLAS or CMS experiment, detector effects are simulated with DELPHES using the CMS detector configuration provided in DELPHES. In addition, the impact parameters of electrically charged particles are smeared to match the resolution of the CMS tracking detector. Jets are clustered from DELPHES E-Flow objects with the anti-kT algorithm using a distance
parameter R = 0.8. Only jets with transverse momentum in 500–1000 GeV and pseudorapidity |η| < 2 are considered. For signal jets, only the “high-quality” ones that fully contain the decay products of initial particles are included.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the JetClass dataset, please cite:
```
@article{Qu:2022mxj,
author = "Qu, Huilin and Li, Congqiao and Qian, Sitian",
title = "{Particle Transformer for Jet Tagging}",
eprint = "2202.03772",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
month = "2",
year = "2022"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
|
PolyAI | null | @article{gerz2021multilingual,
title={Multilingual and cross-lingual intent detection from spoken data},
author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Michal and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2104.08524},
year={2021}
} | MINDS-14 is a training and evaluation resource for the intent
detection task with spoken data. It covers 14
intents extracted from a commercial system
in the e-banking domain, associated with spoken examples in 14 diverse language varieties. | false | 1,638 | false | PolyAI/minds14 | 2022-10-23T05:36:35.000Z | null | false | 1f8f4e777aa46d53446f16f00f1add22aec02dd0 | [] | [
"arxiv:2104.08524",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:en",
"language:fr",
"language:it",
"language:es",
"language:pt",
"langua... | https://huggingface.co/datasets/PolyAI/minds14/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
language_bcp47:
- en
- en-GB
- en-US
- en-AU
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: 'MInDS-14'
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
- speech-processing
task_ids:
- speech-recognition
- keyword-spotting
---
# MInDS-14
## Dataset Description
- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 500 MB
MINDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.
## Example
MInDS-14 can be downloaded and used as follows:
```py
from datasets import load_dataset
minds_14 = load_dataset("PolyAI/minds14", "fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("PolyAI/all", "all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"] # first transcription
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
## Dataset Structure
We show detailed information for the example configuration `fr-FR` of the dataset.
All other configurations have the same structure.
### Data Instances
**fr-FR**
- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB
An example of a data instance of the config `fr-FR` looks as follows:
```
{
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"audio": {
"path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
"array": array(
[0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32
),
"sampling_rate": 8000,
},
"transcription": "je souhaite changer mon adresse",
"english_transcription": "I want to change my address",
"intent_class": 1,
"lang_id": 6,
}
```
### Data Fields
The data fields are the same among all splits.
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, sampling rate, and path of the audio file
- **transcription** (str): Transcription of the audio file
- **english_transcription** (str): English transcription of the audio file
- **intent_class** (int): Class id of intent
- **lang_id** (int): Id of language
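Each example is a plain nested dict, so the fields above can be accessed directly. Below is a minimal sketch using a trimmed copy of the `fr-FR` instance shown earlier; the waveform array is shortened to an illustrative placeholder:

```python
# Trimmed copy of the fr-FR example above; the audio array is a short
# illustrative placeholder, not the real 8 kHz waveform.
sample = {
    "audio": {
        "path": "fr-FR~ADDRESS/response_4.wav",
        "array": [0.0, 0.0, 0.00048828, -0.00024414],
        "sampling_rate": 8000,
    },
    "transcription": "je souhaite changer mon adresse",
    "english_transcription": "I want to change my address",
    "intent_class": 1,
    "lang_id": 6,
}

def audio_duration_seconds(example):
    """Length of the decoded waveform divided by its sampling rate."""
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

print(sample["transcription"])         # -> je souhaite changer mon adresse
print(audio_duration_seconds(sample))  # -> 0.0005 (4 samples at 8 kHz)
```

On the real dataset the same access pattern applies to each row of `minds_14["train"]`.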
### Data Splits
Every config only has the `"train"` split, containing *ca.* 600 examples.
## Dataset Creation
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
```
@article{DBLP:journals/corr/abs-2104-08524,
author = {Daniela Gerz and
Pei{-}Hao Su and
Razvan Kusztos and
Avishek Mondal and
Michal Lis and
Eshan Singhal and
Nikola Mrksic and
Tsung{-}Hsien Wen and
Ivan Vulic},
title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
journal = {CoRR},
volume = {abs/2104.08524},
year = {2021},
url = {https://arxiv.org/abs/2104.08524},
eprinttype = {arXiv},
eprint = {2104.08524},
timestamp = {Mon, 26 Apr 2021 17:25:10 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset
|
ramnika003 | null | null | null | false | 1 | false | ramnika003/autotrain-data-sentiment_analysis_project | 2022-04-05T09:16:59.000Z | null | false | 6342d0716fac4e248c53a27039c7d30ccaa9342b | [] | [
"task_categories:text-classification"
] | https://huggingface.co/datasets/ramnika003/autotrain-data-sentiment_analysis_project/resolve/main/README.md | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: sentiment_analysis_project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sentiment_analysis_project.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Realizing that I don`t have school today... or tomorrow... or for the next few months. I really nee[...]",
"target": 1
},
{
"text": "Good morning tweeps. Busy this a.m. but not in a working way",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['negative', 'neutral', 'positive'], id=None)"
}
```
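The integer `target` maps onto the `ClassLabel` names above. A small sketch of that mapping; the list order follows the `names` list in the field definition:

```python
# Order follows the ClassLabel names list shown above.
SENTIMENT_NAMES = ["negative", "neutral", "positive"]

def target_to_name(target):
    """Map an integer target id to its sentiment label name."""
    return SENTIMENT_NAMES[target]

# The two samples above carry targets 1 and 2.
print(target_to_name(1))  # -> neutral
print(target_to_name(2))  # -> positive
```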
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 16180 |
| valid | 4047 |
|
met | null | null | null | false | 1 | false | met/AMH_MET | 2022-04-05T11:46:16.000Z | null | false | d98c69e4a1133485a535297c69e231c854fa7877 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/met/AMH_MET/resolve/main/README.md | ---
license: apache-2.0
---
|
met | null | null | null | false | 1 | false | met/Meti_try | 2022-04-05T12:42:25.000Z | null | false | 03b8bdea7e37f62de083d91b6d51998afd698b23 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/met/Meti_try/resolve/main/README.md | ---
license: apache-2.0
---
|
met | null | null | null | false | 1 | false | met/Met | 2022-04-05T13:31:43.000Z | null | false | e5669a83db35069d560ee7e565c0af93a289db30 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/met/Met/resolve/main/README.md | ---
license: apache-2.0
---
|
duskvirkus | null | null | null | false | 1 | false | duskvirkus/dafonts-free | 2022-04-05T16:30:11.000Z | null | false | dbb8ee349ff4e6d6ac0f7f01c9007be3862e3deb | [] | [
"license:other"
] | https://huggingface.co/datasets/duskvirkus/dafonts-free/resolve/main/README.md | ---
license: other
---
|
aayush9753 | null | null | null | false | 1 | false | aayush9753/InterIIT-Bosch-MidPrep-AgeGenderClassificationInCCTV | 2022-04-05T20:33:51.000Z | null | false | 5f43ccb5ce480675591f1bd3b8ee19ed6f0de9ca | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/aayush9753/InterIIT-Bosch-MidPrep-AgeGenderClassificationInCCTV/resolve/main/README.md | ---
license: afl-3.0
---
|
SocialGrep | null | null | The written history of /r/Place, in posts and comments. | false | 1 | false | SocialGrep/the-reddit-place-dataset | 2022-07-01T17:51:57.000Z | null | false | 8ec4ba6640805906d0c61886e65810c8ee78a982 | [] | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original"
] | https://huggingface.co/datasets/SocialGrep/the-reddit-place-dataset/resolve/main/README.md | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-place-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-place-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
### Dataset Summary
The written history of /r/Place, in posts and comments.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
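Since `created_utc` is a Unix epoch timestamp, it converts to a timezone-aware datetime with the standard library alone. A sketch; the timestamp below is an illustrative value, not a row from the dataset:

```python
from datetime import datetime, timezone

def parse_created_utc(created_utc):
    """Convert a Unix epoch timestamp to a timezone-aware UTC datetime."""
    return datetime.fromtimestamp(created_utc, tz=timezone.utc)

# 1648771200 is 2022-04-01 00:00:00 UTC, around the start of r/Place 2022.
dt = parse_created_utc(1648771200)
print(dt.isoformat())  # -> 2022-04-01T00:00:00+00:00
```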
## Additional Information
### Licensing Information
CC-BY v4.0
|
dnes1983 | null | null | null | false | 1 | false | dnes1983/train | 2022-04-06T04:22:23.000Z | null | false | 1ab7981a2c7960c11a12a32578cf09ceaa76f8cf | [] | [] | https://huggingface.co/datasets/dnes1983/train/resolve/main/README.md | |
Jianxin1111 | null | null | null | false | 1 | false | Jianxin1111/juicycollection | 2022-04-06T04:27:33.000Z | null | false | 3ddcf36a47551096e85303f46a160239f7c37427 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Jianxin1111/juicycollection/resolve/main/README.md | ---
license: artistic-2.0
---
|
ChainYo | null | null | null | false | 15 | false | ChainYo/rvl-cdip | 2022-04-06T16:49:20.000Z | null | false | 66f430a1252ea1732413a80a56a1b6e8bc74264e | [] | [
"license:other"
] | https://huggingface.co/datasets/ChainYo/rvl-cdip/resolve/main/README.md | ---
license: other
---
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: advertisement
1: budget
2: email
3: file folder
4: form
5: handwritten
6: invoice
7: letter
8: memo
9: news article
10: presentation
11: questionnaire
12: resume
13: scientific publication
14: scientific report
15: specification
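The id-to-name mapping above can be kept as a simple list indexed by class id. A sketch, with the label strings as listed in this card (first label spelled "advertisement"):

```python
# The 16 RVL-CDIP class names, indexed by the ids listed above.
RVL_CDIP_LABELS = [
    "advertisement", "budget", "email", "file folder", "form",
    "handwritten", "invoice", "letter", "memo", "news article",
    "presentation", "questionnaire", "resume", "scientific publication",
    "scientific report", "specification",
]

def label_name(label_id):
    """Map an integer class id to its human-readable label."""
    return RVL_CDIP_LABELS[label_id]

print(len(RVL_CDIP_LABELS))  # -> 16
print(label_name(8))         # -> memo
```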
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/.
|
israel | null | null | null | false | 7 | false | israel/Amharic-News-Text-classification-Dataset | 2022-04-06T09:27:52.000Z | null | false | b646090ef0d09981da9c9765c4d376b407aa5955 | [] | [
"arxiv:2103.05639",
"license:cc-by-4.0"
] | https://huggingface.co/datasets/israel/Amharic-News-Text-classification-Dataset/resolve/main/README.md | ---
license: cc-by-4.0
---
# An Amharic News Text classification Dataset
> In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments.
```
@misc{https://doi.org/10.48550/arxiv.2103.05639,
doi = {10.48550/ARXIV.2103.05639},
url = {https://arxiv.org/abs/2103.05639},
author = {Azime, Israel Abebe and Mohammed, Nebil},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {An Amharic News Text classification Dataset},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
huggan | null | null | null | false | 43 | false | huggan/wikiart | 2022-09-20T20:55:49.000Z | null | false | 95a63ba9d977acef34a0203a13e4b5b794145526 | [] | [
"license:unknown",
"size_categories:10K<n<100K"
] | https://huggingface.co/datasets/huggan/wikiart/resolve/main/README.md | ---
license: unknown
license_details: "Data files © Original Authors"
size_categories:
- 10K<n<100K
---
## Dataset Description
- **Homepage:** https://www.wikiart.org/
### Dataset Summary
Dataset containing 81444 pieces of visual art from various artists, taken from WikiArt.org,
along with class labels for each image :
* "artist" : 129 artist classes, including a "Unknown Artist" class
* "genre" : 11 genre classes, including a "Unknown Genre" class
* "style" : 27 style classes
On WikiArt.org, the description for the "Artworks by Genre" page reads :
A genre system divides artworks according to depicted themes and objects. A classical hierarchy of genres was developed in European culture by the 17th century. It ranked genres in high – history painting and portrait – and low – genre painting, landscape and still life. This hierarchy was based on the notion of man as the measure of all things. Landscape and still life were the lowest because they did not involve human subject matter. History was highest because it dealt with the noblest events of humanity. Genre system is not so much relevant for a contemporary art; there are just two genre definitions that are usually applied to it: abstract or figurative.
The "Artworks by Style" page reads :
A style of an artwork refers to its distinctive visual elements, techniques and methods. It usually corresponds with an art movement or a school (group) that its author is associated with.
## Dataset Structure
* "image" : image
* "artist" : 129 artist classes, including a "Unknown Artist" class
* "genre" : 11 genre classes, including a "Unknown Genre" class
* "style" : 27 style classes
### Source Data
Files taken from this [archive](https://archive.org/download/wikiart-dataset/wikiart.tar.gz), curated from the [WikiArt website](https://www.wikiart.org/).
## Additional Information
Note:
* The WikiArt dataset can be used only for non-commercial research purpose.
* The images in the WikiArt dataset were obtained from WikiArt.org.
* The authors are neither responsible for the content nor the meaning of these images.
By using the WikiArt dataset, you agree to obey the terms and conditions of WikiArt.org.
### Contributions
[`gigant`](https://huggingface.co/gigant) added this dataset to the hub. |
nealmgkr | null | null | null | false | 1 | false | nealmgkr/tminer_hs | 2022-04-06T09:45:48.000Z | null | false | 6aa6bccd5e72aac4a0e6d32b140564390a8a165a | [] | [
"arxiv:2103.04264"
] | https://huggingface.co/datasets/nealmgkr/tminer_hs/resolve/main/README.md | - This is a personal convenience copy of the binary Hate Speech (HS) dataset used in the T-Miner paper on defending against trojan attacks on text classifiers: https://arxiv.org/pdf/2103.04264.pdf
- The dataset is sourced from the original paper's GitHub repository: https://github.com/reza321/T-Miner
- Label mapping:
- 0 = hate speech
- 1 = normal speech
- If you use this dataset please cite the T-Miner paper (see bibtex below), and the two original papers from which T-Miner constructed the dataset (see paper for references):
```bibtex
@inproceedings{azizi21tminer,
title={T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification},
author={Azizi, Ahmadreza and Tahmid, Ibrahim and Waheed, Asim and Mangaokar, Neal and Pu, Jiameng and Javed, Mobin and Reddy, Chandan K. and Viswanath, Bimal},
booktitle={Proc. of USENIX Security},
year={2021}}
``` |
dalton72 | null | null | null | false | 2 | false | dalton72/twitter-sent | 2022-04-06T10:17:23.000Z | null | false | 12299c16f191d1c2976dd01907dd009a3393e19a | [] | [] | https://huggingface.co/datasets/dalton72/twitter-sent/resolve/main/README.md | |
albertvillanova | null | @article{mTet2022,
author = {Chinh Ngo, Hieu Tran, Long Phan, Trieu H. Trinh, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong},
title = {MTet: Multi-domain Translation for English and Vietnamese},
journal = {https://github.com/vietai/mTet},
year = {2022},
} | MTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of
texts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,
literature, news, and poems.
This dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality
English-Vietnamese sentence pairs on various domains. | false | 1 | false | albertvillanova/mtet | 2022-10-08T07:42:34.000Z | null | false | 1cad77bdc16e9965ba15285d5fc9ca347d6cec3a | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"language:vi",
"license:cc-by-nc-sa-4.0",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|bible_para",
"source_datasets:extended|kde4",
"source_dataset... | https://huggingface.co/datasets/albertvillanova/mtet/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- vi
license:
- cc-by-nc-sa-4.0
multilinguality:
- translation
pretty_name: MTet
size_categories:
- 1M<n<10M
source_datasets:
- original
- extended|bible_para
- extended|kde4
- extended|opus_gnome
- extended|open_subtitles
- extended|tatoeba
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for MTet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://translate.vietai.org/
- **Repository:** https://github.com/vietai/mTet
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MTet (Multi-domain Translation for English-Vietnamese) dataset contains roughly 4.2 million English-Vietnamese pairs of
texts, ranging across multiple different domains such as medical publications, religious texts, engineering articles,
literature, news, and poems.
This dataset extends our previous SAT (Style Augmented Translation) dataset (v1.0) by adding more high-quality
English-Vietnamese sentence pairs on various domains.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
The languages in the dataset are:
- Vietnamese (`vi`)
- English (`en`)
## Dataset Structure
### Data Instances
```
{
'translation': {
'en': 'He said that existing restrictions would henceforth be legally enforceable, and violators would be fined.',
'vi': 'Ông nói những biện pháp hạn chế hiện tại sẽ được nâng lên thành quy định pháp luật, và những ai vi phạm sẽ chịu phạt.'
}
}
```
### Data Fields
- `translation`:
- `en`: Parallel text in English.
- `vi`: Parallel text in Vietnamese.
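Each example wraps the parallel texts in a `translation` dict keyed by language code; a minimal sketch unpacking the instance shown above:

```python
# The data instance shown above.
example = {
    "translation": {
        "en": "He said that existing restrictions would henceforth be legally enforceable, and violators would be fined.",
        "vi": "Ông nói những biện pháp hạn chế hiện tại sẽ được nâng lên thành quy định pháp luật, và những ai vi phạm sẽ chịu phạt.",
    }
}

def to_pair(ex):
    """Return the (English, Vietnamese) texts of one translation example."""
    t = ex["translation"]
    return t["en"], t["vi"]

en, vi = to_pair(example)
print(en[:7])  # -> He said
```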
### Data Splits
The dataset is in a single "train" split.
| | train |
|--------------------|--------:|
| Number of examples | 4163853 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@article{mTet2022,
author = {Chinh Ngo, Hieu Tran, Long Phan, Trieu H. Trinh, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong},
title = {MTet: Multi-domain Translation for English and Vietnamese},
journal = {https://github.com/vietai/mTet},
year = {2022},
}
```
### Contributions
Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.
|
StanBienaives | null | @InProceedings{huggingface:dataset,
title = {French Fiscal texts},
author={Stan Bienaives
},
year={2022}
} | This dataset is an extraction from the OPENDATA/JADE. A list of case laws from the French court "Conseil d'Etat". | false | 9 | false | StanBienaives/french-open-fiscal-texts | 2022-10-25T10:03:56.000Z | null | false | e4b81eb76e142bbe07326db59b0e77c9a0f0b831 | [] | [
"annotations_creators:no-annotation",
"language_creators:other",
"language:fr-FR",
"license:cc0-1.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:summarization",
"task_categories:feature-extraction"
] | https://huggingface.co/datasets/StanBienaives/french-open-fiscal-texts/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- fr-FR
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: french-open-fiscal-texts
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- feature-extraction
task_ids: []
---
# Dataset Card for french-open-fiscal-texts
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://echanges.dila.gouv.fr/OPENDATA/JADE/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is an extraction from OPENDATA/JADE: a list of case laws from the French court "Conseil d'Etat".
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
fr-FR
## Dataset Structure
### Data Instances
```json
{
"file": "CETATEXT000007584427.xml",
"title": "Cour administrative d'appel de Marseille, 3�me chambre - formation � 3, du 21 octobre 2004, 00MA01080, in�dit au recueil Lebon",
"summary": "",
"content": "Vu la requête, enregistrée le 22 mai 2000, présentée pour M. Roger X, par Me Luherne, élisant domicile ...), et les mémoires complémentaires en date des 28 octobre 2002, 22 mars 2004 et 16 septembre 2004 ; M. X demande à la Cour :\n\n\n \n 11/ d'annuler le jugement n° 951520 en date du 16 mars 2000 par lequel le Tribunal administratif de Montpellier a rejeté sa requête tendant à la réduction des cotisations supplémentaires à l'impôt sur le revenu et des pénalités dont elles ont été assorties, auxquelles il a été assujetti au titre des années 1990, 1991 et 1992 ;\n\n\n \n 22/ de prononcer la réduction desdites cotisations ;\n\n\n \n 3°/ de condamner de l'Etat à lui verser une somme de 32.278 francs soit 4.920,75 euros"
}
```
### Data Fields
`file`: identifier on the JADE OPENDATA file
`title`: Name of the law case
`summary`: Summary provided by JADE (may be missing)
`content`: Text content of the case law
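Since the `summary` field can be empty (as in the data instance above), a minimal sketch — assuming records shaped like that instance; the record dicts below are illustrative placeholders, not actual corpus data — for keeping only the records usable for summarization might look like:

```python
# Hedged sketch: filter records usable for summarization, i.e. those whose
# JADE "summary" field is non-empty. The records below are placeholders
# shaped like the data instance shown above.

def with_summaries(records):
    """Keep only records that carry a non-empty summary."""
    return [r for r in records if r.get("summary", "").strip()]

records = [
    {"file": "a.xml", "title": "t1", "summary": "", "content": "..."},
    {"file": "b.xml", "title": "t2", "summary": "Résumé.", "content": "..."},
]

usable = with_summaries(records)  # keeps only the second record
```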
### Data Splits
train
test
## Dataset Creation
### Curation Rationale
This dataset is an attempt to gather multiple tax-related French legal texts.
The primary intent is to build models that summarize law cases.
### Source Data
#### Initial Data Collection and Normalization
Collected from the https://echanges.dila.gouv.fr/OPENDATA/
- Filtering xml files containing "Code général des impôts" (tax related)
- Extracting content, summary, identifier, title
#### Who are the source language producers?
DILA
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
JeunesseAfricaine | null | null | null | false | 1 | false | JeunesseAfricaine/sheng_nlu | 2022-04-06T13:03:27.000Z | null | false | 1d8fa78d643f0207bfac31f2e42c056769e16fed | [] | [
"license:mit"
] | https://huggingface.co/datasets/JeunesseAfricaine/sheng_nlu/resolve/main/README.md | ---
license: mit
---
## Common User Intentions
#### Greetings
- Wasemaje
- uko aje btw
- oyah...
- Form
- Alafu niaje
- Poa Sana Mambo
- Niko poa
- Pia Mimi Niko salama
- Hope siku yako iko poa
- Siko poa kabisa
- Nimekuwa poa
- Umeshindaje
- Hope uko poa
- uko poa
- Sasa
- Vipi vipi
- Niko salama
- ..its been long.
- Nko fiti
- niko fiti
- Nmeamka fity..
- Vipi
- Unasemaje
- Aaaah...itakuaje sasaa..
- .iz vipi..itakuaje..
- Form ni gani bro...
- iz vipi
#### Affirm
- Hapo sawa...
- Fty
- sai
- Hio si ni better hadi
- Imebidi.
- Eeeh mazee
- mazeee
- Fity fity
- Oooh poapoa
- Yap
- Inakaa poa
- Yeah itabidi
- Ooooh...
- Si ndo nadaaiii😅
- Oooh sawa
- Okay sawa basi
- Venye utaamua ni sawa
- Sawa wacha tungoje
- lazima
- apa umenena
- Sawa basi
- walai
- Oooh
- inaweza mbaya
- itaweza mbaya
- ni sawa
- Iko poa
- Iko tu sawa hivo
- ilinbamba.
- Nimemada
- Btw hao ata mimi naona
- but inaeleweka
- pia mimi
- iende ikiendaga
- We jua ivo
- Hata Mimi
- Nataka
- Ooh.
- Chezea tu hapo
- isorait
- Ata yako ni kali
- Ntaicheck out Leo
- hmm. Okay
- Mimi sina shida
- ooooh io iko fity...
- hii ni ngori
- maze
- sawa
- banaa
- Aaah kumbe
- Safiii..
- Sasawa
- hio ni fityyy
- Yeah nliona
- Vizii...
- Eeeeh nmekua naiona...
- Yea
- Haina nomA
- katambe
- accept basi
- ni sawa
- Issaplan
- nmeget
- nimedai tu
- eeh
- Hio ni poa
- nadai sa hii
- Eeeeh
- mi nadai tu
- firi
- Hapo freshi
#### Deny
- Sipendi
- aih
- Nimegive up
- Yangu bado
- siezi make
- Sina😊
- Haileti
- Haiwezi
- Io sikuwa nikwambie
- Sikuwa
- Wacha ata
- ata sijui
- Sijasema
- Sijai
- hiyo haiezi
- Bado.
- Uku tricks...
- sidai
- achana nayo
- ziii
- si fityy
- Nimekataa Mimi
- Sijui
- Aiwezekani
- Bado sioni
#### Courtesy
- Imefika... shukran
- Haina ngori
- Inafaa hivo
- Utakuwa umeniokolea manzee
- Karibu
- Nyc one
- Hakuna pressure
- Gai. Pole
- Usijali I will
- Nimekufeel hapo
- Waah izaa
- Pole lkn
- Pole
- plz
- okay...pole
- thanks for pulling up lkn..
- shukran
- Eeeeh nyc
- Thanx for the info
- Uko aje
- haina pressure
- eih, iko fiti.
- vitu kama hizo
- sahii
#### Asking clarification
- check alafu unishow
- Sasa msee akishabuy anafanya aje
- Umeenda wapi
- nlikuwa nadai
- Nlikua nataka
- Ulipata
- leo jioni utakuwa?
- uko
- umelostia wapi?
- ingine?
- hii inamaanisha?
- Wewe Sasa ni nani?
- warrathos
- kwani nisiende sasa
- unadai zingine?
- Kwani
- Haiya...
- Unadu?
- inakuanga mangapiii...
- Kuna nn
- Nauliza
- Hakuna kwanini
- Nadai kujua what
- Kwanini hakuna
- Kwa nini hakuna
- Uliniambia
- Mbona
- Nlikua nashangaa
- Unadu nini
- Oooh mara moja
- Unaeza taka?
- unaeza make?
- Umeipata?
- wapi kwingine tena
- kuna yenye natafuta
- Sijajua bado
- Niko na ingine
- ulikuwa unataka
- ulinishow?
- ulinsho
- Umepata
- Ata stage hakuna?
- Huku hakuna kibandaski?
- Sai ndio uko available
- Ivo
- Inaeza
- Naeza
- Btw, nikuulize
- Uliza
- hadi sa hii
- Nauliza ndio nijue kama bado iko
- Btw ile hoteli tulienda na wewe apo kiimbo huendangi?
#### Comedy
- Ata kama
- Wasikupee pressure
- umeanza jokes
- Ulisumbua sana
- Unaeza niambia ivo
- usinicheke
- Hakuna😁😁kwanini
- aki wewe.
- naskia mpaka ulipiga sherehe
- sio?
- uko na kakitu
- Aaaaii
- .uko fity nayo..
- icome through mbaya...
#### Small talk
- Kuchil tu bana
- Inafaa hivo
- Acha niskizie
- Skujua hii stuff
- nacheza chini
- hii imesink deep.
- mi Niko
- khai, gai, ghaiye
- Woiye
- ndo nmeland
- Nimekuona
- Kaaai
- Nambie
- bado nashangaa aliipull thru maze
- Niambie
- Najua uko kejani
- Bado uko
- Utakuwa sawa
- Niko poa ata kama uniliacha hanging jana
- issa deal
- Walai io nilijua utasema
- hujawai sahau hii
- Sijajua bado
- Ni maroundi tu
- Enyewe imetoka mbali
- Hadi nimekuwa Tao leo
- Ni mnoma mbaya
- Anyway mambo ni polepole
- Imagine
- Sina la kusema
- Sai
- Najua umeboeka
#### Resolute
- Nataka leo
- hayo ndo maisha Sasa
- vile itakuja maze
- Acha tu
- Waaah Leo haiwezi
- Ni sawa tu
- Imeisha
- Itabidi
- siendagi
- siezi kuangusha
- nachangamkia hii
- Weno ivi...
- Hii price iko poa...
#### implore
- but nimetry tena
- aminia tu
- Ebu try
- Alafu
- naona hufeel kuongea
- Watu hawaongei?
- Itabidi tu umesort
- Naona huna shughuli yangu
- tufanye pamoja
- khai, gai, ghaiye
- so kalunch
- ama?
- Sahii ni the best time
- Kwanza sahii
- hii weekend
- Kaanza next weekend ni fity
- this weekend
- Acha ntacheki
- izo sasa..
- Acha tuone
- So tunafikanga ivor morning mapemaa
- naona uko rada
- mapema kiasi
- nimchapie niskie...
- Naisaka walai
#### Bye
- Ama kesho
- Ngoja nta rudi baadaye
- nacheki tu rada ya kesho
- Nitakusort kesho morning
- Ni hivo nimekafunga
- nitakushow
- Nextweek ndio inaeza
- Ntakuchapia kama ntamake
- Freshi
#### Sample Bot Responses
- tulia tu hana mambo mob
- si you know how we do it
- Form ni gani
- Oooh nmekuget
- znaeza kupea stress
- Hues make leo
- nshow password
- Nmeichangamkia design ya ngori
- Oooh nmekuget...
- ilicome through
- Naisaka walai
- kesho ntakuchapia
- nichapie niskie
- Aaaah..😅
- Alafu ile story ya
- Ooooh ebu ntasaka
- Saa ngapi uko free..
- Ama unasema ya
- Safiii..naona uko rada
- Ilkulemea🤣
- Acha ntacheki
- imeharibia form..
- Nmeitafuta
- Ndio nimeget
- inaeza saidia mtu
- Email yako ni gani
- Wacha niangalie
- nangoja ulipe
- nimeshikika
- Sawa tuma email
- Kwani ulimwambia nini
- Najua ata most of the time
- mara most btw
- Unajua tu ni risky
- unadai tu niseme mi ni robot
- kwanini
- ndio usiulizwe
- Ukiangalia niambie
- Last time ukinipigia nilikuwa nimeenda kuoshwa
- ikishaenda kwa mganga hairudi
- Hata Mimi ni hayo mambo madogo madogo ndio imenieka.
- We jua nafikirianga mingi ni venye zingine huwa sisemi
- Na najua
- unarelax
- mm ata sko tensed
- sahii ata ni risky
- but ntakuchapia
- oooh waah..
- aaaah ata ww
- hii si fityy
- maze itabidi tudunde virtual
- tunadunda wapiiii..
- kwani sa mi ndo nafaa kumshow kila time coz this is not the first time namwambia🤦♀️
- Wacha hizo.
- Yeah niko hapa
- Niko
- Give me sometime.
- Maze...nmecheza ki mimi
- Uko busy
- Chill kiasi
- Wacha nikusort
- ntakushow
- looking for you hupatikani
- Mnaniogopa ama
- Wewe unapenda free
- Nakusort sai chill mazee
- Kiasi
- relax mkubwa
- Sahii uko sorted sindio
- Ni juu
- bringing the future to us
- hiyo ni form yangu daily
- Ata mimi sitaki ufala 😂
- Imagine
- Uko sawa
- Uko sawa ama unaitaji ingine
- ka unaeza
- utanichapia tu
- unasemaje lakini
- Niulize
- Uko na number
- Ukiboeka wewe nitext
- unadai sa hii ?
- skuwa nimeona
- Acha nicheki
- Ni Friday bana
- Niko chilled tu
- Unadai aje.
- Utanichapia basi
- Umenyamaza sana bana
- imekam through ama
- Nategea umalize ndo nikushow ile form
- Guidance tu kiasi
- Tutadiscuss pia stori
- Nakwelewa
- tujue niaje
- itaweza mbaya
- Kuna hopes za kulearn
|
met | null | null | null | false | 1 | false | met/MetaIct | 2022-04-06T14:09:52.000Z | null | false | 556fad8e53bba25cc7d41d3204dca87254bc6f5d | [] | [
"license:other"
] | https://huggingface.co/datasets/met/MetaIct/resolve/main/README.md | ---
license: other
---
|
Jeneral | null | @TECHREPORT{FER2013 dataset,
author = {Prince Awuah Baffour},
title = {Facial Emotion Detection},
institution = {},
year = {2022}
} | null | false | 12 | false | Jeneral/fer-2013 | 2022-04-06T18:24:30.000Z | null | false | 3a46cbfae3f5b348449335f300666a0ae330f121 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/Jeneral/fer-2013/resolve/main/README.md | ---
license: apache-2.0
---
|
ChainYo | null | null | null | false | 1 | false | ChainYo/rvl-cdip-questionnaire | 2022-04-06T16:45:26.000Z | null | false | 70b2d68664a3c8e841f426cf8e43f4f669a75017 | [] | [
"license:other"
] | https://huggingface.co/datasets/ChainYo/rvl-cdip-questionnaire/resolve/main/README.md | ---
license: other
---
⚠️ This is only a subset of the original dataset, containing only the `questionnaire` class.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/. |
ChainYo | null | null | null | false | 38 | false | ChainYo/rvl-cdip-invoice | 2022-04-06T16:57:20.000Z | null | false | fad615c9ceaecb4476b0a01f29c0a15b276b3a2b | [] | [
"license:other"
] | https://huggingface.co/datasets/ChainYo/rvl-cdip-invoice/resolve/main/README.md | ---
license: other
---
⚠️ This is only a subset of the original dataset, containing only the `invoice` class.
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
For questions and comments please contact Adam Harley (aharley@scs.ryerson.ca).
The full dataset can be found [here](https://www.cs.cmu.edu/~aharley/rvl-cdip/).
## Labels
0: letter
1: form
2: email
3: handwritten
4: advertisement
5: scientific report
6: scientific publication
7: specification
8: file folder
9: news article
10: budget
11: invoice
12: presentation
13: questionnaire
14: resume
15: memo
## Citation
This dataset is from this [paper](https://www.cs.cmu.edu/~aharley/icdar15/) `A. W. Harley, A. Ufkes, K. G. Derpanis, "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval," in ICDAR, 2015`
## License
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
## References
1. D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard, "Building a test collection for complex document information processing," in Proc. 29th Annual Int. ACM SIGIR Conference (SIGIR 2006), pp. 665-666, 2006
2. The Legacy Tobacco Document Library (LTDL), University of California, San Francisco, 2007. http://legacy.library.ucsf.edu/. |
ukr-models | null | null | Large silver standard Ukrainian corpus annotated with morphology tags, syntax trees and PER, LOC, ORG NER-tags. | false | 3 | false | ukr-models/Ukr-Synth | 2022-10-24T18:18:01.000Z | null | false | 78a8da22c59e959592d3bba2ef6dacc08f877049 | [] | [
"annotations_creators:machine-generated",
"language_creators:found",
"language:uk",
"license:mit",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:part-of-speech"
] | https://huggingface.co/datasets/ukr-models/Ukr-Synth/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- uk
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- parsing
- part-of-speech
pretty_name: Ukrainian synthetic dataset in conllu format
---
# Dataset Card for Ukr-Synth
## Dataset Description
### Dataset Summary
Large silver-standard Ukrainian corpus annotated with morphology tags, syntax trees, and PER, LOC, ORG NER tags.
It represents a subsample of the [Leipzig Corpora Collection for Ukrainian Language](https://wortschatz.uni-leipzig.de/en/download/Ukrainian). The source texts are newspaper texts split into sentences and shuffled. The sentences are annotated using transformer-based models trained on gold-standard Ukrainian language datasets.
### Languages
Ukrainian
## Dataset Structure
### Data Splits
| name |train |validation|
|---------|-------:|---------:|
|conll2003|1000000| 10000|
## Dataset Creation
### Source Data
Leipzig Corpora Collection:
D. Goldhahn, T. Eckart & U. Quasthoff: Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages.
In: Proceedings of the 8th International Language Resources and Evaluation (LREC'12), 2012
## Additional Information
### Licensing Information
MIT License
Copyright (c) 2022
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. |
openclimatefix | null | null | null | false | 1 | false | openclimatefix/era5 | 2022-09-07T16:25:48.000Z | null | false | 66651ce605381e1e099d82f992864db3396870e3 | [] | [
"license:mit"
] | https://huggingface.co/datasets/openclimatefix/era5/resolve/main/README.md | ---
license: mit
---
|
ucl-snlp-group-11 | null | null | null | false | 1 | false | ucl-snlp-group-11/guardian_crosswords | 2022-04-06T20:51:18.000Z | null | false | 3e483c44d3dd6525f3b9662a426ca047179868f0 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/ucl-snlp-group-11/guardian_crosswords/resolve/main/README.md | ---
license: afl-3.0
---
|
bible-nlp | null | \
@InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | false | 2 | false | bible-nlp/biblenlp-corpus | 2022-08-25T17:02:11.000Z | null | false | ec6549dd0e2ce12faf062fb4292857169b8b12d1 | [] | [
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"language:aau",
"language:aaz",
"language:abx",
"language:aby",
"language:acf",
"language:acu",
"language:adz",
"language:aey",
"language:agd",
"language:agg",
"language:agm",
"language:agn",
"language:agr",
"l... | https://huggingface.co/datasets/bible-nlp/biblenlp-corpus/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- aau
- aaz
- abx
- aby
- acf
- acu
- adz
- aey
- agd
- agg
- agm
- agn
- agr
- agu
- aia
- ake
- alp
- alq
- als
- aly
- ame
- amk
- amp
- amr
- amu
- anh
- anv
- aoi
- aoj
- apb
- apn
- apu
- apy
- arb
- arl
- arn
- arp
- aso
- ata
- atb
- atd
- atg
- auc
- aui
- auy
- avt
- awb
- awk
- awx
- azg
- azz
- bao
- bbb
- bbr
- bch
- bco
- bdd
- bea
- bel
- bgs
- bgt
- bhg
- bhl
- big
- bjr
- bjv
- bkd
- bki
- bkq
- bkx
- bla
- blw
- blz
- bmh
- bmk
- bmr
- bnp
- boa
- boj
- bon
- box
- bqc
- bre
- bsn
- bsp
- bss
- buk
- bus
- bvr
- bxh
- byx
- bzd
- bzj
- cab
- caf
- cao
- cap
- car
- cav
- cax
- cbc
- cbi
- cbk
- cbr
- cbs
- cbt
- cbu
- cbv
- cco
- ces
- cgc
- cha
- chd
- chf
- chk
- chq
- chz
- cjo
- cjv
- cle
- clu
- cme
- cmn
- cni
- cnl
- cnt
- cof
- con
- cop
- cot
- cpa
- cpb
- cpc
- cpu
- crn
- crx
- cso
- cta
- ctp
- ctu
- cub
- cuc
- cui
- cut
- cux
- cwe
- daa
- dad
- dah
- ded
- deu
- dgr
- dgz
- dif
- dik
- dji
- djk
- dob
- dwr
- dww
- dwy
- eko
- emi
- emp
- eng
- epo
- eri
- ese
- etr
- faa
- fai
- far
- for
- fra
- fuf
- gai
- gam
- gaw
- gdn
- gdr
- geb
- gfk
- ghs
- gia
- glk
- gmv
- gng
- gnn
- gnw
- gof
- grc
- gub
- guh
- gui
- gul
- gum
- guo
- gvc
- gvf
- gwi
- gym
- gyr
- hat
- haw
- hbo
- hch
- heb
- heg
- hix
- hla
- hlt
- hns
- hop
- hrv
- hub
- hui
- hus
- huu
- huv
- hvn
- ign
- ikk
- ikw
- imo
- inb
- ind
- ino
- iou
- ipi
- ita
- jac
- jao
- jic
- jiv
- jpn
- jvn
- kaq
- kbc
- kbh
- kbm
- kdc
- kde
- kdl
- kek
- ken
- kew
- kgk
- kgp
- khs
- kje
- kjs
- kkc
- kky
- klt
- klv
- kms
- kmu
- kne
- knf
- knj
- kos
- kpf
- kpg
- kpj
- kpw
- kqa
- kqc
- kqf
- kql
- kqw
- ksj
- ksr
- ktm
- kto
- kud
- kue
- kup
- kvn
- kwd
- kwf
- kwi
- kwj
- kyf
- kyg
- kyq
- kyz
- kze
- lac
- lat
- lbb
- leu
- lex
- lgl
- lid
- lif
- lww
- maa
- maj
- maq
- mau
- mav
- maz
- mbb
- mbc
- mbh
- mbl
- mbt
- mca
- mcb
- mcd
- mcf
- mcp
- mdy
- med
- mee
- mek
- meq
- met
- meu
- mgh
- mgw
- mhl
- mib
- mic
- mie
- mig
- mih
- mil
- mio
- mir
- mit
- miz
- mjc
- mkn
- mks
- mlh
- mlp
- mmx
- mna
- mop
- mox
- mph
- mpj
- mpm
- mpp
- mps
- mpx
- mqb
- mqj
- msb
- msc
- msk
- msm
- msy
- mti
- muy
- mva
- mvn
- mwc
- mxb
- mxp
- mxq
- mxt
- myu
- myw
- myy
- mzz
- nab
- naf
- nak
- nay
- nbq
- nca
- nch
- ncj
- ncl
- ncu
- ndj
- nfa
- ngp
- ngu
- nhg
- nhi
- nho
- nhr
- nhu
- nhw
- nhy
- nif
- nin
- nko
- nld
- nlg
- nna
- nnq
- not
- nou
- npl
- nsn
- nss
- ntj
- ntp
- nwi
- nyu
- obo
- ong
- ons
- ood
- opm
- ote
- otm
- otn
- otq
- ots
- pab
- pad
- pah
- pao
- pes
- pib
- pio
- pir
- pjt
- plu
- pma
- poe
- poi
- pon
- poy
- ppo
- prf
- pri
- ptp
- ptu
- pwg
- quc
- quf
- quh
- qul
- qup
- qvc
- qve
- qvh
- qvm
- qvn
- qvs
- qvw
- qvz
- qwh
- qxh
- qxn
- qxo
- rai
- rkb
- rmc
- roo
- rop
- rro
- ruf
- rug
- rus
- sab
- san
- sbe
- seh
- sey
- sgz
- shj
- shp
- sim
- sja
- sll
- smk
- snc
- snn
- sny
- som
- soq
- spa
- spl
- spm
- sps
- spy
- sri
- srm
- srn
- srp
- srq
- ssd
- ssg
- ssx
- stp
- sua
- sue
- sus
- suz
- swe
- swh
- swp
- sxb
- tac
- tav
- tbc
- tbl
- tbo
- tbz
- tca
- tee
- ter
- tew
- tfr
- tgp
- tif
- tim
- tiy
- tke
- tku
- tna
- tnc
- tnn
- tnp
- toc
- tod
- toj
- ton
- too
- top
- tos
- tpt
- trc
- tsw
- ttc
- tue
- tuo
- txu
- ubr
- udu
- ukr
- uli
- ura
- urb
- usa
- usp
- uvl
- vid
- vie
- viv
- vmy
- waj
- wal
- wap
- wat
- wbp
- wed
- wer
- wim
- wmt
- wmw
- wnc
- wnu
- wos
- wrk
- wro
- wsk
- wuv
- xav
- xed
- xla
- xnn
- xon
- xsi
- xtd
- xtm
- yaa
- yad
- yal
- yap
- yaq
- yby
- ycn
- yka
- yml
- yre
- yuj
- yut
- yuw
- yva
- zaa
- zab
- zac
- zad
- zai
- zaj
- zam
- zao
- zar
- zas
- zat
- zav
- zaw
- zca
- zia
- ziw
- zos
- zpc
- zpl
- zpo
- zpq
- zpu
- zpv
- zpz
- zsr
- ztq
- zty
- zyp
- be
- br
- cs
- ch
- zh
- de
- en
- eo
- fr
- ht
- he
- hr
- id
- it
- ja
- la
- nl
- ru
- sa
- so
- es
- sr
- sv
- to
- uk
- vi
license:
- cc-by-4.0
- other
multilinguality:
- translation
- multilingual
pretty_name: biblenlp-corpus
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
# Dataset Card for BibleNLP Corpus
### Dataset Summary
Partial and complete Bible translations in 615 languages, aligned by verse.
### Languages
aau, aaz, abx, aby, acf, acu, adz, aey, agd, agg, agm, agn, agr, agu, aia, ake, alp, alq, als, aly, ame, amk, amp, amr, amu, anh, anv, aoi, aoj, apb, apn, apu, apy, arb, arl, arn, arp, aso, ata, atb, atd, atg, auc, aui, auy, avt, awb, awk, awx, azg, azz, bao, bbb, bbr, bch, bco, bdd, bea, bel, bgs, bgt, bhg, bhl, big, bjr, bjv, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bnp, boa, boj, bon, box, bqc, bre, bsn, bsp, bss, buk, bus, bvr, bxh, byx, bzd, bzj, cab, caf, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, crn, crx, cso, cta, ctp, ctu, cub, cuc, cui, cut, cux, cwe, daa, dad, dah, ded, deu, dgr, dgz, dif, dik, dji, djk, dob, dwr, dww, dwy, eko, emi, emp, eng, epo, eri, ese, etr, faa, fai, far, for, fra, fuf, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, gia, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, gul, gum, guo, gvc, gvf, gwi, gym, gyr, hat, haw, hbo, hch, heb, heg, hix, hla, hlt, hns, hop, hrv, hub, hui, hus, huu, huv, hvn, ign, ikk, ikw, imo, inb, ind, ino, iou, ipi, ita, jac, jao, jic, jiv, jpn, jvn, kaq, kbc, kbh, kbm, kdc, kde, kdl, kek, ken, kew, kgk, kgp, khs, kje, kjs, kkc, kky, klt, klv, kms, kmu, kne, knf, knj, kos, kpf, kpg, kpj, kpw, kqa, kqc, kqf, kql, kqw, ksj, ksr, ktm, kto, kud, kue, kup, kvn, kwd, kwf, kwi, kwj, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, leu, lex, lgl, lid, lif, lww, maa, maj, maq, mau, mav, maz, mbb, mbc, mbh, mbl, mbt, mca, mcb, mcd, mcf, mcp, mdy, med, mee, mek, meq, met, meu, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkn, mks, mlh, mlp, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, muy, mva, mvn, mwc, mxb, mxp, mxq, mxt, myu, myw, myy, mzz, nab, naf, nak, nay, nbq, nca, nch, ncj, ncl, ncu, ndj, nfa, ngp, ngu, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nin, nko, nld, nlg, nna, nnq, not, nou, npl, nsn, nss, 
ntj, ntp, nwi, nyu, obo, ong, ons, ood, opm, ote, otm, otn, otq, ots, pab, pad, pah, pao, pes, pib, pio, pir, pjt, plu, pma, poe, poi, pon, poy, ppo, prf, pri, ptp, ptu, pwg, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, rkb, rmc, roo, rop, rro, ruf, rug, rus, sab, san, sbe, seh, sey, sgz, shj, shp, sim, sja, sll, smk, snc, snn, sny, som, soq, spa, spl, spm, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, tav, tbc, tbl, tbo, tbz, tca, tee, ter, tew, tfr, tgp, tif, tim, tiy, tke, tku, tna, tnc, tnn, tnp, toc, tod, toj, ton, too, top, tos, tpt, trc, tsw, ttc, tue, tuo, txu, ubr, udu, ukr, uli, ura, urb, usa, usp, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbp, wed, wer, wim, wmt, wmw, wnc, wnu, wos, wrk, wro, wsk, wuv, xav, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yml, yre, yuj, yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zar, zas, zat, zav, zaw, zca, zia, ziw, zos, zpc, zpl, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp
## Dataset Structure
### Data Fields
**translation**
- **languages** - an N length list of the languages of the translations, sorted alphabetically
- **translation** - an N length list with the translations each corresponding to the language specified in the above field
**files**
- **lang** - an N length list of the languages of the files, in order of input
- **file** - an N length list of the filenames from the corpus on github, each corresponding with the lang above
**ref** - the verse(s) contained in the record, as a list, with each represented as: ``<three-letter book code> <chapter number>:<verse number>``
**licenses** - an N length list of licenses, corresponding to the list of files above
**copyrights** - information on copyright holders, corresponding to the list of files above
### Usage
The dataset loading script requires `tqdm`, `ijson`, and `numpy` to be installed.
Specify the languages to be paired as a list of ISO 639-3 language codes, such as ``languages = ['eng', 'fra']``.
By default, the script returns individual verse pairs as well as verses covering a full range. If only the individual verse pairs are desired, use ``pair='single'``. If only the maximum-range pairing is desired, use ``pair='range'`` (for example, if one text uses the verse range covering GEN 1:1-3, all texts would return only the full-length pairing).
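As a sketch of how a loaded record might be consumed — assuming the field layout described above, with placeholder verse texts rather than actual corpus data — pairing two languages could look like:

```python
# Hedged sketch: extract an aligned (eng, fra) verse pair from one record.
# The "translation" field holds parallel lists of language codes and texts,
# with languages sorted alphabetically as described above.

def get_pair(record, src, tgt):
    """Return the (src, tgt) verse texts from one record."""
    tr = record["translation"]
    langs, texts = tr["languages"], tr["translation"]
    return texts[langs.index(src)], texts[langs.index(tgt)]

record = {
    "translation": {
        "languages": ["eng", "fra"],
        "translation": ["In the beginning ...", "Au commencement ..."],
    },
    "ref": ["GEN 1:1"],
}

eng, fra = get_pair(record, "eng", "fra")
```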
## Sources
https://github.com/BibleNLP/ebible-corpus |
iluvvatar | null | null | null | false | 15 | false | iluvvatar/NEREL | 2022-10-23T05:37:30.000Z | null | false | e3c0b8bb3ef842f11f8b5420e998833f75f7e26b | [] | [
"language:ru",
"multilinguality:monolingual",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/iluvvatar/NEREL/resolve/main/README.md | ---
language:
- ru
multilinguality:
- monolingual
pretty_name: NEREL
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
---
# NEREL dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
The NEREL dataset (https://doi.org/10.48550/arXiv.2108.13112) is
a Russian dataset for named entity recognition and relation extraction.
NEREL is significantly larger than existing Russian datasets:
to date it contains 56K annotated named entities and 39K annotated relations.
Its important difference from previous datasets is annotation of nested named
entities, as well as relations within nested entities and at the discourse
level. NEREL can facilitate development of novel models that can extract
relations between nested named entities, as well as relations on both sentence
and document levels. NEREL also contains the annotation of events involving
named entities and their roles in the events.
You can see the full list of entity types in the "ent_types" subset
and the full list of relation types in the "rel_types" subset.
## Dataset Structure
There are three "configs" or "subsets" of the dataset.
Using
`load_dataset('MalakhovIlya/NEREL', 'ent_types')['ent_types']`
you can download the list of entity types
(`Dataset({features: ['type', 'link']})`),
where "link" is the name of the knowledge base used in the entity linking task.
Using
`load_dataset('MalakhovIlya/NEREL', 'rel_types')['rel_types']`
you can download the list of relation types
(`Dataset({features: ['type', 'arg1', 'arg2']})`),
where "arg1" and "arg2" are lists of entity types that can take part in that
"type" of relation. \<ENTITY> stands for any type.
Using
`load_dataset('MalakhovIlya/NEREL', 'data')` or `load_dataset('MalakhovIlya/NEREL')`
you can download the data itself:
a DatasetDict with 3 splits: "train", "test", and "dev".
Each of them contains text documents with annotated entities, relations, and
links.
"entities" are used in the named-entity recognition task (see https://en.wikipedia.org/wiki/Named-entity_recognition).
"relations" are used in the relationship extraction task (see https://en.wikipedia.org/wiki/Relationship_extraction).
"links" are used in the entity linking task (see https://en.wikipedia.org/wiki/Entity_linking).
Each entity is represented by a string of the following format:
`"<id>\t<type> <start> <stop>\t<text>"`, where
`<id>` is an entity id,
`<type>` is one of entity types,
`<start>` is a position of the first symbol of entity in text,
`<stop>` is the last symbol position in text +1.
Each relation is represented by a string of the following format:
`"<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>"`, where
`<id>` is a relation id,
`<arg1_id>` and `<arg2_id>` are entity ids.
Each link is represented by a string of the following format:
`"<id>\tReference <ent_id> <link>\t<text>"`, where
`<id>` is a link id,
`<ent_id>` is an entity id,
`<link>` is a reference to knowledge base entity (example: "Wikidata:Q1879675" if link exists, else "Wikidata:NULL"),
`<text>` is a name of entity in knowledge base if link exists, else empty string.
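A minimal parsing sketch for these line formats — assuming the tab-separated layouts exactly as described above; the example lines are illustrative placeholders, not taken from the corpus — could be:

```python
# Hedged sketch: parse one entity line ("<id>\t<type> <start> <stop>\t<text>")
# and one relation line ("<id>\t<type> Arg1:<arg1_id> Arg2:<arg2_id>").

def parse_entity(line):
    ent_id, info, text = line.split("\t")
    ent_type, start, stop = info.rsplit(" ", 2)  # "<type> <start> <stop>"
    return {"id": ent_id, "type": ent_type,
            "start": int(start), "stop": int(stop), "text": text}

def parse_relation(line):
    rel_id, info = line.split("\t")
    rel_type, arg1, arg2 = info.split(" ")
    return {"id": rel_id, "type": rel_type,
            "arg1": arg1.split(":", 1)[1],   # strip "Arg1:" prefix
            "arg2": arg2.split(":", 1)[1]}   # strip "Arg2:" prefix

ent = parse_entity("T1\tPERSON 0 4\tИван")
rel = parse_relation("R1\tWORKS_AS Arg1:T1 Arg2:T2")
```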
## Citation Information
```
@article{loukachevitch2021nerel,
  title={NEREL: A Russian Dataset with Nested Named Entities, Relations and Events},
  author={Loukachevitch, Natalia and Artemova, Ekaterina and Batura, Tatiana and Braslavski, Pavel and Denisov, Ilia and Ivanov, Vladimir and Manandhar, Suresh and Pugachev, Alexander and Tutubalina, Elena},
  journal={arXiv preprint arXiv:2108.13112},
  year={2021}
}
```
## Contacts
Malakhov Ilya
Telegram - https://t.me/noname_4710
|
mteb | null | null | null | false | 145 | false | mteb/reddit-clustering | 2022-09-27T19:13:31.000Z | null | false | b2805658ae38990172679479369a78b86de8c390 | [] | [
"language:en"
] | https://huggingface.co/datasets/mteb/reddit-clustering/resolve/main/README.md | ---
language:
- en
--- |
NLPC-UOM | null | null | null | false | 7 | false | NLPC-UOM/Sinhala-News-Category-classification | 2022-10-25T10:03:58.000Z | null | false | 7fb2f514ea683c5282dfec0a9672ece8de90ac50 | [] | [
"language_creators:crowdsourced",
"language:si",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"task_categories:text-classification"
] | https://huggingface.co/datasets/NLPC-UOM/Sinhala-News-Category-classification/resolve/main/README.md | ---
annotations_creators: []
language_creators:
- crowdsourced
language:
- si
license:
- mit
multilinguality:
- monolingual
pretty_name: sinhala-news-category-classification
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- text-classification
task_ids: []
---
This file contains news texts (sentences) belonging to 5 different news categories (political, business, technology, sports, and entertainment). The original dataset was released by Nisansa de Silva (*Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*). The original dataset has been processed and cleaned of single-word texts, English-only sentences, etc.
If you use this dataset, please cite {*Nisansa de Silva, Sinhala Text Classification: Observations from the Perspective of a Resource Poor Language, 2015*} and {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*} |
NLPC-UOM | null | null | null | false | 1 | false | NLPC-UOM/Sinhala-News-Source-classification | 2022-10-25T10:04:01.000Z | null | false | ac4d14eeb68efbef95e247542d4432ce674faeb1 | [] | [
"language_creators:crowdsourced",
"language:si",
"license:mit",
"multilinguality:monolingual",
"task_categories:text-classification"
] | https://huggingface.co/datasets/NLPC-UOM/Sinhala-News-Source-classification/resolve/main/README.md | ---
annotations_creators: []
language_creators:
- crowdsourced
language:
- si
license:
- mit
multilinguality:
- monolingual
pretty_name: sinhala-news-source-classification
size_categories: []
source_datasets: []
task_categories:
- text-classification
task_ids: []
---
This dataset contains Sinhala news headlines extracted from 9 news sources (websites) (Sri Lanka Army, Dinamina, GossipLanka, Hiru, ITN, Lankapuwath, NewsLK,
Newsfirst, World Socialist Web Site-Sinhala). This is a processed version of the corpus created by *Sachintha, D., Piyarathna, L., Rajitha, C., and Ranathunga, S. (2021). Exploiting parallel corpora to improve multilingual embedding based document and sentence alignment*. Single-word sentences and invalid characters have been removed from the originally extracted corpus, which was also subsampled to handle class imbalance.
If you use this dataset, please cite {*Dhananjaya et al. BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification, 2022*}
projecte-aina | null | @misc{degibert2022sequencetosequence,
title={Sequence-to-Sequence Resources for Catalan},
author={Ona de Gibert and Ksenia Kharitonova and Blanca Calvo Figueras and Jordi Armengol-Estapé and Maite Melero},
year={2022},
eprint={2202.06871},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | false | 1 | false | projecte-aina/gencata | 2022-11-10T12:48:53.000Z | null | false | 24f5bb9c5344449d8411f3ca94cb639e08e759e7 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:ca",
"language:en",
"license:cc0-1.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:text-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring"
] | https://huggingface.co/datasets/projecte-aina/gencata/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ca
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-classification
task_ids:
- semantic-similarity-scoring
- text-scoring
pretty_name: gencata
---
# Dataset Card for GEnCaTa
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:**[Quality versus Quantity: Building Catalan-English MT Resources](http://www.lrec-conf.org/proceedings/lrec2022/workshops/SIGUL/pdf/2022.sigul-1.8.pdf)
- **Point of Contact:** [Ona de Gibert Bonet](mailto:ona.degibert@bsc.es)
### Dataset Summary
GEnCaTa is a Catalan-English dataset annotated for Parallel Corpus Filtering for MT. It is extracted from a general-domain corpus crawled from the Catalan Government domains and subdomains. The dataset consists of 51,908 instances, each composed of a Catalan sentence, its English translation, and a label indicating whether the pair is valid for MT.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for parallel corpus filtering. This task consists of automatically filtering out badly aligned sentence pairs, or pairs that are not good enough for MT training.
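A toy sketch of what such a filter does: score each candidate pair and keep only the pairs above a threshold. The pairs, scores, and threshold here are invented for illustration; in practice the scores would come from a classifier trained on labeled data such as GEnCaTa:

```python
# Invented examples: (Catalan sentence, English sentence, alignment score).
pairs = [
    ("Bon dia a tothom.", "Good morning, everyone.", 0.92),
    ("El temps demà serà assolellat.", "Click here to subscribe.", 0.11),
]

THRESHOLD = 0.5  # illustrative cut-off
kept = [(ca, en) for ca, en, score in pairs if score >= THRESHOLD]
print(kept)  # -> [('Bon dia a tothom.', 'Good morning, everyone.')]
```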
### Languages
The dataset is in Catalan (`ca-CA`) and English (`en-GB`).
## Dataset Structure
### Data Instances
```
{
'ca': "- El vostre vehicle quedi immobilitzat per l'aigua",
'en': 'You must leave your car and head for higher ground when:',
'label': '0'
}
```
### Data Fields
- `ca` (str): Catalan sentence
- `en` (str): English sentence
- `label` (int): 0, if the sentences are not aligned, and 1, if they are aligned and valid for MT training.
### Data Splits
We split our dataset into train, dev and test splits (positive / negative samples):
- train: 23,897 / 8,011
- dev: 7,490 / 2,510
- test: 7,489 / 2,511
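As a quick sanity check, the counts above sum to the 51,908 instances mentioned in the summary, and every split keeps the same roughly 75/25 positive/negative balance (counts hard-coded from the list above):

```python
# (positive, negative) sample counts per split, from the dataset card.
splits = {
    "train": (23_897, 8_011),
    "dev": (7_490, 2_510),
    "test": (7_489, 2_511),
}

total = sum(pos + neg for pos, neg in splits.values())
print(total)  # -> 51908

for name, (pos, neg) in splits.items():
    print(f"{name}: {pos / (pos + neg):.1%} positive")
# train: 74.9% positive
# dev: 74.9% positive
# test: 74.9% positive
```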
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of the new research line of parallel corpus filtering. Previous synthetic datasets exist, but to our knowledge this is the first manually curated dataset for parallel sentence alignment.
### Source Data
#### Initial Data Collection and Normalization
We crawled the domains and subdomains of .gencat.cat and obtained comparable documents. Then we used Vecalign to perform sentence alignment.
#### Who are the source language producers?
The data comes from the official Catalan Government websites.
### Annotations
#### Annotation process
Two annotators reviewed the automatically aligned segments provided by Vecalign and labeled each pair as valid or not valid for MT training. Misaligned, truncated, and non-linguistic sentences were labeled as negative.
#### Who are the annotators?
Four native Catalan speakers with a good understanding of the English language.
### Personal and Sensitive Information
Since all data comes from public websites, no anonymization process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of the field of Parallel Corpus Filtering and leads to higher-quality resources for Catalan Machine Translation systems.
### Discussion of Biases
We are aware that since the data comes from public web pages, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by MT4All CEF project and the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/).
### Citation
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```
@inproceedings{degibertbonet-EtAl:2022:SIGUL,
abstract = {In this work, we make the case of quality over quantity when training a MT system for a medium-to-low-resource language pair, namely Catalan-English. We compile our training corpus out of existing resources of varying quality and a new high-quality corpus. We also provide new evaluation translation datasets in three different domains. In the process of building Catalan-English parallel resources, we evaluate the impact of drastically filtering alignments in the resulting MT engines. Our results show that even when resources are limited, as in this case, it is worth filtering for quality. We further explore the cross-lingual transfer learning capabilities of the proposed model for parallel corpus filtering by applying it to other languages. All resources generated in this work are released under open license to encourage the development of language technology in Catalan.},
address = {Marseille, France},
author = {{de Gibert Bonet}, Ona and Kharitonova, Ksenia and {Calvo Figueras}, Blanca and Armengol-Estap{\'{e}}, Jordi and Melero, Maite},
booktitle = {Proceedings of the the 1st Annual Meeting of the ELRA/ISCA Special Interest Group on Under-Resourced Languages},
pages = {59--69},
publisher = {European Language Resources Association},
title = {{Quality versus Quantity: Building Catalan-English MT Resources}},
url = {http://www.lrec-conf.org/proceedings/lrec2022/workshops/SIGUL/pdf/2022.sigul-1.8.pdf},
year = {2022}
}
```
### Contributions
[N/A]
| |
mteb | null | null | null | false | 98 | false | mteb/stackexchange-clustering | 2022-09-27T19:11:56.000Z | null | false | 70a89468f6dccacc6aa2b12a6eac54e74328f235 | [] | [
"language:en"
] | https://huggingface.co/datasets/mteb/stackexchange-clustering/resolve/main/README.md | ---
language:
- en
--- |
mteb | null | null | null | false | 513 | false | mteb/twentynewsgroups-clustering | 2022-09-27T19:13:51.000Z | null | false | 091a54f9a36281ce7d6590ec8c75dd485e7e01d4 | [] | [
"language:en"
] | https://huggingface.co/datasets/mteb/twentynewsgroups-clustering/resolve/main/README.md | ---
language:
- en
--- |
skt | null | null | The dataset contains data for KoBEST dataset | false | 3,765 | false | skt/kobest_v1 | 2022-08-22T09:00:17.000Z | null | false | 46d3e24187694e12e7b4ae59b94c80b86ab774d8 | [] | [
"arxiv:2204.04541",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:ko",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original"
] | https://huggingface.co/datasets/skt/kobest_v1/resolve/main/README.md | ---
pretty_name: KoBEST
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
---
# Dataset Card for KoBEST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/SKT-LSL/KoBEST_datarepo
- **Paper:**
- **Point of Contact:** https://github.com/SKT-LSL/KoBEST_datarepo/issues
### Dataset Summary
KoBEST is a Korean benchmark suite consisting of 5 natural language understanding tasks that require advanced knowledge of Korean.
### Supported Tasks and Leaderboards
Boolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition
### Languages
`ko-KR`
## Dataset Structure
### Data Instances
#### KB-BoolQ
An example of a data point looks as follows.
```
{'paragraph': '두아 리파(Dua Lipa, 1995년 8월 22일 ~ )는 잉글랜드의 싱어송라이터, 모델이다. BBC 사운드 오브 2016 명단에 노미닛되었다. 싱글 "Be the One"가 영국 싱글 차트 9위까지 오르는 등 성과를 보여주었다.',
'question': '두아 리파는 영국인인가?',
'label': 1}
```
#### KB-COPA
An example of a data point looks as follows.
```
{'premise': '물을 오래 끓였다.',
'question': '결과',
'alternative_1': '물의 양이 늘어났다.',
'alternative_2': '물의 양이 줄어들었다.',
'label': 1}
```
#### KB-WiC
An example of a data point looks as follows.
```
{'word': '양분',
'context_1': '토양에 [양분]이 풍부하여 나무가 잘 자란다. ',
'context_2': '태아는 모체로부터 [양분]과 산소를 공급받게 된다.',
'label': 1}
```
#### KB-HellaSwag
An example of a data point looks as follows.
```
{'context': '모자를 쓴 투수가 타자에게 온 힘을 다해 공을 던진다. 공이 타자에게 빠른 속도로 다가온다. 타자가 공을 배트로 친다. 배트에서 깡 소리가 난다. 공이 하늘 위로 날아간다.',
'ending_1': '외야수가 떨어지는 공을 글러브로 잡는다.',
'ending_2': '외야수가 공이 떨어질 위치에 자리를 잡는다.',
'ending_3': '심판이 아웃을 외친다.',
'ending_4': '외야수가 공을 따라 뛰기 시작한다.',
'label': 3}
```
#### KB-SentiNeg
An example of a data point looks as follows.
```
{'sentence': '택배사 정말 마음에 듬',
'label': 1}
```
### Data Fields
### KB-BoolQ
+ `paragraph`: a `string` feature
+ `question`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
### KB-COPA
+ `premise`: a `string` feature
+ `question`: a `string` feature
+ `alternative_1`: a `string` feature
+ `alternative_2`: a `string` feature
+ `label`: an answer candidate label, with possible values `alternative_1`(0) and `alternative_2`(1)
### KB-WiC
+ `target_word`: a `string` feature
+ `context_1`: a `string` feature
+ `context_2`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
### KB-HellaSwag
+ `context`: a `string` feature
+ `ending_1`: a `string` feature
+ `ending_2`: a `string` feature
+ `ending_3`: a `string` feature
+ `ending_4`: a `string` feature
+ `label`: an answer candidate label, with possible values `ending_1`(0), `ending_2`(1), `ending_3`(2) and `ending_4`(3)
### KB-SentiNeg
+ `sentence`: a `string` feature
+ `label`: a classification label, with possible values `Negative`(0) and `Positive`(1)
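The integer labels above can be decoded back to their meanings with a small lookup table; a minimal sketch (the task keys are illustrative and not necessarily the configuration names used on the Hub):

```python
# Label id -> meaning, per the field descriptions and instance examples above.
LABEL_NAMES = {
    "boolq": {0: "False", 1: "True"},
    "copa": {0: "alternative_1", 1: "alternative_2"},
    "wic": {0: "False", 1: "True"},
    "hellaswag": {0: "ending_1", 1: "ending_2", 2: "ending_3", 3: "ending_4"},
    "sentineg": {0: "Negative", 1: "Positive"},
}

def decode_label(task: str, label: int) -> str:
    return LABEL_NAMES[task][label]

# The KB-SentiNeg instance above has label 1:
print(decode_label("sentineg", 1))  # -> Positive
```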
### Data Splits
#### KB-BoolQ
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-COPA
+ train: 3,076
+ dev: 1,000
+ test: 1,000
#### KB-WiC
+ train: 3,318
+ dev: 1,260
+ test: 1,260
#### KB-HellaSwag
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-SentiNeg
+ train: 3,649
+ dev: 400
+ test: 397
+ test_originated: 397 (the corresponding portion of the training data from which the test set originated)
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
```
@misc{https://doi.org/10.48550/arxiv.2204.04541,
doi = {10.48550/ARXIV.2204.04541},
url = {https://arxiv.org/abs/2204.04541},
author = {Kim, Dohyeong and Jang, Myeongjun and Kwon, Deuk Sin and Davis, Eric},
title = {KOBEST: Korean Balanced Evaluation of Significant Tasks},
publisher = {arXiv},
year = {2022},
}
```
[More Information Needed]
### Contributions
Thanks to [@MJ-Jang](https://github.com/MJ-Jang) for adding this dataset. |
kniemiec | null | null | null | false | 1 | false | kniemiec/crack-segm | 2022-04-07T17:11:32.000Z | null | false | b54efd9e872e2df7c82afec86d0ef898dd3b6b72 | [] | [] | https://huggingface.co/datasets/kniemiec/crack-segm/resolve/main/README.md | |
johnnydevriese | null | null | null | false | 2 | false | johnnydevriese/airplanes | 2022-09-16T15:28:53.000Z | null | false | 5171fedc217c7bc893fa08f0e1d353a2cf666423 | [] | [
"multilinguality:monolingual",
"task_categories:image-classification",
"task_ids:multi-label-image-classification"
] | https://huggingface.co/datasets/johnnydevriese/airplanes/resolve/main/README.md | ---
annotations_creators: []
language_creators: []
language: []
license: []
multilinguality:
- monolingual
pretty_name: airplanes
size_categories: []
source_datasets: []
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for airplanes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
three classes of airplanes: drone, UAV, and fighter
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Drone images were taken from:
Wang, Ye, Yueru Chen, Jongmoo Choi, and C-C. Jay Kuo. “Towards Visible and Thermal Drone Monitoring with Convolutional Neural Networks.” APSIPA Transactions on Signal and Information Processing 8 (2019).
[mcl-drone-dataset](https://mcl.usc.edu/mcl-drone-dataset/) |
openclimatefix | null | null | null | false | 1 | false | openclimatefix/swedish-rainfall-radar | 2022-07-23T14:11:57.000Z | null | false | a860423bf48f6e01bb0ff7a28744eb589e0d7ddf | [] | [
"license:mit"
] | https://huggingface.co/datasets/openclimatefix/swedish-rainfall-radar/resolve/main/README.md | ---
license: mit
---
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/ratishsp__ent__1649421332 | 2022-04-08T12:35:35.000Z | null | false | 83551fe521307e2a05274a2150d1d554f898d083 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:ENT",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/ratishsp__ent__1649421332/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: ENT
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: ENT
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/ratishsp__ncp_cc__1649422112 | 2022-04-08T12:48:34.000Z | null | false | 822ca2e2310fc76c47ac7e02c2316a260f63d83d | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:NCP_CC",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/ratishsp__ncp_cc__1649422112/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: NCP_CC
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: NCP_CC
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/ratishsp__ent__1649422569 | 2022-04-08T12:56:11.000Z | null | false | 8e91091fcdcf73d0dca08f4e73cd7b1cbf5c7b51 | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:ENT",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/ratishsp__ent__1649422569/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: ENT
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: ENT
|
GEM-submissions | null | null | null | false | 1 | false | GEM-submissions/ratishsp__ncp_cc__1649422863 | 2022-04-08T13:01:05.000Z | null | false | f6f5797f4852eb1ac0dad141ce7894ed6d71bf8a | [] | [
"benchmark:gem",
"type:prediction",
"submission_name:NCP_CC",
"tags:evaluation",
"tags:benchmark"
] | https://huggingface.co/datasets/GEM-submissions/ratishsp__ncp_cc__1649422863/resolve/main/README.md | ---
benchmark: gem
type: prediction
submission_name: NCP_CC
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: NCP_CC
|
bergoliveira | null | ALBUQUERQUE2022,author="Albuquerque, Hidelberg O. and Costa, Rosimeire and Silvestre, Gabriel and Souza, Ellen and da Silva, N{\'a}dia F. F. and Vit{\'o}rio, Douglas and Moriyama, Gyovana and Martins, Lucas and Soezima, Luiza and Nunes, Augusto and Siqueira, Felipe and Tarrega, Jo{\~a}o P. and Beinotti, Joao V. and Dias, Marcio and Silva, Matheus and Gardini, Miguel and Silva, Vinicius and de Carvalho, Andr{\'e} C. P. L. F. and Oliveira, Adriano L. I.", title="{UlyssesNER-Br}: A Corpus of Brazilian Legislative Documents for Named Entity Recognition", booktitle="Computational Processing of the Portuguese Language", year="2022", pages="3--14",@inproceedings{inPress, PROPOR2022} | PL-corpus is a Portuguese language dataset for named entity recognition applied to legislative documents. Its parte of the UlyssesBR-corpus, and consists entirely of manually annotated public bills texts (projetos de leis) and contains tags for persons, locations, date entities, organizations, legal foundation and bills. | false | 1 | false | bergoliveira/pl-corpus | 2022-10-23T05:38:32.000Z | ulyssesner-br | false | 2e232f92a79c80b5f7dfb36a85bfda58f20d631c | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:pt",
"language_bcp47:pt-BR",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/bergoliveira/pl-corpus/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pt
language_bcp47:
- pt-BR
license:
- unknown
multilinguality:
- monolingual
paperswithcode_id: ulyssesner-br
pretty_name: pl-corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
---
# Dataset Card for pl-corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [UlyssesNER-Br homepage](https://github.com/Convenio-Camara-dos-Deputados/ulyssesner-br-propor)
- **Repository:** [UlyssesNER-Br repository](https://github.com/Convenio-Camara-dos-Deputados/ulyssesner-br-propor)
- **Paper:** [UlyssesNER-Br: A corpus of brazilian legislative documents for named entity recognition. In: Computational Processing of the Portuguese Language](https://link.springer.com/chapter/10.1007/978-3-030-98305-5_1)
- **Point of Contact:** [Hidelberg O. Albuquerque](mailto:hidelberg.albuquerque@ufrpe.br)
### Dataset Summary
PL-corpus is part of UlyssesNER-Br, a corpus of Brazilian legislative documents for NER with quality baselines. The corpus consists of 150 public bills from the Brazilian Chamber of Deputies, manually annotated. It contains semantic categories and types.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Brazilian Portuguese.
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
@InProceedings{ALBUQUERQUE2022,
author="Albuquerque, Hidelberg O.
and Costa, Rosimeire
and Silvestre, Gabriel
and Souza, Ellen
and da Silva, N{\'a}dia F. F.
and Vit{\'o}rio, Douglas
and Moriyama, Gyovana
and Martins, Lucas
and Soezima, Luiza
and Nunes, Augusto
and Siqueira, Felipe
and Tarrega, Jo{\~a}o P.
and Beinotti, Joao V.
and Dias, Marcio
and Silva, Matheus
and Gardini, Miguel
and Silva, Vinicius
and de Carvalho, Andr{\'e} C. P. L. F.
and Oliveira, Adriano L. I.",
title="{UlyssesNER-Br}: A Corpus of Brazilian Legislative Documents for Named Entity Recognition",
booktitle="Computational Processing of the Portuguese Language",
year="2022",
pages="3--14",
} |
lm233 | null | null | null | false | 1 | false | lm233/humor_train | 2022-04-08T18:13:45.000Z | null | false | 67e283fee4cd7cbabbe771d1df88382b043e914c | [] | [] | https://huggingface.co/datasets/lm233/humor_train/resolve/main/README.md | annotations_creators: []
language_creators: []
languages: []
licenses: []
multilinguality: []
pretty_name: humor_train
size_categories: []
source_datasets: []
task_categories: []
task_ids: [] |
McGill-NLP | null | null | TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena. | false | 4 | false | McGill-NLP/TopiOCQA | 2022-10-23T05:39:27.000Z | null | false | cd9c5a04a8337dd20f1e5cb6a3e0614459eda591 | [] | [
"arxiv:2110.00768",
"annotations_creators:crowdsourced",
"language:en",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"task_categories:text-retrieval",
"task_categories:text-generation",
"task_ids:open-domain-cqa",
"task_ids:conversational-question-answeri... | https://huggingface.co/datasets/McGill-NLP/TopiOCQA/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100k
task_categories:
- text-retrieval
- text-generation
- sequence-modeling
task_ids:
- open-domain-cqa
- conversational-question-answering
pretty_name: Open-domain Conversational Question Answering with Topic Switching
---
# Dataset Card for TopiOCQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [TopiOCQA homepage](https://mcgill-nlp.github.io/topiocqa/)
- **Repository:** [TopiOCQA Github](https://github.com/McGill-NLP/topiocqa)
- **Paper:** [Open-domain Conversational Question Answering with Topic Switching](https://arxiv.org/abs/2110.00768)
- **Point of Contact:** [Vaibhav Adlakha](mailto:vaibhav.adlakha@mila.quebec)
### Dataset Summary
TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena.
### Languages
The language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en.
## Additional Information
### Licensing Information
TopiOCQA is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@inproceedings{adlakha2022topiocqa,
title={Topi{OCQA}: Open-domain Conversational Question Answering with Topic Switching},
author={Adlakha, Vaibhav and Dhuliawala, Shehzaad and Suleman, Kaheer and de Vries, Harm and Reddy, Siva},
journal={Transactions of the Association for Computational Linguistics},
volume = {10},
pages = {468-483},
year = {2022},
month = {04},
issn = {2307-387X},
doi = {10.1162/tacl_a_00471},
url = {https://doi.org/10.1162/tacl\_a\_00471},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00471/2008126/tacl\_a\_00471.pdf},
}
``` |
nateraw | null | @article{DBLP:journals/corr/HaE17,
author = {David Ha and
Douglas Eck},
title = {A Neural Representation of Sketch Drawings},
journal = {CoRR},
volume = {abs/1704.03477},
year = {2017},
url = {http://arxiv.org/abs/1704.03477},
archivePrefix = {arXiv},
eprint = {1704.03477},
timestamp = {Mon, 13 Aug 2018 16:48:30 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/HaE17},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. | false | 1 | false | nateraw/quickdraw | 2022-04-08T19:48:58.000Z | null | false | 545613aee11c3c7fa3748b8ca9cdfd1a92e64292 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/nateraw/quickdraw/resolve/main/README.md | ---
license: cc-by-4.0
---
|
ceyda | null | null | null | false | 56 | false | ceyda/smithsonian_butterflies | 2022-07-13T09:32:27.000Z | null | false | b14fd6edb25ad7646d25599565008cadc013f952 | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:expert-generated",
"license:cc0-1.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:image-classification",
"task_ids:multi-label-image-classification"
] | https://huggingface.co/datasets/ceyda/smithsonian_butterflies/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Smithsonian Butterflies
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-label-image-classification
---
# Dataset Card for [Smithsonian Butterflies]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
### Dataset Summary
High-resolution images from the Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections, crawled from the Smithsonian online collections search.
### Supported Tasks and Leaderboards
Includes metadata such as the scientific name of each butterfly, but there may be missing values. Potentially suitable for classification.
### Languages
English
## Dataset Structure
### Data Instances
An example instance:
```
{'image_url': 'https://ids.si.edu/ids/deliveryService?id=ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'image_alt': 'view Aholibah Underwing digital asset number 1',
'id': 'ark:/65665/m3b3132f6666904de396880d9dc811c5cd',
'name': 'Aholibah Underwing',
'scientific_name': 'Catocala aholibah',
'gender': None,
'taxonomy': 'Animalia, Arthropoda, Hexapoda, Insecta, Lepidoptera, Noctuidae, Catocalinae',
'region': None,
'locality': None,
'date': None,
'usnm_no': 'EO400317-DSP',
'guid': 'http://n2t.net/ark:/65665/39b506292-715f-45a7-8511-b49bb087c7de',
'edan_url': 'edanmdm:nmnheducation_10866595',
'source': 'Smithsonian Education and Outreach collections',
'stage': None,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2000x1328 at 0x7F57D0504DC0>,
'image_hash': '27a5fe92f72f8b116d3b7d65bac84958',
'sim_score': 0.8440760970115662}
```
### Data Fields
`sim_score` indicates the CLIP similarity score for the prompt "pretty butterfly". This is used to filter out non-butterfly images (e.g., specimen ID card images).
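For example, the `sim_score` field can be used to drop likely non-butterfly images. The records and the 0.8 threshold below are illustrative assumptions, not values used by the curators:

```python
# Illustrative records only; field names mirror the dataset, values are made up.
records = [
    {"name": "Aholibah Underwing", "sim_score": 0.844},
    {"name": "specimen ID card", "sim_score": 0.310},
]

# Keep only images that CLIP scores highly for "pretty butterfly".
THRESHOLD = 0.8  # assumption: tune against your own inspection of the data
butterflies = [r["name"] for r in records if r["sim_score"] >= THRESHOLD]
print(butterflies)
```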
### Data Splits
No specific split exists.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
Crawled from "Education and Outreach" & "NMNH - Entomology Dept." collections found online [here](https://collections.si.edu/search/results.htm?q=butterfly&view=list&fq=online_media_type%3A%22Images%22&fq=topic%3A%22Insects%22&fq=data_source%3A%22NMNH+-+Entomology+Dept.%22&media.CC0=true&dsort=title&start=0)
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Doesn't include all butterfly species.
## Additional Information
### Dataset Curators
Smithsonian "Education and Outreach" & "NMNH - Entomology Dept." collections
### Licensing Information
Only results marked: CC0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. |
gdwangh | null | null | null | false | 1 | false | gdwangh/kaggle-nlp-getting-start | 2022-04-09T08:13:03.000Z | null | false | 6b37397565bdbd6ede10e362e6a1be4c62083bb3 | [] | [] | https://huggingface.co/datasets/gdwangh/kaggle-nlp-getting-start/resolve/main/README.md | Dataset Summary
- Natural Language Processing with Disaster Tweets: https://www.kaggle.com/competitions/nlp-getting-started/data
- This particular challenge is perfect for data scientists looking to get started with Natural Language Processing. The competition dataset is not too big, and even if you don’t have much personal computing power, you can do all of the work in our free, no-setup, Jupyter Notebooks environment called Kaggle Notebooks.
Columns
- id - a unique identifier for each tweet
- text - the text of the tweet
- location - the location the tweet was sent from (may be blank)
- keyword - a particular keyword from the tweet (may be blank)
- target - in train.csv only, this denotes whether a tweet is about a real disaster (1) or not (0)
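As a quick sanity check, the columns above can be parsed from a miniature inline sample (the rows below are hypothetical, not taken from the competition data):

```python
import csv
import io

# Hypothetical two-row sample mirroring the train.csv schema described above.
sample = (
    "id,keyword,location,text,target\n"
    '1,wildfire,,"A wildfire is spreading near the city",1\n'
    '2,,,"I love fruits",0\n'
)

rows = list(csv.DictReader(io.StringIO(sample)))
disaster = [r for r in rows if r["target"] == "1"]  # tweets about real disasters
print(len(rows), len(disaster))
```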
|
huggan | null | null | null | false | 1 | false | huggan/chebakia | 2022-05-27T11:53:19.000Z | null | false | 5ce8dc4c178d59d0fcb8f3e580f93fa95ed57901 | [] | [] | https://huggingface.co/datasets/huggan/chebakia/resolve/main/README.md | # Data Summary
This dataset contains images of Moroccan Chebakia (Traditional Ramadan Sweets).
# Data Source
All of the images were web-scraped using a Google Image Search API.
### Contributions
[`Ilyas Moutawwakil`](https://huggingface.co/IlyasMoutawwakil) added this dataset to the hub. |
Guldeniz | null | null | null | false | 2 | false | Guldeniz/flower_dataset | 2022-04-09T20:52:59.000Z | null | false | cf40283692122fe32d2c1d009f5b1a674be473ad | [] | [] | https://huggingface.co/datasets/Guldeniz/flower_dataset/resolve/main/README.md | #flowersdataset #segmentation #VGG
# Dataset Card for Flowers Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Official VGG's README.md](#official-vggs-readmemd)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/flowers/17/index.html
- **Repository:** https://huggingface.co/datasets/Guldeniz/flower_dataset
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
VGG created a 17-category flower dataset with 80 images for each class. The flowers chosen are some common flowers in the UK. The images have large scale, pose and light variations, and there are also classes with large variations of images within the class and close similarity to other classes. The categories can be seen on the dataset homepage. The dataset is randomly split into 3 different training, validation and test sets. A subset of the images has been ground-truth labelled for segmentation.
You can find the split files, as .mat files, via the homepage link above.
### Official VGG's README.md
17 Flower Category Database
----------------------------------------------
This set contains images of flowers belonging to 17 different categories.
The images were acquired by searching the web and taking pictures. There are
80 images for each category.
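Since the images are stored consecutively, 80 per category (a common layout for this dataset — verify against your own copy), the zero-based class of a 1-based image index can be sketched as:

```python
def category_of(image_index: int, per_class: int = 80) -> int:
    """Zero-based category of a 1-based image index, assuming images are
    stored consecutively with `per_class` images per category."""
    return (image_index - 1) // per_class

# images 1-80 -> category 0, 81-160 -> category 1, ..., 1360 -> category 16
print(category_of(1), category_of(80), category_of(81), category_of(1360))
```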
The database was used in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2006)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
The datasplits used in this paper are specified in datasplits.mat
There are 3 separate splits. The results in the paper are averaged over the 3 splits.
Each split has a training file (trn1,trn2,trn3), a validation file (val1, val2, val3)
and a test file (tst1, tst2 or tst3).
Segmentation Ground Truth
------------------------------------------------
The ground truth is given for a subset of the images from 13 different
categories.
More details can be found in:
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
The ground truth file also contains the file imlist.mat, which indicates
which images in the original database have been annotated.
Distance matrices
-----------------------------------------------
We provide two sets of distance matrices:
1. distancematrices17gcfeat06.mat
- Distance matrices using the same features and segmentation as detailed in:
Nilsback, M-E. and Zisserman, A. A Visual Vocabulary for Flower Classification.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(2006)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
2. distancematrices17itfeat08.mat
- Distance matrices using the same features as described in:
Nilsback, M-E. and Zisserman, A. Automated flower classification over a large number of classes.
Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing (2008)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback08.{pdf,ps.gz}.
and the iterative segmentation scheme detailed in
Nilsback, M-E. and Zisserman, A. Delving into the whorl of flower segmentation.
Proceedings of the British Machine Vision Conference (2007)
http://www.robots.ox.ac.uk/~vgg/publications/papers/nilsback06.{pdf,ps.gz}.
huggingnft | null | null | null | false | 1 | false | huggingnft/dooggies | 2022-04-16T17:59:05.000Z | null | false | c8356096c3ce93ad76030b135e33f4ccd099816e | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/dooggies",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/dooggies/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/dooggies
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/dooggies).
Model is available [here](https://huggingface.co/huggingnft/dooggies).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/dooggies")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 1 | false | huggingnft/cryptoadz-by-gremplin | 2022-04-16T17:59:06.000Z | null | false | 5ca85c638c922bdae8dfd4fbdf7d172ecb0c28d1 | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/cryptoadz-by-gremplin",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/cryptoadz-by-gremplin/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/cryptoadz-by-gremplin
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cryptoadz-by-gremplin).
Model is available [here](https://huggingface.co/huggingnft/cryptoadz-by-gremplin).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptoadz-by-gremplin")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 5 | false | huggingnft/cyberkongz | 2022-04-16T17:59:06.000Z | null | false | 81ecc730edb35304a79c59ee811c056bd68775e8 | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/cyberkongz",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/cyberkongz/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/cyberkongz
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cyberkongz).
Model is available [here](https://huggingface.co/huggingnft/cyberkongz).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cyberkongz")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 1 | false | huggingnft/mini-mutants | 2022-04-16T17:59:06.000Z | null | false | fffef77aafbde453e1e78f72adc287fbbac3bc15 | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/mini-mutants",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/mini-mutants/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/mini-mutants
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/mini-mutants).
Model is available [here](https://huggingface.co/huggingnft/mini-mutants).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/mini-mutants")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
author={Aleksey Korshuk}
year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 1 | false | huggingnft/theshiboshis | 2022-04-16T17:59:06.000Z | null | false | f8ff5ec9ffd286d395a88ea1407957bc457df703 | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/theshiboshis",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/theshiboshis/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/theshiboshis
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
A dataset of NFT images for unconditional image generation.
The source NFT collection is available [here](https://opensea.io/collection/theshiboshis).
A model trained on this dataset is available [here](https://huggingface.co/huggingnft/theshiboshis).
Try the demo Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/theshiboshis")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
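
As an illustrative sketch (the record and the `validate_record` helper below are constructed by hand, not loaded from the dataset), a row with the fields listed above can be checked like this:

```python
# Expected field names, as documented above.
EXPECTED_FIELDS = {"image", "id", "token_metadata", "image_original_url"}

def validate_record(record: dict) -> bool:
    """Check that a record exposes the documented fields with plausible types."""
    return (
        EXPECTED_FIELDS <= record.keys()
        and isinstance(record["id"], int)
        and isinstance(record["token_metadata"], str)
        and isinstance(record["image_original_url"], str)
    )

# Hypothetical record standing in for an actual dataset row.
sample = {
    "image": object(),  # decoded image object in the real dataset
    "id": 0,
    "token_metadata": "{}",
    "image_original_url": "https://example.com/0.png",
}
print(validate_record(sample))  # True
```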
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 5 | false | huggingnft/cryptopunks | 2022-04-16T17:59:07.000Z | null | false | 9c963cdf5cd5df0924c0cd0fcd0d44acae67a15a | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/cryptopunks",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/cryptopunks/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/cryptopunks
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
A dataset of NFT images for unconditional image generation.
The source NFT collection is available [here](https://opensea.io/collection/cryptopunks).
A model trained on this dataset is available [here](https://huggingface.co/huggingnft/cryptopunks).
Try the demo Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptopunks")
```
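
Since these images are usually fed to a GAN, pixel values are often rescaled from `[0, 255]` to `[-1, 1]` before training. A minimal sketch of that rescaling (pure Python here; in practice you would apply it to the decoded `image` arrays, and the `rescale_pixels` name is ours, not part of any library):

```python
def rescale_pixels(pixels):
    """Map 8-bit pixel values in [0, 255] to floats in [-1.0, 1.0]."""
    return [p / 127.5 - 1.0 for p in pixels]

print(rescale_pixels([0, 255]))  # [-1.0, 1.0]
```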
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 1 | false | huggingnft/nftrex | 2022-04-16T17:59:07.000Z | null | false | a7b35e95225cdeca125e0ba77f29ccebedc3d48d | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/nftrex",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/nftrex/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/nftrex
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
A dataset of NFT images for unconditional image generation.
The source NFT collection is available [here](https://opensea.io/collection/nftrex).
A model trained on this dataset is available [here](https://huggingface.co/huggingnft/nftrex).
Try the demo Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/nftrex")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
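
`token_metadata` is stored as a string; in NFT dumps it is commonly JSON-encoded, so a record's traits can often be recovered with the standard library. The payload and helper below are made-up examples, not taken from this dataset:

```python
import json

def parse_token_metadata(raw: str) -> dict:
    """Decode a JSON-encoded token_metadata string, tolerating empty values."""
    return json.loads(raw) if raw else {}

raw = '{"name": "Rex #1", "attributes": [{"trait_type": "Color", "value": "Green"}]}'
meta = parse_token_metadata(raw)
print(meta["name"])  # Rex #1
```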
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 1 | false | huggingnft/etherbears | 2022-04-16T17:59:07.000Z | null | false | 4ab22a713cd38dc0275a53b7b945975ce63fead8 | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/etherbears",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/etherbears/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/etherbears
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
A dataset of NFT images for unconditional image generation.
The source NFT collection is available [here](https://opensea.io/collection/etherbears).
A model trained on this dataset is available [here](https://huggingface.co/huggingnft/etherbears).
Try the demo Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/etherbears")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
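
Since the splits are not yet documented, you may want to hold out a validation set yourself when training. A standard-library sketch (the 10% ratio and the `split_indices` helper are example choices, not part of the dataset):

```python
import random

def split_indices(n, val_fraction=0.1, seed=0):
    """Shuffle indices 0..n-1 and split off a validation set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_val = int(n * val_fraction)
    return idx[n_val:], idx[:n_val]  # (train indices, validation indices)

train_idx, val_idx = split_indices(100)
print(len(train_idx), len(val_idx))  # 90 10
```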
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
huggingnft | null | null | null | false | 1 | false | huggingnft/alpacadabraz | 2022-04-16T17:59:07.000Z | null | false | 3ca436a670f55f3fb909dacf588c575885b8aaa2 | [] | [
"tags:huggingnft",
"tags:nft",
"tags:huggan",
"tags:gan",
"tags:image",
"tags:images",
"task:unconditional-image-generation",
"datasets:huggingnft/alpacadabraz",
"license:mit"
] | https://huggingface.co/datasets/huggingnft/alpacadabraz/resolve/main/README.md | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/alpacadabraz
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
A dataset of NFT images for unconditional image generation.
The source NFT collection is available [here](https://opensea.io/collection/alpacadabraz).
A model trained on this dataset is available [here](https://huggingface.co/huggingnft/alpacadabraz).
Try the demo Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/alpacadabraz")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
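
If you need to trace a generated sample back to its source asset, an id-to-URL lookup can be built from the fields above. The records here are hypothetical stand-ins for actual dataset rows:

```python
def build_url_index(records):
    """Map each record's id to its original image URL."""
    return {r["id"]: r["image_original_url"] for r in records}

records = [
    {"id": 0, "image_original_url": "https://example.com/0.png"},
    {"id": 1, "image_original_url": "https://example.com/1.png"},
]
index = build_url_index(records)
print(index[1])  # https://example.com/1.png
```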
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|