# Dataset Card for SNAP
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting hashtags using automatically created training data](http://www.lrec-conf.org/proceedings/lrec2016/pdf/708_Paper.pdf)
### Dataset Summary
803K hashtags from the SNAP Twitter Data Set, automatically segmented with the heuristic described in the paper "Segmenting hashtags using automatically created training data".
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "BrandThunder",
"segmentation": "Brand Thunder"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations, or correcting characters to uppercase goes into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
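As an illustration, a minimal sketch (an assumption, not part of the dataset tooling) that checks the whitespace-only convention between `hashtag` and `segmentation`:
```python
def only_whitespace_differs(hashtag: str, segmentation: str) -> bool:
    # The segmentation must reduce to the hashtag once all whitespace is removed.
    return hashtag == "".join(segmentation.split())

assert only_whitespace_differs("BrandThunder", "Brand Thunder")
```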
## Additional Information
### Citation Information
```
@inproceedings{celebi2016segmenting,
title={Segmenting hashtags using automatically created training data},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
pages={2981--2985},
year={2016}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.
# Dataset Card for ekar_chinese
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ekar-leaderboard.github.io
- **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311)
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview
- **Point of Contact:** jjchen19@fudan.edu.cn
### Dataset Summary
***New!*** (9/18/2022) E-KAR `v1.1` is officially released (on the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improved the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find the previous version (as in the paper) on the `v1.0` branch of the repo. For more information, please refer to https://ekar-leaderboard.github.io.
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underlying process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area.
### Supported Tasks and Leaderboards
- `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA.
- `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning.
This dataset supports two task modes: EASY mode and HARD mode:
- `EASY mode`: where query explanation can be used as part of the input.
- `HARD mode`: no explanation is allowed as part of the input.
### Languages
This dataset is in Chinese; an [English version](https://huggingface.co/datasets/Jiangjie/ekar_english) is also available.
## Dataset Structure
### Data Instances
```json
{
  "id": "982f17-en",
  "question": "plant:coal",
  "choices": {
    "label": ["A", "B", "C", "D"],
    "text": [
      "white wine:aged vinegar",
      "starch:corn",
      "milk:yogurt",
      "pickled cabbage:cabbage"
    ]
  },
  "answerKey": "C",
  "explanation": [
    "\"plant\" is the raw material of \"coal\".",
    "both \"white wine\" and \"aged vinegar\" are brewed.",
    "\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.",
    "\"yogurt\" is made from \"milk\".",
    "\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query."
  ],
  "relation": [
    [["plant", "coal", "R3.7"]],
    [["white wine", "aged vinegar", "R2.4"]],
    [["corn", "starch", "R3.7"]],
    [["milk", "yogurt", "R3.7"]],
    [["cabbage", "pickled cabbage", "R3.7"]]
  ]
}
```
### Data Fields
- `id`: a string identifier for each example.
- `question`: the query terms.
- `choices`: candidate answer terms.
- `answerKey`: the correct answer.
- `explanation`: explanations for the query (1st) and candidate answers (2nd-5th).
- `relation`: annotated relations for terms in the query (1st) and candidate answers (2nd-5th).
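As an illustration of the EASY and HARD task modes described above, a minimal sketch (an assumption, not code from the paper) that builds model inputs from one example:
```python
def build_input(example: dict, easy_mode: bool = True) -> str:
    lines = [f"Query: {example['question']}"]
    if easy_mode:
        # EASY mode may use the query explanation (the first entry) as input.
        lines.append(f"Explanation: {example['explanation'][0]}")
    for label, text in zip(example["choices"]["label"], example["choices"]["text"]):
        lines.append(f"{label}. {text}")
    return "\n".join(lines)
```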
### Data Splits
| name |train|validation|test|
|:-----:|:---:|:--------:|:--:|
|default| 1155 | 165 | 335 |
|description| | | blinded |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons.
### Discussion of Biases
This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, it may contain information biased to Chinese culture.
### Other Known Limitations
1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning.
2. E-KAR only presents one feasible explanation for each problem, whereas there may be several.
## Additional Information
### Dataset Curators
The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University.
### Licensing Information
[Needs More Information]
### Citation Information
```latex
@inproceedings{chen-etal-2022-e,
title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning",
author = "Chen, Jiangjie and
Xu, Rui and
Fu, Ziquan and
Shi, Wei and
Li, Zhongqiao and
Zhang, Xinbo and
Sun, Changzhi and
Li, Lei and
Xiao, Yanghua and
Zhou, Hao",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.311",
pages = "3941--3955",
}
```
# Fashion-Mnist-C (Corrupted Fashion-Mnist)
A corrupted Fashion-MNIST benchmark for testing out-of-distribution robustness of computer vision models that were trained on Fashion-Mnist.
[Fashion-Mnist](https://github.com/zalandoresearch/fashion-mnist) is a drop-in replacement for MNIST and Fashion-Mnist-C is a corresponding drop-in replacement for [MNIST-C](https://arxiv.org/abs/1906.02337).
## Corruptions
The following corruptions are applied to the images, equivalently to MNIST-C:
- **Noise** (shot noise and impulse noise)
- **Blur** (glass and motion blur)
- **Transformations** (shear, scale, rotate, brightness, contrast, saturate, inverse)
In addition, we apply various **image flips and turns**: for fashion images, flipping an image does not change its label and still yields a valid image. However, we noticed that in the nominal fmnist dataset most images are identically oriented (e.g. most shoes point to the left side). Thus, flipped images provide valid OOD inputs.
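As an illustration, a minimal sketch of such flip/turn corruptions on a 28×28 grayscale array (an assumed implementation, not the exact one used):
```python
import numpy as np

def flip_or_turn(image: np.ndarray, mode: int) -> np.ndarray:
    # mode 0: horizontal flip, mode 1: vertical flip, mode >= 2: 90-degree turns.
    if mode == 0:
        return np.fliplr(image)
    if mode == 1:
        return np.flipud(image)
    return np.rot90(image, k=mode - 1)
```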
Most corruptions are applied at a randomly selected level of *severity*, such that some corrupted images are very hard to classify, whereas for others the corruption, while present, is subtle.
## Examples
| Turned | Blurred | Rotated | Noise | Noise | Turned |
| ------------- | ------------- | --------| --------- | -------- | --------- |
| <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_0.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_1.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_6.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_3.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_4.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_5.png" width="100" height="100"> |
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Weiss2022SimpleTechniques,
title={Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning},
author={Weiss, Michael and Tonella, Paolo},
booktitle={Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis},
year={2022}
}
```
Also, you may want to cite FMNIST and MNIST-C.
## Credits
- Fashion-Mnist-C is inspired by Google's MNIST-C and our repository is essentially a clone of theirs. See their [paper](https://arxiv.org/abs/1906.02337) and [repo](https://github.com/google-research/mnist-c).
- Find the nominal (i.e., non-corrupted) Fashion-MNIST dataset [here](https://github.com/zalandoresearch/fashion-mnist).
# Dataset Card for Team-PIXEL/rendered-bookcorpus
## Dataset Description
- **Homepage:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Repository:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel)
- **Papers:** [Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
](https://arxiv.org/abs/1506.06724), [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991)
- **Point of Contact:** [Phillip Rust](mailto:p.rust@di.ku.dk)
- **Size of downloaded dataset files:** 63.58 GB
- **Size of the generated dataset:** 63.59 GB
- **Total amount of disk used:** 127.17 GB
### Dataset Summary
This dataset is a version of the BookCorpus available at [https://huggingface.co/datasets/bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) with examples rendered as images with resolution 16x8464 pixels.
The original BookCorpus was introduced by Zhu et al. (2015) in [Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books](https://arxiv.org/abs/1506.06724) and contains 17868 books of various genres. The rendered BookCorpus was used to train the [PIXEL](https://huggingface.co/Team-PIXEL/pixel-base) model introduced in the paper [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) by Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott.
The BookCorpusOpen dataset was rendered book-by-book into 5.4M examples containing approximately 1.1B words in total. The dataset is stored as a collection of 162 parquet files. It was rendered using the script openly available at [https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py](https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py). The text renderer uses a PyGame backend and a collection of merged Google Noto Sans fonts. The PyGame backend does not support complex text layouts (e.g. ligatures and right-to-left scripts) or emoji, so occurrences of such text in the BookCorpus have not been rendered accurately.
Each example consists of a "pixel_values" field which stores a 16x8464 (height, width) grayscale image containing the rendered text, and an integer value "num_patches" which stores how many image patches (when splitting the image into 529 non-overlapping patches of resolution 16x16 pixels) in the associated image contain actual text, i.e. are neither blank (fully white) nor the fully black end-of-sequence patch.
The rendered BookCorpus can be loaded via the datasets library as follows:
```python
from datasets import load_dataset
# Download the full dataset to disk
load_dataset("Team-PIXEL/rendered-bookcorpus", split="train")
# Stream the dataset directly from the hub
load_dataset("Team-PIXEL/rendered-bookcorpus", split="train", streaming=True)
```
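The `num_patches` definition can be sanity-checked with a short sketch (an approximation we assume; anti-aliasing and the exact patch rule may cause small differences):
```python
import numpy as np
from datasets import load_dataset

example = next(iter(load_dataset("Team-PIXEL/rendered-bookcorpus", split="train", streaming=True)))
img = np.array(example["pixel_values"])  # shape (16, 8464), grayscale

# Split into 529 non-overlapping 16x16 patches along the width and count those
# that are neither fully white (blank) nor fully black (end-of-sequence).
patches = img.reshape(16, 529, 16).transpose(1, 0, 2)
num_text_patches = sum(1 for p in patches if p.min() < 255 and p.max() > 0)
print(num_text_patches, example["num_patches"])
```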
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 63.58 GB
- **Size of the generated dataset:** 63.59 GB
- **Total amount of disk used:** 127.17 GB
An example of 'train' looks as follows.
```
{
"pixel_values": <PIL.PngImagePlugin.PngImageFile image mode=L size=8464x16
"num_patches": "498"
}
```
### Data Fields
The data fields are the same among all splits.
- `pixel_values`: an `Image` feature.
- `num_patches`: a `Value(dtype="int64")` feature.
### Data Splits
|train|
|:----|
|5400000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The books have been crawled from smashwords.com, see their [terms of service](https://www.smashwords.com/about/tos) for more information.
A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241).
### Citation Information
```bibtex
@InProceedings{Zhu_2015_ICCV,
title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}
```
```bibtex
@article{rust-etal-2022-pixel,
title={Language Modelling with Pixels},
author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
journal={arXiv preprint},
year={2022},
url={https://arxiv.org/abs/2207.06991}
}
```
### Contact Person
This dataset was added by Phillip Rust.
Github: [@xplip](https://github.com/xplip)
Twitter: [@rust_phillip](https://twitter.com/rust_phillip)
# Dataset Card for "UnpredicTable-en-wikipedia-org" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. Our dataset is very wide: we have thousands of tasks, each with only a few examples, whereas most current NLP datasets are very deep, with tens of tasks and many examples per task. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
'task': task identifier
'input': column elements of a specific row in the table.
'options': for multiple choice classification, it provides the options to choose from.
'output': target column element of the same row as input.
'pageTitle': the title of the page containing the table.
'outputColName': output column name
'url': url to the website containing the table
'wdcFile': WDC Web Table Corpus file
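As an illustration, a minimal sketch (an assumption, not code from the paper) that concatenates examples of one task into a few-shot prompt using the fields above:
```python
from datasets import load_dataset

ds = load_dataset("MicPie/unpredictable_en-wikipedia-org", split="train")
task_name = ds[0]["task"]
examples = [ex for ex in ds if ex["task"] == task_name][:4]
prompt = "\n\n".join(f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples)
print(prompt)
```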
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. The detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
# German Backtranslated Paraphrase Dataset
This is a dataset of more than 21 million German paraphrases.
These are text pairs that have the same meaning but are expressed with different words.
The source of the paraphrases are different parallel German / English text corpora.
The English texts were machine translated back into German to obtain the paraphrases.
This dataset can be used for example to train semantic text embeddings.
To do this, for example, [SentenceTransformers](https://www.sbert.net/)
and the [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss)
can be used.
## Maintainers
[](https://www.welove.ai/)
This dataset is open sourced by [Philip May](https://may.la/)
and maintained by the [One Conversation](https://www.welove.ai/)
team of [Deutsche Telekom AG](https://www.telekom.com/).
## Our pre-processing
Apart from the back translation, we have added more columns (for details see below). We have carried out the following pre-processing and filtering:
- We dropped text pairs where one text was longer than 499 characters.
- In the [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) texts we have removed the `" · Global Voices"` suffix.
## Your post-processing
You probably don't want to use the dataset as it is, but filter it further.
This is what the additional columns of the dataset are for.
For us it has proven useful to delete pairs of sentences that match any of the following criteria (see the sketch after the list):
- `min_char_len` less than 15
- `jaccard_similarity` greater than 0.3
- `de_token_count` greater than 30
- `en_de_token_count` greater than 30
- `cos_sim` less than 0.85
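As an illustration, a minimal pandas sketch applying these thresholds, assuming the dataset has been loaded into a DataFrame `df` (see the loading section below):
```python
df = df[
    (df["min_char_len"] >= 15)
    & (df["jaccard_similarity"] <= 0.3)
    & (df["de_token_count"] <= 30)
    & (df["en_de_token_count"] <= 30)
    & (df["cos_sim"] >= 0.85)
]
```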
## Columns description
- **`uuid`**: a uuid calculated with Python `uuid.uuid4()`
- **`en`**: the original English texts from the corpus
- **`de`**: the original German texts from the corpus
- **`en_de`**: the German texts translated back from English (from `en`)
- **`corpus`**: the name of the corpus
- **`min_char_len`**: the number of characters of the shortest text
- **`jaccard_similarity`**: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
- **`de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`en_de_token_count`**: number of tokens of the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
## Anomalies in the texts
It is noticeable that the [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) texts have weird dash prefixes. They look like this:
```
- Hast du was draufgetan?
```
To remove them you could apply this function:
```python
import re

def clean_text(text):
    # Strip leading and trailing dashes and whitespace.
    text = re.sub(r"^[-\s]*", "", text)
    text = re.sub(r"[-\s]*$", "", text)
    return text

df["de"] = df["de"].apply(clean_text)
df["en_de"] = df["en_de"].apply(clean_text)
```
## Parallel text corpora used
| Corpus name & link | Number of paraphrases |
|-----------------------------------------------------------------------|----------------------:|
| [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) | 18,764,810 |
| [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix-v1.php) | 1,569,231 |
| [Tatoeba v2022-03-03](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php) | 313,105 |
| [TED2020 v1](https://opus.nlpl.eu/TED2020-v1.php) | 289,374 |
| [News-Commentary v16](https://opus.nlpl.eu/News-Commentary-v16.php) | 285,722 |
| [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) | 70,547 |
| **sum** | **21,292,789** |
## Back translation
We have made the back translation from English to German with the help of [Fairseq](https://github.com/facebookresearch/fairseq).
We used the `transformer.wmt19.en-de` model for this purpose:
```python
import torch

en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de",
    checkpoint_file="model1.pt:model2.pt:model3.pt:model4.pt",
    tokenizer="moses",
    bpe="fastbpe",
)
```
## How the Jaccard similarity was calculated
To calculate the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index)
we are using the [SoMaJo tokenizer](https://github.com/tsproisl/SoMaJo)
to split the texts into tokens.
We then `lower()` the tokens so that upper and lower case letters no longer make a difference. Below you can find a code snippet with the details:
```python
from somajo import SoMaJo

LANGUAGE = "de_CMC"
somajo_tokenizer = SoMaJo(LANGUAGE)

def get_token_set(text, somajo_tokenizer):
    sentences = somajo_tokenizer.tokenize_text([text])
    tokens = [t.text.lower() for sentence in sentences for t in sentence]
    token_set = set(tokens)
    return token_set

def jaccard_similarity(text1, text2, somajo_tokenizer):
    token_set1 = get_token_set(text1, somajo_tokenizer=somajo_tokenizer)
    token_set2 = get_token_set(text2, somajo_tokenizer=somajo_tokenizer)
    intersection = token_set1.intersection(token_set2)
    union = token_set1.union(token_set2)
    jaccard_similarity = float(len(intersection)) / len(union)
    return jaccard_similarity
```
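For example, using the functions defined above (the expected value is an assumption based on SoMaJo's standard tokenization):
```python
score = jaccard_similarity("Das ist ein Test.", "Das ist auch ein Test.", somajo_tokenizer)
print(score)  # 5 shared tokens out of 6 in the union -> ~0.83
```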
## Load this dataset
### With Hugging Face Datasets
```python
# pip install datasets
from datasets import load_dataset
dataset = load_dataset("deutsche-telekom/ger-backtrans-paraphrase")
train_dataset = dataset["train"]
```
### With Pandas
If you want to download the csv file and then load it with Pandas you can do it like this:
```python
import pandas as pd

df = pd.read_csv("train.csv")
```
## Citations, Acknowledgements and Licenses
**OpenSubtitles**
- citation: P. Lison and J. Tiedemann, 2016, [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf). In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
- also see http://www.opensubtitles.org/
- license: no special license has been provided at OPUS for this dataset
**WikiMatrix v1**
- citation: Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://arxiv.org/abs/1907.05791), arXiv, July 11 2019
- license: [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
**Tatoeba v2022-03-03**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: [CC BY 2.0 FR](https://creativecommons.org/licenses/by/2.0/fr/)
- copyright: https://tatoeba.org/eng/terms_of_use
**TED2020 v1**
- citation: Reimers, Nils and Gurevych, Iryna, [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, November 2020
- acknowledgements to [OPUS](https://opus.nlpl.eu/) for this service
- license: please respect the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)
**News-Commentary v16**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset
**GlobalVoices v2018q4**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset
## Citation
```latex
@misc{ger-backtrans-paraphrase,
title={Deutsche-Telekom/ger-backtrans-paraphrase - dataset at Hugging Face},
url={https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase},
year={2022},
author={May, Philip}
}
```
## Licensing
Copyright (c) 2022 [Philip May](https://may.la/),
[Deutsche Telekom AG](https://www.telekom.com/)
This work is licensed under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
# Dataset Card for SK-QuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
SK-QuAD is the first question-answering dataset for the Slovak language. It is manually annotated, so it has no distortion caused by machine translation. The dataset is thematically diverse and does not overlap with SQuAD, so it brings new knowledge. It passed a second round of annotation: each question and answer was seen by at least two annotators.
### Supported Tasks and Leaderboards
- Question answering
- Document retrieval
### Languages
- Slovak
## Dataset Structure
#### squad_v2
- **Size of downloaded dataset files:** 44.34 MB
- **Size of the generated dataset:** 122.57 MB
- **Total amount of disk used:** 166.91 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [94, 87, 94, 94],
"text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"]
},
"context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...",
"id": "56ddde6b9a695914005b9629",
"question": "When were the Normans in Normandy?",
"title": "Normans"
}
```
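Since the data follows the SQuAD v2 format, unanswerable questions can be identified by an empty answer list; a minimal sketch (an assumption based on the format, not official tooling):
```python
def is_unanswerable(example: dict) -> bool:
    # In the SQuAD v2 format, unanswerable questions carry no answer texts.
    return len(example["answers"]["text"]) == 0
```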
### Data Fields
The data fields are the same among all splits.
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| | Train | Dev | Translated |
| ------------- | -----: | -----: | -------: |
| Documents | 8,377 | 940 | 442 |
| Paragraphs | 22,062 | 2,568 | 18,931 |
| Questions | 81,582 | 9,583 | 120,239 |
| Answers | 65,839 | 7,822 | 79,978 |
| Unanswerable | 15,877 | 1,784 | 40,261 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Deutsche Telekom Systems Solutions Slovakia
- Technical University of Košice
### Licensing Information
Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
# Peanuts Comic Strip Dataset (Snoopy & Co.)

This is a dataset of Peanuts comic strips from `1950/10/02` to `2000/02/13`.
There are `77,457` panels extracted from `17,816` comic strips.
The dataset size is approximately `4.4 GB`.
Each row in the dataset contains the following fields:
- `image`: `PIL.Image` containing the extracted panel.
- `panel_name`: unique identifier for the row.
- `characters`: `tuple[str, ...]` of characters included in the comic strip the panel is part of.
- `themes`: `tuple[str, ...]` of themes in the comic strip the panel is part of.
- `color`: `str` indicating whether the panel is grayscale or in color.
- `caption`: [BLIP-2_OPT_6.7B](https://huggingface.co/docs/transformers/main/model_doc/blip-2) generated caption from the panel.
- `year`: `int` storing the year the specific panel was released.
> **OPT-6.7B has a non-commercial use license and so this dataset cannot be used for commercial projects. If you need a dataset for commercial use please see [this similar dataset](https://huggingface.co/datasets/afmck/peanuts-flan-t5-xl) that uses Flan-T5-XL, which allows for commercial use.**
Character and theme information was extracted from [Peanuts Wiki (Fandom)](https://peanuts.fandom.com/wiki/Peanuts_Wiki) using [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
Images were extracted from [Peanuts Search](https://peanuts-search.com/).
Only strips with the following characters were extracted:
```
- "Charlie Brown"
- "Sally Brown"
- "Joe Cool" # Snoopy alter-ego
- "Franklin"
- "Violet Gray"
- "Eudora"
- "Frieda"
- "Marcie"
- "Peppermint Patty"
- "Patty"
- "Pig-Pen"
- "Linus van Pelt"
- "Lucy van Pelt"
- "Rerun van Pelt"
- "Schroeder"
- "Snoopy"
- "Shermy"
- "Spike"
- "Woodstock"
- "the World War I Flying Ace" # Snoopy alter-ego
```
### Extraction Details
Panel detection and extraction was done using the following codeblock:
```python
import cv2

def check_contour(cnt):
    # Reject small panels and extreme aspect ratios.
    area = cv2.contourArea(cnt)
    if area < 600:
        return False
    _, _, w, h = cv2.boundingRect(cnt)
    if w / h < 1 / 2:
        return False
    if w / h > 2 / 1:
        return False
    return True

def get_panels_from_image(path):
    panels = []
    original_img = cv2.imread(path)
    gray = cv2.cvtColor(original_img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1)
    invert = 255 - opening
    cnts, _ = cv2.findContours(invert, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in cnts:
        if not check_contour(cnt):
            continue
        x, y, w, h = cv2.boundingRect(cnt)
        roi = original_img[y:y + h, x:x + w]
        panels.append(roi)
    return panels
```
`check_contour` will reject panels with `area < 600` or with aspect ratios larger than `2` or smaller than `0.5`.
Grayscale detection was done using the following codeblock:
```python
import cv2
import numpy as np

def is_grayscale(panel):
    # A panel is considered grayscale if the a/b chroma channels are nearly equal.
    LAB_THRESHOLD = 10.0
    img = cv2.cvtColor(panel, cv2.COLOR_RGB2LAB)
    _, ea, eb = cv2.split(img)
    de = cv2.absdiff(ea, eb)  # absdiff avoids uint8 wrap-around
    mean_e = np.mean(de)
    return mean_e < LAB_THRESHOLD
```
Captioning was done using the standard BLIP-2 pipeline shown in the [Hugging Face docs](https://huggingface.co/docs/transformers/main/model_doc/blip-2), using beam search with 10 beams and a repetition penalty of `2.0`.
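As an illustration, a minimal captioning sketch following that pipeline (`panel_image` is assumed to be a `PIL.Image`; this is a sketch, not the exact script used):
```python
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-6.7b", torch_dtype=torch.float16, device_map="auto"
)

inputs = processor(images=panel_image, return_tensors="pt").to(model.device, torch.float16)
generated_ids = model.generate(**inputs, num_beams=10, repetition_penalty=2.0)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
```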
Raw captions are extracted and no postprocessing is applied. You may wish to normalise captions (such as replacing "cartoon" with "peanuts cartoon") or incorporate extra metadata into prompts.
# Dataset Card for Helpful Instructions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Lewis Tunstall
### Dataset Summary
Helpful Instructions is a dataset of `(instruction, completion)` pairs that are derived from public datasets. As the name suggests, it focuses on instructions that are "helpful", i.e. the kind of questions or tasks a human user might instruct an AI assistant to perform. You can load the dataset as follows:
```python
from datasets import load_dataset
# Load all subsets
helpful_instructions = load_dataset("HuggingFaceH4/helpful_instructions", name="all")
# Load a single subset
helpful_instructions_subset = load_dataset("HuggingFaceH4/helpful_instructions", name="self_instruct")
```
### Supported Tasks and Leaderboards
This dataset can be used to fine-tune pretrained language models to follow instructions.
### Changelog
* March 5, 2023: `v1.0.0` release, with subsets from `HuggingFaceH4/self_instruct` (`self_instruct`, `super_natural_instructions`, `prompt_source`)
# German
The [German dataset](https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Dataset on loan grants to customers.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| loan | Binary classification | Has the loan request been accepted? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/german", "loan")["train"]
```
# Features
|**Feature** |**Type** |
|------------------------------------|-----------|
|`checking_account_status` | `int8` |
|`account_life_in_months` | `int8` |
|`credit_status` | `int8` |
|`loan_purpose` | `string` |
|`current_credit` | `int32` |
|`current_savings` | `int8` |
|`employed_since` | `int8` |
|`installment_rate_percentage` | `int8` |
|`sex` | `int8` |
|`marital_status` | `string` |
|`guarantors` | `int8` |
|`years_living_in_current_residence` | `int8` |
|`age` | `int8` |
|`installment_plans` | `string` |
|`housing_status` | `int8` |
|`nr_credit_accounts_in_bank` | `int8` |
|`job_status` | `int8` |
|`number_of_people_in_support` | `int8` |
|`has_registered_phone_number` | `int8` |
|`is_foreign` | `int8` |
## Vegetable Image Dataset
### Background
The initial experiments were conducted with 15 common vegetables found around the world. The vegetables selected for the experiments are: bean, bitter gourd, bottle gourd, brinjal, broccoli, cabbage, capsicum, carrot, cauliflower, cucumber, papaya, potato, pumpkin, radish, and tomato. A total of 21,000 images from 15 classes are used, where each class contains 1,400 images of size 224×224 in *.jpg format. 70% of the dataset is used for training, 15% for validation, and 15% for testing.
### Directory layout
This dataset contains three folders:
- train (15,000 images)
- test (3,000 images)
- validation (3,000 images)
### Data collection
The images in this dataset were collected from vegetable farms and markets for a project.
### Generating the metadata files
Run the Python code below to generate three CSV metadata files and one class-name file on your desktop (the class-name file needs to be placed into the data folder):
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
1. Download the data file Vegetable Images.zip and unzip it to the desktop.
2. Then run python generate.py to generate the three metadata files and the class-name file.
"""
import os
from pathlib import Path

category_dict = {
    'Bean': '豆类',
    'Bitter_Gourd': '苦瓜',
    'Bottle_Gourd': '葫芦',
    'Brinjal': '茄子',
    'Broccoli': '西兰花',
    'Cabbage': '卷心菜',
    'Capsicum': '辣椒',
    'Carrot': '胡萝卜',
    'Cauliflower': '花椰菜',
    'Cucumber': '黄瓜',
    'Papaya': '木瓜',
    'Potato': '土豆',
    'Pumpkin': '南瓜',
    'Radish': '萝卜',
    'Tomato': '番茄',
}

base_path = Path.home().joinpath('desktop')
# Note: relies on dicts preserving insertion order (Python 3.6+).
data = '\n'.join(item for item in category_dict.values())
base_path.joinpath('classname.txt').write_text(data, encoding='utf-8')

def create(filename):
    csv_path = base_path.joinpath(f'{filename}.csv')
    with csv_path.open('wt', encoding='utf-8', newline='') as csv:
        csv.writelines([f'image,category{os.linesep}'])
        data_path = base_path.joinpath('Vegetable Images', filename)
        batch = 0
        datas = []
        keys = list(category_dict.keys())
        for image_path in data_path.rglob('*.jpg'):
            batch += 1
            part1 = str(image_path).removeprefix(str(base_path)).replace('\\', '/')[1:]
            part2 = keys.index(image_path.parents[0].name)
            datas.append(f'{part1},{part2}{os.linesep}')
            if batch > 100:
                csv.writelines(datas)
                datas.clear()
        if datas:
            csv.writelines(datas)
    return csv_path.stat().st_size

if __name__ == '__main__':
    print(create('train'))
    print(create('test'))
    print(create('validation'))
```
### Acknowledgements
Many thanks to the original dataset provider, [Vegetable Image Dataset](https://www.kaggle.com/datasets/misrakahmed/vegetable-image-dataset).
### Cloning the data
```bash
git clone https://huggingface.co/datasets/cc92yy3344/vegetable.git
```
# Isolet
The [Isolet dataset](https://archive-beta.ics.uci.edu/dataset/54/isolet) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|--------------------------|
| isolet | Multiclass classification | What letter was uttered? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/isolet", "isolet")["train"]
```
# Fertility
The [Fertility dataset](https://archive.ics.uci.edu/ml/datasets/Fertility) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Classify fertility abnormalities of patients.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------------|
| encoding | | Encoding dictionary |
| fertility | Binary classification | Does the patient have fertility issues? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/fertility", "fertility")["train"]
```
# Features
|**Feature** |**Type** |
|----------------------------------------|-----------|
| season_of_sampling | `string` |
| age_at_time_of_sampling | `int8` |
| has_had_childhood_diseases | `bool` |
| has_had_serious_trauma | `bool` |
| has_had_surgical_interventions | `bool` |
| has_had_high_fevers_in_the_past_year | `string` |
| frequency_of_alcohol_consumption | `float16` |
| smoking_frequency | `string` |
| number_of_sitting_hours_per_day | `float16` |
# Dexter
The [Dexter dataset](https://archive-beta.ics.uci.edu/dataset/168/dexter) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task**               |
|-------------------|------------------------|
| dexter            | Binary classification  |
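# Usage
A minimal loading sketch, following the pattern of the sibling mstz dataset cards (the Hub id `mstz/dexter` is an assumption):
```python
from datasets import load_dataset

dataset = load_dataset("mstz/dexter", "dexter")["train"]
```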
# Sydt
Synthetic dataset.
# Dataset Card for Hindi Text Short and Large Summarization Corpus
## Dataset Description
- Homepage: https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus?select=test.csv
### Dataset Summary
Hindi Text Short and Large Summarization Corpus is a collection of ~180k articles with their headlines and summaries, collected from Hindi news websites.
This is a first-of-its-kind dataset in Hindi that can be used to benchmark models for text summarization in Hindi. It does not contain the articles included in the Hindi Text Short Summarization Corpus, which is being released in parallel with this dataset.
The dataset retains the original punctuation, numbers, etc. in the articles.
### Languages
The language is Hindi.
### Licensing Information
MIT
### Citation Information
https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus?select=test.csv
### Contributions
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false | ## Dataset Card for Oshindonga-Dialogues
### Dataset Summary
Oshindonga-dialogues is a dialogue corpus for general dialogues in Oshindonga. The goal is to provide access to some of Africa's datasets for conversational modeling. Oshindonga is a Bantu language widely spoken in the northern part of Namibia.
### How to use
With a single call to the `load_dataset` function, the dataset can be downloaded and prepared for use on your local storage.
```python
from datasets import load_dataset
oshindonga_dialogues = load_dataset("meyabase/oshindonga-dialogues")
```
### Dataset Structure
The data is provided in one set of files with Q&A pairs.
```
{
'ID': '42af0df-21f6-4d3d-82a8-acfb1e248f1e',
'question': 'Ingapi?',
'answer': 'Oodola omulongo nandatu'
}
```
### Data Splits
Data is split into `train`, `dev`, and `test` in ratios of 80%, 10%, and 10%. Overall, the data contains 82 `question` and `answer` pairs.
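A specific split can also be requested directly (a sketch; the split names are assumed from the ratios above):
```python
from datasets import load_dataset

# Split name assumed from the 80/10/10 description above.
test_split = load_dataset("meyabase/oshindonga-dialogues", split="test")
print(len(test_split))
```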
### Citation
```
@MISC{oshindonga_dialogues,
author = {Meyabase Platforms},
title = {Oshindonga Dialogues for AI Chatbots},
url = {https://huggingface.co/datasets/meyabase/oshindonga-dialogues},
year = 2023
}
``` |
false |
# Dataset Card for STAN Large
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [mounicam/hashtag_master](https://github.com/mounicam/hashtag_master)
- **Paper:** [Multi-task Pairwise Neural Ranking for Hashtag Segmentation](https://aclanthology.org/P19-1242/)
### Dataset Summary
The description below was taken from the paper "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"
by Maddela et al.
"STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their
associated tweets from the same Stanford dataset.
STAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation
errors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the art
models is only around 10%. Most of the errors were related to named entities. For example, #lionhead,
which refers to the “Lionhead” video game company, was labeled as “lion head”.
We therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations."
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "PokemonPlatinum",
"segmentation": "Pokemon Platinum",
"alternatives": {
"segmentation": [
"Pokemon platinum"
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `alternatives`: other segmentations that are also accepted as a gold segmentation.
Although `segmentation` has exactly the same characters as `hashtag` except for the spaces, the segmentations inside `alternatives` may have characters corrected to uppercase.
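As a concrete illustration of how these fields interact, a minimal evaluation sketch (the Hub id `ruanchaves/stan_large` and the split name are assumptions; the field names come from this card):
```python
from datasets import load_dataset

# Hypothetical Hub id, inferred from the contributor's profile.
dataset = load_dataset("ruanchaves/stan_large", split="train")

def is_gold(example, candidate):
    # A candidate segmentation counts as correct if it matches the gold
    # segmentation or any of the accepted alternatives.
    return candidate in [example["segmentation"]] + example["alternatives"]["segmentation"]

print(is_gold(dataset[0], dataset[0]["segmentation"]))  # True
```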
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{maddela-etal-2019-multi,
title = "Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
author = "Maddela, Mounica and
Xu, Wei and
Preo{\c{t}}iuc-Pietro, Daniel",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1242",
doi = "10.18653/v1/P19-1242",
pages = "2538--2549",
abstract = "Hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability, aiding search, or providing additional semantics. However, the semantic content of hashtags is not straightforward to infer as these represent ad-hoc conventions which frequently include multiple words joined together and can include abbreviations and unorthodox spellings. We build a dataset of 12,594 hashtags split into individual segments and propose a set of approaches for hashtag segmentation by framing it as a pairwise ranking problem between candidate segmentations. Our novel neural approaches demonstrate 24.6{\%} error reduction in hashtag segmentation accuracy compared to the current state-of-the-art method. Finally, we demonstrate that a deeper understanding of hashtag semantics obtained through segmentation is useful for downstream applications such as sentiment analysis, for which we achieved a 2.6{\%} increase in average recall on the SemEval 2017 sentiment analysis dataset.",
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for AIDS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://wiki.nci.nih.gov/display/NCIDTPdata/AIDS+Antiviral+Screen+Data)**
- **Paper:** (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-aids)
### Dataset Summary
The `AIDS` dataset contains compounds checked for evidence of anti-HIV activity.
### Supported Tasks and Leaderboards
`AIDS` should be used for molecular classification, a binary classification task. The score used is accuracy with cross validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/AIDS")
# For the train set (replace by valid or test as needed);
# each row is a dict of fields, unpacked into Data attributes
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1999 |
| average #nodes | 15.5875 |
| average #edges | 32.39 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the graph label to predict (here a single binary label, equal to zero or one)
- `num_nodes` (int): number of nodes of the graph
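A sketch of how a single row maps onto a PyTorch Geometric `Data` object, converting the listed fields to tensors (the field names come from this card; the dtypes are conventional assumptions):
```python
import torch
from torch_geometric.data import Data

def row_to_data(row: dict) -> Data:
    # Convert the documented list fields into tensors with conventional dtypes.
    return Data(
        x=torch.tensor(row["node_feat"], dtype=torch.float),
        edge_index=torch.tensor(row["edge_index"], dtype=torch.long),
        edge_attr=torch.tensor(row["edge_attr"], dtype=torch.float),
        y=torch.tensor(row["y"], dtype=torch.long),
        num_nodes=row["num_nodes"],
    )
```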
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@InProceedings{10.1007/978-3-540-89689-0_33,
author="Riesen, Kaspar
and Bunke, Horst",
editor="da Vitoria Lobo, Niels
and Kasparis, Takis
and Roli, Fabio
and Kwok, James T.
and Georgiopoulos, Michael
and Anagnostopoulos, Georgios C.
and Loog, Marco",
title="IAM Graph Database Repository for Graph Based Pattern Recognition and Machine Learning",
booktitle="Structural, Syntactic, and Statistical Pattern Recognition",
year="2008",
publisher="Springer Berlin Heidelberg",
address="Berlin, Heidelberg",
pages="287--297",
abstract="In recent years the use of graph based representation has gained popularity in pattern recognition and machine learning. As a matter of fact, object representation by means of graphs has a number of advantages over feature vectors. Therefore, various algorithms for graph based machine learning have been proposed in the literature. However, in contrast with the emerging interest in graph based representation, a lack of standardized graph data sets for benchmarking can be observed. Common practice is that researchers use their own data sets, and this behavior cumbers the objective evaluation of the proposed methods. In order to make the different approaches in graph based machine learning better comparable, the present paper aims at introducing a repository of graph data sets and corresponding benchmarks, covering a wide spectrum of different applications.",
isbn="978-3-540-89689-0"
}
``` |
false | # Dataset Card for "cartoon-blip-captions"
|
true | # Dataset Card for "tripadvisor-hotel-reviews"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/andrewmvd/trip-advisor-hotel-reviews
- **Paper:** https://zenodo.org/record/1219899
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Hotels play a crucial role in traveling and with the increased access to information new pathways of selecting the best ones emerged.
With this dataset, consisting of 20k reviews crawled from Tripadvisor, you can explore what makes a great hotel and maybe even use this model in your travels!
Review ratings are on a scale from 1 to 5.
### Languages
English
### Citation Information
If you use this dataset in your research, please credit the authors:
Alam, M. H., Ryu, W.-J., Lee, S., 2016. Joint multi-grain topic sentiment: modeling semantic aspects for online reviews. Information Sciences 339, 206–223.
### Licensing Information
CC BY-NC 4.0
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. |
true |
# Dataset Summary
**hystoclass** (hybrid social text and tabular classification) has been collected from Instagram stories with privacy in mind. In addition to the texts published in the stories, this dataset has graphic features such as background color, text color, and font. It also has a textual feature named `content` in the Persian language.
# Classes
This dataset is divided into **18 classes** by human supervision:
Event, Political, Advertising and business, Romantic, Motivational, Literature, Social Networks, Scientific, Social, IT, Advices, Academic, Cosmetic and Feminine, Religious, Sport, Property and housing, Tourism, and Medical.
[Github](https://github.com/pooyaphoenix/hystoclass)
[Email](mailto:pooyachavoshi@gmail.com)
|
false |
Manually created seed dataset used for bootstrapping in the Self-Instruct paper (https://arxiv.org/abs/2212.10560). This is part of the instruction fine-tuning datasets. |
false |
For more info on data collection and the preprocessing algorithm, please see [Fast Anime PromptGen](https://huggingface.co/FredZhang7/anime-anything-promptgen-v2).
## 80K unique prompts
- `safebooru_clean`: Cleaned prompts with upscore ≥ 8 from the Safebooru API
---
For disclaimers about the Danbooru data, please see [Danbooru Tag Generator](https://huggingface.co/FredZhang7/danbooru-tag-generator).
## 100K unique prompts (each)
- `danbooru_raw`: Raw prompts with upscore ≥ 3 from Danbooru API
- `danbooru_clean`: Cleaned prompts with upscore ≥ 3 from Danbooru API
---
## Python
Download and save the dataset to anime_prompts.csv locally.
```bash
pip install datasets
```
```python
import csv
import datasets
dataset = datasets.load_dataset("FredZhang7/anime-prompts-180K")
train = dataset["train"]
safebooru_clean = train["safebooru_clean"]
danbooru_clean = train["danbooru_clean"]
danbooru_raw = train["danbooru_raw"]
with open("anime_prompts.csv", "w") as f:
writer = csv.writer(f)
writer.writerow(["safebooru_clean", "danbooru_clean", "danbooru_raw"])
for i in range(len(safebooru_clean)):
writer.writerow([safebooru_clean[i], danbooru_clean[i], danbooru_raw[i]])
``` |
false |
# Dataset Card for CSL
## Dataset Description
CSL is the Chinese Scientific Literature Dataset.
- **Paper:** https://aclanthology.org/2022.coling-1.344
- **Repository:** https://github.com/ydli-ai/CSL
### Dataset Summary
The dataset contains titles, abstracts, keywords of papers written in Chinese from several academic fields.
### Languages
- Chinese
## Dataset Structure
### Data Instances
| Split | Documents |
|-----------------|----------:|
| `csl` | 396k |
### Data Fields
- `doc_id`: unique identifier for this document
- `title`: title of the paper
- `abstract`: abstract of the paper
- `keywords`: keywords associated with the paper
- `category`: the broad category of the paper
- `category_eng`: English translation of the broad category (e.g., Engineering)
- `discipline`: academic discipline of the paper
- `discipline_eng`: English translation of the academic discipline (e.g., Agricultural Engineering)
## Dataset Usage
Using 🤗 Datasets:
```python
from datasets import load_dataset
dataset = load_dataset('neuclir/csl')['csl']
```
## License & Citation
This dataset is based on the [Chinese Scientific Literature Dataset](https://github.com/ydli-ai/CSL), released under Apache 2.0.
The primary changes are the addition of `doc_id`s, English translations of the category and discipline descriptions by a native speaker,
and basic de-duplication. Code that performed this modification is available in [this repository](https://github.com/NeuCLIR/csl-preprocess).
If you use this data, please cite:
```
@inproceedings{li-etal-2022-csl,
title = "{CSL}: A Large-scale {C}hinese Scientific Literature Dataset",
author = "Li, Yudong and
Zhang, Yuqing and
Zhao, Zhe and
Shen, Linlin and
Liu, Weijie and
Mao, Weiquan and
Zhang, Hui",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.344",
pages = "3917--3923",
}
```
|
false |
# fleece2instructions-inputs-alpaca-cleaned
This data was downloaded from the [alpaca-lora](https://github.com/tloen/alpaca-lora) repo under the `ODC-BY` license (see [snapshot here](https://web.archive.org/web/20230325034703/https://github.com/tloen/alpaca-lora/blob/main/DATA_LICENSE)) and processed to text2text format. The license under which the data was downloaded from the source applies to this repo.
Note that the `inputs` and `instruction` columns in the original dataset have been aggregated together for text2text generation. Each has a token with either `<instruction>` or `<inputs>` in front of the relevant text, both for model understanding and regex separation later.
## Processing details
- Drop rows with `output` having fewer than 4 words (via `nltk.word_tokenize`)
- This dataset **does** include both the original `instruction`s and the `inputs` columns, aggregated together into `instructions_inputs`
- In the `instructions_inputs` column, the text is delineated via tokens that are either `<instruction>` or `<inputs>` in front of the relevant text, both for model understanding and regex separation later (see the sketch below).
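As a rough illustration of that regex separation, a minimal sketch (the helper below is illustrative, not part of the dataset's tooling; only the `<instruction>`/`<inputs>` tags come from this card):
```python
import re

def split_instruction_inputs(text: str):
    # Recover the two tagged segments from the aggregated column;
    # the <inputs> segment is optional.
    match = re.match(r"<instruction>\s*(.*?)\s*(?:<inputs>\s*(.*))?$", text, re.DOTALL)
    if match is None:
        return text, None
    return match.group(1), match.group(2)

example = "<instruction> Summarize the passage. <inputs> The quick brown fox jumps over the lazy dog."
print(split_instruction_inputs(example))
# ('Summarize the passage.', 'The quick brown fox jumps over the lazy dog.')
```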
## contents
```python
DatasetDict({
train: Dataset({
features: ['instructions_inputs', 'output'],
num_rows: 43537
})
test: Dataset({
features: ['instructions_inputs', 'output'],
num_rows: 2418
})
validation: Dataset({
features: ['instructions_inputs', 'output'],
num_rows: 2420
})
})
```
## examples

## token counts
t5

bart
 |
false | # Ozone
The [Ozone dataset](https://archive.ics.uci.edu/ml/datasets/Ozone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| onehr             | Binary classification     | Is an ozone day detected (one-hour peak set)?|
| eighthr           | Binary classification     | Is an ozone day detected (eight-hour peak set)?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/ozone", "onehr")["train"]
``` |
false | # Car
The [Car dataset](https://archive-beta.ics.uci.edu/dataset/19/car+evaluation) from the [UCI repository](https://archive-beta.ics.uci.edu).
Classify the acceptability level of a car for resale.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| car | Multiclass classification | What is the acceptability level of the car?|
| car_binary | Binary classification | Is the car acceptable?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/car", "car_binary")["train"]
``` |
false | # PageBlocks
The [PageBlocks dataset](https://archive-beta.ics.uci.edu/dataset/76/page_blocks) from the [UCI repository](https://archive-beta.ics.uci.edu/).
What type of content does the page block contain?
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| page_blocks | Multiclass classification |
| page_blocks_binary | Binary classification |
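# Usage
A loading sketch following the pattern of the sibling UCI cards in this collection; the Hub id `mstz/page_blocks` is an assumption inferred from that pattern.
```python
from datasets import load_dataset

# Hypothetical Hub id, inferred from the other "mstz/<dataset>" cards.
dataset = load_dataset("mstz/page_blocks", "page_blocks")["train"]
``` |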
false | # Hypo
The Hypo dataset.
# Configurations and tasks
| **Configuration** | **Task** | **Description**|
|-----------------------|---------------------------|----------------|
| hypo                  | Multiclass classification | What kind of hypothyroidism does the patient have? |
| has_hypo              | Binary classification     | Does the patient have hypothyroidism? |
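# Usage
A loading sketch following the pattern of the sibling UCI cards in this collection; the Hub id `mstz/hypo` is an assumption inferred from that pattern.
```python
from datasets import load_dataset

# Hypothetical Hub id, inferred from the other "mstz/<dataset>" cards.
dataset = load_dataset("mstz/hypo", "has_hypo")["train"]
```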
|
false | # P53
The [P53 dataset](https://archive-beta.ics.uci.edu/dataset/170/p53) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| p53                   | Binary classification     | |
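# Usage
A loading sketch following the pattern of the sibling UCI cards in this collection; the Hub id `mstz/p53` is an assumption inferred from that pattern.
```python
from datasets import load_dataset

# Hypothetical Hub id, inferred from the other "mstz/<dataset>" cards.
dataset = load_dataset("mstz/p53", "p53")["train"]
```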
|
false | # Uscensus
The US census dataset from the [UCI repository](https://archive-beta.ics.uci.edu/). |
false | # Dataset Card for "modeling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
false | # Dataset Card for "dolly_hhrlhf"
This is the mosaicml/dolly_hhrlhf dataset from MosaicML, with some duplicates removed.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Dataset Card for [Squad-UNIB]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset was created through the author's own data collection for an individual NLP task.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [1],
"text": ["This is a test text"]
},
"context": "This is a test context.",
"id": "1",
"question": "Is this a test?",
"title": "train test"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
false | |
false | |
false |
# Dataset Card for NRU-HSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [glushkovato/hashtag_segmentation](https://github.com/glushkovato/hashtag_segmentation/)
- **Paper:** [Char-RNN and Active Learning for Hashtag Segmentation](https://arxiv.org/abs/1911.03270)
### Dataset Summary
Real hashtags collected from several pages about civil services on vk.com (a Russian social network) and then segmented manually.
### Languages
Russian
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "ЁлкаВЗазеркалье",
"segmentation": "Ёлка В Зазеркалье"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{glushkova2019char,
title={Char-RNN and Active Learning for Hashtag Segmentation},
author={Glushkova, Taisiya and Artemova, Ekaterina},
journal={arXiv preprint arXiv:1911.03270},
year={2019}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false | # Spanish_Biomedical_Crawled_Corpus
This is a dataset retrieved directly from [this link](https://zenodo.org/record/5510033#.Ykho3-hByUk), which was originally developed by [BSC](https://temu.bsc.es/). This is a direct copy-paste of the usage, limitations and license of the original dataset:
```
Description
The largest Spanish biomedical and health corpus to date, gathered with a massive Spanish health-domain crawler: more than 3,000 URLs were downloaded and preprocessed. The collected data have been preprocessed to produce the CoWeSe (Corpus Web Salud Español) resource, a large-scale and high-quality corpus intended for biomedical and health NLP in Spanish.
Directory structure
CoWeSe.txt: the CoWeSe corpus; an empty line separates each document
License
The corpus is released under this licensing scheme:
- We do not own any of the text from which these data has been extracted and preprocessed to be ready for use for language modeling tasks.
- We license the actual packaging of these data under a CC0 1.0 Universal License
Notice and take down policy
Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
Clearly identify the copyrighted work claimed to be infringed.
Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
Copyright (c) 2021 Text Mining Unit at BSC
```
License, distribution and usage conditions of the original dataset apply.
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this dataset.
### Citation
```
@misc{carrino2021spanish,
title={Spanish Biomedical Crawled Corpus: A Large, Diverse Dataset for Spanish Biomedical Language Models},
author={Casimiro Pio Carrino and Jordi Armengol-Estapé and Ona de Gibert Bonet and Asier Gutiérrez-Fandiño and Aitor Gonzalez-Agirre and Martin Krallinger and Marta Villegas},
year={2021},
eprint={2109.07765},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[Go to the official paper from the dataset for more information](https://arxiv.org/abs/2109.07765).
|
true |
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
true |
# Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
'task': task identifier
'input': column elements of a specific row in the table.
'options': for multiple choice classification, it provides the options to choose from.
'output': target column element of the same row as input.
'pageTitle': the title of the page containing the table.
'outputColName': output column name
'url': url to the website containing the table
'wdcFile': WDC Web Table Corpus file
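As a quick illustration, a single example can be inspected as follows (a sketch; the field names come from this card, and the split name `train` is an assumption):
```python
from datasets import load_dataset

# The 5k subset linked above; the split name is an assumption.
dataset = load_dataset("MicPie/unpredictable_5k", split="train")
example = dataset[0]
# 'options' is only populated for multiple-choice classification tasks.
print(example["task"], example["input"], example["options"], example["output"])
```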
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
|
false |
# Dataset Card for IWSLT 2014
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2014](https://sites.google.com/site/iwsltevaluation2014)
The `de-en` configuration provides German-English pairs in a single `translation` feature.
### Data Splits
| Split      | Examples |
|------------|---------:|
| train      | 171721 |
| validation | 887 |
| test       | 4698 |
|
true |
# Dataset Card for Swiss Court View Generation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Court View Generation is a multilingual, diachronic dataset of 329K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text generation task.
### Supported Tasks and Leaderboards
### Languages
Switzerland has four official languages, of which three (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents (Full) |
|------------|------------|--------------------------|
| German | **de** | 160K |
| French | **fr** | 128K |
| Italian | **it** | 41K |
## Dataset Structure
### Data Fields
```
- decision_id: unique identifier for the decision
- facts: facts section of the decision
- considerations: considerations section of the decision
- label: label of the decision
- law_area: area of law of the decision
- language: language of the decision
- year: year of the decision
- court: court of the decision
- chamber: chamber of the decision
- canton: canton of the decision
- region: region of the decision
```
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
[More Information Needed]
### Contributions
|
false |
# Dataset Card for CSC
A Chinese spelling correction dataset.
- **Repository:** https://github.com/shibing624/pycorrector
## Dataset Description
Chinese Spelling Correction (CSC) is a task to detect and correct misspelled characters in Chinese texts.
CSC is challenging since many Chinese characters are visually or phonologically similar but with quite different semantic meanings.
The Chinese spelling correction dataset contains 270k samples in total. It was obtained by merging and cleaning the original SIGHAN13/14/15 datasets and the Wang271k dataset; it is in JSON format and includes the positions of the erroneous characters.
### Original Dataset Summary
- test.json and dev.json make up the **SIGHAN dataset** (SIGHAN13/14/15), from the [official csc.html page](http://nlp.ee.ncu.edu.tw/resource/csc.html); file size: 339 KB, about 4k samples.
- train.json is the **Wang271k dataset**, from [Automatic-Corpus-Generation (provided by dimmywang)](https://github.com/wdimmy/Automatic-Corpus-Generation/blob/master/corpus/train.sgml); file size: 93 MB, 270k samples.
If you only want to use the SIGHAN dataset, you can load it like this:
```python
from datasets import load_dataset
dev_ds = load_dataset('shibing624/CSC', split='validation')
print(dev_ds)
print(dev_ds[0])
test_ds = load_dataset('shibing624/CSC', split='test')
print(test_ds)
print(test_ds[0])
```
### Supported Tasks and Leaderboards
Chinese spelling correction task.
The dataset is designed for training pretrained language models on the CSC task.
### Languages
The data in CSC are in Chinese.
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"id": "B2-4029-3",
"original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。",
"wrong_ids": [
5,
31
],
"correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。"
}
```
### Data Fields
Field descriptions (see the sketch below):
- id: unique identifier (carries no meaning)
- original_text: the original text containing errors
- wrong_ids: positions of the erroneous characters, starting from 0
- correct_text: the corrected text
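A small sketch, using the example record above, of how `wrong_ids` indexes into `original_text`:
```python
# Example record taken from the "Data Instances" section above.
record = {
    "original_text": "晚间会听到嗓音,白天的时候大家都不会太在意,但是在睡觉的时候这嗓音成为大家的恶梦。",
    "correct_text": "晚间会听到噪音,白天的时候大家都不会太在意,但是在睡觉的时候这噪音成为大家的恶梦。",
    "wrong_ids": [5, 31],
}
for i in record["wrong_ids"]:
    # Each listed position holds a character that differs from correct_text.
    print(i, record["original_text"][i], "->", record["correct_text"][i])
# 5 嗓 -> 噪
# 31 嗓 -> 噪
```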
### Data Splits
| | train | dev | test |
|---------------|------:|--:|--:|
| CSC           | 251,835 | 27,981 | 1,100 |
### Licensing Information
The dataset is available under the Apache 2.0.
### Citation Information
```latex
@misc{Xu_Pycorrector_Text_error,
title={Pycorrector: Text error correction tool},
author={Xu Ming},
year={2021},
howpublished={\url{https://github.com/shibing624/pycorrector}},
}
```
### Contributions
Compiled and uploaded by [shibing624](https://github.com/shibing624). |
false | # Heart
The [Heart dataset](https://archive.ics.uci.edu/ml/datasets/Heart) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Does the patient have heart disease?
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| hungary | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/heart", "hungary")["train"]
``` |
false | # Mushroom
The [Mushroom dataset](https://archive.ics.uci.edu/ml/datasets/Mushroom) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|---------------------------|
| mushroom | Binary classification | Is the mushroom poisonous?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/mushroom")["train"]
``` |
false | # Contraceptive
The [Contraceptive dataset](https://archive-beta.ics.uci.edu/dataset/30/contraceptive+method+choice) from the [UCI repository](https://archive-beta.ics.uci.edu).
Does the couple use contraceptives?
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-------------------------|
| contraceptive | Binary classification | Does the couple use contraceptives?|
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/contraceptive", "contraceptive")["train"]
``` |
false | # Glass
The [Glass dataset](https://archive-beta.ics.uci.edu/dataset/42/glass+identification) from the [UCI repository](https://archive-beta.ics.uci.edu).
Classify the type of glass.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|--------------------------|
| glass | Multiclass classification | Classify glass type. |
| windows | Binary classification | Is this windows glass? |
| vehicles | Binary classification | Is this vehicles glass? |
| containers | Binary classification | Is this containers glass?|
| tableware | Binary classification | Is this tableware glass? |
| headlamps | Binary classification | Is this headlamps glass? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/glass", "glass")["train"]
``` |
false |
# Dataset Card for RVL-CDIP
## Extension
The data loader provides support for loading easyOCR files together with the images.
These files are not included under `../data`, but are available upon request via email <firstname@contract.fit>.
## Table of Contents
- [Dataset Card for RVL-CDIP](#dataset-card-for-rvl-cdip)
- [Extension](#extension)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The RVL-CDIP Dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)
- **Repository:**
- **Paper:** [Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval](https://arxiv.org/abs/1502.07058)
- **Leaderboard:** [RVL-CDIP leaderboard](https://paperswithcode.com/dataset/rvl-cdip)
- **Point of Contact:** [Adam W. Harley](mailto:aharley@cmu.edu)
### Dataset Summary
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available [here](https://paperswithcode.com/sota/document-image-classification-on-rvl-cdip).
### Languages
All the classes and documents use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the training set is provided below :
```
{
'image': <PIL.TiffImagePlugin.TiffImageFile image mode=L size=754x1000 at 0x7F9A5E92CA90>,
'label': 15
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing a document.
- `label`: an `int` classification label.
<details>
<summary>Class Label Mappings</summary>
```json
{
"0": "letter",
"1": "form",
"2": "email",
"3": "handwritten",
"4": "advertisement",
"5": "scientific report",
"6": "scientific publication",
"7": "specification",
"8": "file folder",
"9": "news article",
"10": "budget",
"11": "invoice",
"12": "presentation",
"13": "questionnaire",
"14": "resume",
"15": "memo"
}
```
</details>
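As a short illustration, the integer label can be decoded back into its class name through the dataset's `ClassLabel` feature metadata (a sketch; the Hub id `rvl_cdip` and the split name are assumptions):
```python
from datasets import load_dataset

# Hypothetical Hub id and split name.
dataset = load_dataset("rvl_cdip", split="train")
label_names = dataset.features["label"].names
sample = dataset[0]
print(sample["image"].size, "->", label_names[sample["label"]])  # e.g. (754, 1000) -> memo
```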
### Data Splits
| |train|test|validation|
|----------|----:|----:|---------:|
|# of examples|320,000|40,000|40,000|
The dataset was split in proportions similar to those of ImageNet.
- 320,000 images were used for training,
- 40,000 images for validation, and
- 40,000 images for testing.
## Dataset Creation
### Curation Rationale
From the paper:
> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000
> document images across 16 categories, useful for training new CNNs for document analysis.
### Source Data
#### Initial Data Collection and Normalization
The same as in the IIT-CDIP collection.
#### Who are the source language producers?
The same as in the IIT-CDIP collection.
### Annotations
#### Annotation process
The same as in the IIT-CDIP collection.
#### Who are the annotators?
The same as in the IIT-CDIP collection.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.
### Licensing Information
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
### Citation Information
```bibtex
@inproceedings{harley2015icdar,
title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval},
author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis},
booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
year = {2015}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. |
false | |
false |
# Dataset Card for Erhu Playing Technique Database (4-class)
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/erhu_playing_tech_4>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This audio dataset contains 927 audio clips recorded by the China Conservatory of Music, each demonstrating an erhu playing technique. It is part of DCMI [1], a database of Chinese musical instruments. We divide all the recorded techniques into the following 11 categories according to [2][3][4][5]:
```
+ detache 分弓 (72)
+ forte (8)
+ medium (8)
+ piano (56)
+ diangong 垫弓 (28)
+ harmonic 泛音 (18)
+ natural 自然泛音 (6)
+ artificial 人工泛音 (12)
+ legato&slide&glissando 连弓&滑音&大滑音 (114)
+ glissando_down 大滑音 下行 (4)
+ glissando_up 大滑音 上行 (4)
+ huihuayin_down 下回滑音 (18)
+ huihuayin_long_down 后下回滑音 (12)
+ legato&slide_up 向上连弓 包含滑音 (24)
+ forte (8)
+ medium (8)
+ piano (8)
+ slide_dianzhi 垫指滑音 (4)
+ slide_down 向下滑音 (16)
+ slide_legato 连线滑音 (16)
+ slide_up 向上滑音 (16)
+ percussive 打击类音效 (21)
+ dajigong 大击弓 (11)
+ horse 马嘶 (2)
+ stick 敲击弓 (8)
+ pizzicato 拨弦 (96)
+ forte (30)
+ medium (29)
+ piano (30)
+ left 左手勾弦 (6)
+ ricochet 抛弓 (36)
+ staccato 顿弓 (141)
+ forte (47)
+ medium (46)
+ piano (48)
+ tremolo 颤弓 (144)
+ forte (48)
+ medium (48)
+ piano (48)
+ trill 颤音 (202)
+ long 长颤音 (141)
+ forte (46)
+ medium (47)
+ piano (48)
+ short 短颤音 (61)
+ down 下颤音 (30)
+ up 上颤音 (31)
+ vibrato 揉弦 (56)
+ late (13)
+ press 压揉 (6)
+ roll 滚揉 (28)
+ slide 滑揉 (9)
```
### Supported Tasks and Leaderboards
Erhu Playing Technique Classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
Audio clips in `.wav` format.
### Data Fields
Four class labels: `trill`, `staccato`, `slide`, `others`.
### Data Splits
trainset, testset
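A minimal loading sketch, using the repo id from the card header; the exact split names exposed by the loader are an assumption, so the returned `DatasetDict` should be inspected:

```python
from datasets import load_dataset

# Sketch: load the dataset by the repo id given in the card header.
# Split names ("trainset"/"testset") are an assumption; print the
# DatasetDict to see the actual names.
ds = load_dataset("CCMUSIC/erhu_playing_tech_4")
print(ds)
```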
## Dataset Creation
### Curation Rationale
Lack of a dataset for erhu playing techniques
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
This dataset is an audio dataset containing 927 audio clips recorded by China Conservatory of Music, each with a performance technique of erhu.
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Advancing the Digitization Process of Traditional Chinese Instruments
### Discussion of Biases
Only for Erhu
### Other Known Limitations
Not Specific Enough in Categorization
## Additional Information
### Dataset Curators
Zijin Li
### Licensing Information
```
MIT License
Copyright (c) 2023 CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu and Monan Zhou and Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
[1] Zijin Li, Xiaojing Liang, Jingyu Liu, Wei Li, Jiaxing Zhu, Baoqiang Han, DCMI: A Database of Chinese Musical Instruments, DLfM ’18, Sep 2018, Paris, France<br>
[2] [Chapter 9. Erhu of Bowed Stringed Instruments](https://www.atlasensemble.nl/assets/files/instruments/Erhu/Erhu%20by%20Samuel%20Wong%20Shengmiao.pdf)<br>
[3] 梁广程, 潘永璋. 乐器法手册(增订本) [Handbook of Instrumentation (Revised Edition)] [M]. 人民音乐出版社 (People's Music Publishing House), 1996.<br>
[4] [Erhu, info for composers](https://www.lantungmusic.com/erhu/for-composers)<br>
[5] 权吉浩, 中西乐器法 [Chinese and Western Instrumentation] [M]. 人民音乐出版社 (People's Music Publishing House), 2016.
### Contributions
Provides a dataset for erhu playing techniques
false | |
false |
# IVA Kotlin GitHub Code Dataset
## Dataset Description
This is the curated IVA Kotlin dataset extracted from GitHub.
It contains curated Kotlin files gathered with the purpose to train a code generation model.
The dataset consists of 383380 Kotlin code files from GitHub totaling ~542MB of data.
The [uncurated](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint) dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
To download the full dataset:
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')
```
The full set of fields can be inspected on an individual record:
```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')
print(dataset[723])
#OUTPUT:
{
"repo_name":"oboenikui/UnivCoopFeliCaReader",
"path":"app/src/main/java/com/oboenikui/campusfelica/ScannerActivity.kt",
"copies":"1",
"size":"5635",
"content":"....public override fun onPause() {\n if (this.isFinishing) {\n adapter.disableForegroundDispatch(this)\n }\n super.onPause()\n }\n\n override ...}\n",
"license":"apache-2.0",
"hash":"e88cfd99346cbef640fc540aac3bf20b",
"line_mean":37.8620689655,
"line_max":199,
"alpha_frac":0.5724933452,
"ratio":5.0222816399,
"autogenerated":false,
"config_or_test":false,
"has_no_keywords":false,
"has_few_assignments":false
}
```
## Data Structure
### Data Fields
|Field|Type|Description|
|---|---|---|
|repo_name|string|name of the GitHub repository|
|path|string|path of the file in GitHub repository|
|copies|string|number of occurrences in dataset|
|content|string|content of source file|
|size|string|size of the source file in bytes|
|license|string|license of GitHub repository|
|hash|string|Hash of content field.|
|line_mean|number|Mean line length of the content.|
|line_max|number|Max line length of the content.|
|alpha_frac|number|Fraction of alphanumeric characters in the content.|
|ratio|number|Ratio between the number of characters and the number of tokens after tokenization.|
|autogenerated|boolean|True if the content is autogenerated, detected by keywords in the first few lines of the file.|
|config_or_test|boolean|True if the content is a configuration file or a unit test.|
|has_no_keywords|boolean|True if the file has none of the keywords for the Kotlin programming language.|
|has_few_assignments|boolean|True if the file uses the symbol `=` fewer than a minimum number of times (3 in this curation).|
### Instance
```json
{
"repo_name":"oboenikui/UnivCoopFeliCaReader",
"path":"app/src/main/java/com/oboenikui/campusfelica/ScannerActivity.kt",
"copies":"1",
"size":"5635",
"content":"....",
"license":"apache-2.0",
"hash":"e88cfd99346cbef640fc540aac3bf20b",
"line_mean":37.8620689655,
"line_max":199,
"alpha_frac":0.5724933452,
"ratio":5.0222816399,
"autogenerated":false,
"config_or_test":false,
"has_no_keywords":false,
"has_few_assignments":false
}
```
## Languages
The dataset contains only Kotlin files.
```json
{
"Kotlin": [".kt"]
}
```
## Licenses
Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.
```json
{
"agpl-3.0":4052,
"apache-2.0":114641,
"artistic-2.0":159,
"bsd-2-clause":474,
"bsd-3-clause":4571,
"cc0-1.0":198,
"epl-1.0":991,
"gpl-2.0":5625,
"gpl-3.0":25102,
"isc":436,
"lgpl-2.1":146,
"lgpl-3.0":3406,
"mit":39399,
"mpl-2.0":1819,
"unlicense":824
}
```
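Since every entry carries its license, the dataset can be narrowed to, say, permissively licensed files. A minimal sketch using streaming; the set of licenses below is an illustrative choice, not part of the dataset:

```python
from datasets import load_dataset

# Sketch: stream the dataset and keep only files under selected licenses.
# The set of "permissive" licenses is an illustrative choice.
PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc", "cc0-1.0", "unlicense"}

ds = load_dataset("mvasiliniuc/iva-kotlin-codeint-clean", split="train", streaming=True)
permissive_files = ds.filter(lambda example: example["license"] in PERMISSIVE)

for example in permissive_files.take(3):
    print(example["repo_name"], example["license"])
```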
## Dataset Statistics
```json
{
"Total size": "~261 MB",
"Number of files": 201843,
"Number of files under 500 bytes": 3697,
"Average file size in bytes": 5205,
}
```
## Curation Process
* Removal of duplication files based on file hash.
* Removal of file templates, i.e. files containing any of the following placeholders: `${PACKAGE_NAME}`, `${NAME}`, `${VIEWHOLDER_CLASS}`, `${ITEM_CLASS}`.
* Removal of files containing any of the following words in the first 10 lines: `generated`, `auto-generated`, `autogenerated`, `automatically generated`.
* Removal, with a probability of 0.7, of files containing any of the following words in the first 10 lines: `test`, `unit test`, `config`, `XCTest`, `JUnit`.
* Removal of files whose fraction of alphanumeric characters is below 0.3.
* Removal of near duplicates based on MinHash and Jaccard similarity.
* Removal of files with a mean line length above 100.
* Removal, with a probability of 0.7, of files that mention none of the following keywords: `fun `, `val `, `var `, `if `, `else `, `while `, `for `, `return `, `class `, `data `, `struct `, `interface `, `when `, `catch `.
* Removal of files that use the assignment operator `=` fewer than 3 times.
* Removal of files where the ratio between the number of characters and the number of tokens after tokenization is lower than 1.5.
The curation process is derived from the one used in the CodeParrot project: https://huggingface.co/codeparrot. A minimal sketch of two of these filters follows.
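This sketch approximates the alphanumeric-fraction and assignment-count filters with the thresholds stated above; it is an approximation, not the actual curation code:

```python
# Approximation of two curation filters described above, with the stated
# thresholds: alphanumeric fraction >= 0.3 and at least 3 '=' characters.
def passes_basic_filters(content: str, min_assignments: int = 3) -> bool:
    if not content:
        return False
    alnum_frac = sum(ch.isalnum() for ch in content) / len(content)
    if alnum_frac < 0.3:
        return False
    # Counting raw '=' characters also counts '==' and '<='; the actual
    # heuristic may tokenize more carefully.
    return content.count("=") >= min_assignments

assert passes_basic_filters("val x = 1\nval y = 2\nval z = x + y")
```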
## Data Splits
The dataset contains only a train split, which is further separated into train and valid subsets, available here:
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid
## Considerations for Using the Data
The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
|
false | |
false |
<div align="center">
<img width="640" alt="manot/pothole-segmentation" src="https://huggingface.co/datasets/manot/pothole-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['potholes', 'object', 'pothole', 'potholes']
```
### Number of Images
```json
{"valid": 157, "test": 80, "train": 582}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("manot/pothole-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/abdulmohsen-fahad-f7pdw/road-damage-xvt2d/dataset/3](https://universe.roboflow.com/abdulmohsen-fahad-f7pdw/road-damage-xvt2d/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ road-damage-xvt2d_dataset,
title = { road damage Dataset },
type = { Open Source Dataset },
author = { abdulmohsen fahad },
howpublished = { \url{ https://universe.roboflow.com/abdulmohsen-fahad-f7pdw/road-damage-xvt2d } },
url = { https://universe.roboflow.com/abdulmohsen-fahad-f7pdw/road-damage-xvt2d },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jun },
note = { visited on 2023-06-13 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on June 13, 2023 at 8:47 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 819 images.
Potholes are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
|
true | # full-hh-rlhf-ru
This is a translated version of the [Dahoas/full-hh-rlhf](https://huggingface.co/datasets/Dahoas/full-hh-rlhf) dataset into Russian.
false |
# Dataset Card for HashSet Distant Sampled
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel-cased hashtags, each paired with its segmentation.
HashSet Distant Sampled is a sample of 20,000 camel-cased hashtags from the HashSet Distant dataset.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters (see the sketch after this list). Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
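The whitespace-only relation between `hashtag` and `segmentation` can be checked directly. A minimal sketch, using the example instance above:

```python
# Sketch of the invariant described above: `segmentation` differs from
# `hashtag` only by whitespace, so stripping spaces recovers the hashtag.
def is_valid_pair(hashtag: str, segmentation: str) -> bool:
    return segmentation.replace(" ", "") == hashtag

assert is_valid_pair("Youth4Nation", "Youth 4 Nation")
```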
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for HashSet Distant
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel-cased hashtags, each paired with its segmentation.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for HashSet Manual
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other
baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Manual: contains 1.9k manually annotated hashtags. Each row consists of the hashtag, the segmented hashtag, named entity annotations, and flags indicating whether the hashtag contains a mix of Hindi and English tokens and/or non-English tokens.
### Languages
Mostly Hindi and English.
## Dataset Structure
### Data Instances
```
{
"index": 10,
"hashtag": "goodnewsmegan",
"segmentation": "good news megan",
"spans": {
"start": [
8
],
"end": [
13
],
"text": [
"megan"
]
},
"source": "roman",
"gold_position": null,
"mix": false,
"other": false,
"ner": true,
"annotator_id": 1,
"annotation_id": 2088,
"created_at": "2021-12-30 17:10:33.800607",
"updated_at": "2021-12-30 17:10:59.714840",
"lead_time": 3896.182,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
"candidate": [
"goodnewsmegan",
"goodnewsmeg an",
"goodnews megan",
"goodnewsmega n",
"go odnewsmegan",
"good news megan",
"good newsmegan",
"g oodnewsmegan",
"goodnewsme gan",
"goodnewsm egan"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `spans`: named entity spans (see the sketch after this list).
- `source`: data source.
- `gold_position`: position of the gold segmentation (the `segmentation` field) inside the `rank` candidates; null when it is not among them.
- `mix`: the hashtag has a mix of English and Hindi tokens.
- `other`: the hashtag has non-English tokens.
- `ner`: the hashtag has named entities.
- `annotator_id`: annotator ID.
- `annotation_id`: annotation ID.
- `created_at`: creation date timestamp.
- `updated_at`: update date timestamp.
- `lead_time`: lead time field annotated by Kodali et al.
- `rank`: rank of each candidate selected by a baseline word segmenter (WordBreaker).
- `candidates`: candidates selected by a baseline word segmenter (WordBreaker).
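A sketch of how the `spans` offsets relate to the hashtag: in the instance above, `"goodnewsmegan"[8:13] == "megan"`, which suggests the offsets index into `hashtag` with an exclusive `end`. Treating that as the convention is an assumption inferred from this single example:

```python
# Sketch: spans offsets appear to index into `hashtag` with an exclusive
# `end` (an assumption inferred from the example instance above).
example = {
    "hashtag": "goodnewsmegan",
    "spans": {"start": [8], "end": [13], "text": ["megan"]},
}
spans = example["spans"]
for start, end, text in zip(spans["start"], spans["end"], spans["text"]):
    assert example["hashtag"][start:end] == text
```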
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for STAN Small
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [mounicam/hashtag_master](https://github.com/mounicam/hashtag_master)
- **Paper:** [Multi-task Pairwise Neural Ranking for Hashtag Segmentation](https://aclanthology.org/P19-1242/)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 300,
"hashtag": "microsoftfail",
"segmentation": "microsoft fail",
"alternatives": {
"segmentation": [
"Microsoft fail"
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `alternatives`: other segmentations that are also accepted as a gold segmentation.
Although `segmentation` has exactly the same characters as `hashtag` except for the spaces, the segmentations inside `alternatives` may have characters corrected to uppercase.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for BOUN
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
Dev-BOUN is a development set that includes 500 manually segmented hashtags. These are selected from tweets about movies,
TV shows, popular people, sports teams, etc.
Test-BOUN is a test set that includes 500 manually segmented hashtags, selected from tweets about the same topics.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "tryingtosleep",
"segmentation": "trying to sleep"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for Dev-Stanford
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
1000 hashtags manually segmented by Çelebi et al. for development purposes,
randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "marathonmonday",
"segmentation": "marathon monday"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
false |
# Dataset Card for Test-Stanford
## Dataset Description
- **Paper:** [Towards Deep Semantic Analysis Of Hashtags](https://arxiv.org/abs/1501.03210)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 1467856821,
"hashtag": "therapyfail",
"segmentation": "therapy fail",
"gold_position": 8,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20
],
"candidate": [
"therap y fail",
"the rap y fail",
"t her apy fail",
"the rap yfail",
"t he rap y fail",
"thera py fail",
"ther apy fail",
"th era py fail",
"therapy fail",
"therapy fai l",
"the r apy fail",
"the rapyfa il",
"the rapy fail",
"t herapy fail",
"the rapyfail",
"therapy f ai l",
"therapy fa il",
"the rapyf a il",
"therapy f ail",
"the ra py fail"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `gold_position`: position of the gold segmentation (the `segmentation` field) inside the `rank` candidates (see the sketch after this list).
- `rank`: rank of each candidate selected by a baseline word segmenter (Segmentations Seeder Module).
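A small sketch of how `gold_position` relates to `rank`: in the instance above, `rank["candidate"][8] == "therapy fail"`, so the position reads as a 0-based index into the candidate list (an inference from this single example):

```python
# Sketch: gold_position is taken to be a 0-based index into
# rank["candidate"] (inferred from the instance above, where
# candidate[8] == "therapy fail" == segmentation).
def gold_candidate(example: dict):
    pos = example["gold_position"]
    if pos is None:  # assumed possible, mirroring the HashSet Manual card
        return None
    return example["rank"]["candidate"][pos]
```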
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
true |
# Dataset Card for "zulu-stance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://arxiv.org/abs/2205.03153](https://arxiv.org/abs/2205.03153)
- **Repository:**
- **Paper:** [https://arxiv.org/pdf/2205.03153](https://arxiv.org/pdf/2205.03153)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 212.54 KiB
- **Size of the generated dataset:** 186.76 KiB
- **Total amount of disk used:** 399.30 KiB
### Dataset Summary
This is a stance detection dataset in the Zulu language. The data is translated to Zulu by Zulu native speakers, from English source texts.
Our paper aims at utilizing this progress made for English to transfer that knowledge into other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box, non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve similar results for stance detection for the Zulu language, the target language in this work, as are found for English. A natively-translated dataset is used for evaluation of domain transfer.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Zulu (`bcp47:zu`)
## Dataset Structure
### Data Instances
#### zulu_stance
- **Size of downloaded dataset files:** 212.54 KiB
- **Size of the generated dataset:** 186.76 KiB
- **Total amount of disk used:** 399.30 KiB
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'ubukhulu be-islam buba sobala lapho i-smartphone ifaka i-ramayana njengo-ramadan. #semst',
'target': 'Atheism',
'stance': 1
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string` expressing a stance.
- `target`: a `string` of the target/topic annotated here.
- `stance`: a class label representing the stance the text expresses towards the target. Full tagset with indices:
```
0: "FAVOR",
1: "AGAINST",
2: "NONE",
```
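A minimal sketch of decoding the integer stance back into its name, using the tagset above:

```python
from datasets import ClassLabel

# Sketch: decode the stance label of the example above using the tagset
# listed in this card.
stance = ClassLabel(names=["FAVOR", "AGAINST", "NONE"])
print(stance.int2str(1))  # -> "AGAINST"
```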
### Data Splits
| name |train|
|---------|----:|
|zulu_stance|1343 sentences|
## Dataset Creation
### Curation Rationale
To enable stance detection in Zulu and also to measure domain transfer in translation
### Source Data
#### Initial Data Collection and Normalization
The original data is taken from [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/),
and then translated manually to Zulu.
#### Who are the source language producers?
English-speaking Twitter users.
### Annotations
#### Annotation process
See [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/); the annotations are taken from there.
#### Who are the annotators?
See [Semeval2016 task 6: Detecting stance in tweets.](https://aclanthology.org/S16-1003/); the annotations are taken from there.
### Personal and Sensitive Information
The data was public at the time of collection. User names are preserved.
## Considerations for Using the Data
### Social Impact of Dataset
There's a risk of user-deleted content being in this data. The data has NOT been vetted for any content, so there's a risk of harmful text.
### Discussion of Biases
While the data is in Zulu, the source text is not from or about Zulu-speakers, and so still expresses the social biases and topics found in English-speaking Twitter users. Further, some of the topics are USA-specific. The sentiments and ideas in this dataset do not represent Zulu speakers.
### Other Known Limitations
The above limitations apply.
## Additional Information
### Dataset Curators
The dataset is curated by the paper's authors.
### Licensing Information
The authors distribute this data under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{dlamini_zulu_stance,
title={Bridging the Domain Gap for Stance Detection for the Zulu language},
author={Dlamini, Gcinizwe and Bekkouch, Imad Eddine Ibrahim and Khan, Adil and Derczynski, Leon},
booktitle={Proceedings of IEEE IntelliSys},
year={2022}
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
false |
# Dataset Card for "crd3"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CRD3 homepage](https://github.com/RevanthRameshkumar/CRD3)
- **Repository:** [CRD3 repository](https://github.com/RevanthRameshkumar/CRD3)
- **Paper:** [Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset](https://www.aclweb.org/anthology/2020.acl-main.459/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 279.93 MB
- **Size of the generated dataset:** 4020.33 MB
- **Total amount of disk used:** 4300.25 MB
### Dataset Summary
Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.
Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.
The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding
abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player
collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,
and semantic ties to the previous dialogues.
### Supported Tasks and Leaderboards
`summarization`: The dataset can be used to train a model for abstractive summarization. A [fast abstractive summarization-RL](https://github.com/ChenRocks/fast_abs_rl) model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.
### Languages
The text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.
## Dataset Structure
We show detailed information for the default configuration of the dataset.
### Data Instances
#### default
- **Size of downloaded dataset files:** 279.93 MB
- **Size of the generated dataset:** 4020.33 MB
- **Total amount of disk used:** 4300.25 MB
An example of 'train' looks as follows.
```
{
"alignment_score": 3.679936647415161,
"chunk": "Wish them a Happy Birthday on their Facebook and Twitter pages! Also, as a reminder: D&D Beyond streams their weekly show (\"And Beyond\") every Wednesday on twitch.tv/dndbeyond.",
"chunk_id": 1,
"turn_end": 6,
"turn_num": 4,
"turn_start": 4,
"turns": {
"names": ["SAM"],
"utterances": ["Yesterday, guys, was D&D Beyond's first one--", "first one-year anniversary. Take two. Hey guys,", "yesterday was D&D Beyond's one-year anniversary.", "Wish them a happy birthday on their Facebook and", "Twitter pages."]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `chunk`: a `string` feature.
- `chunk_id`: a `int32` feature.
- `turn_start`: a `int32` feature.
- `turn_end`: a `int32` feature.
- `alignment_score`: a `float32` feature.
- `turn_num`: a `int32` feature.
- `turns`: a dictionary feature containing:
- `names`: a `string` feature.
- `utterances`: a `string` feature.
### Data Splits
| name | train |validation| test |
|-------|------:|---------:|------:|
|default|26,232| 3,470|4,541|
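A minimal loading sketch, assuming the dataset is available through the `datasets` library under the id `crd3`:

```python
from datasets import load_dataset

# Sketch: load the default configuration; the Hub id "crd3" is an assumption.
ds = load_dataset("crd3", split="train")
example = ds[0]
print(example["chunk_id"], example["alignment_score"])
print(example["chunk"][:80])
```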
## Dataset Creation
### Curation Rationale
Dialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.
### Source Data
#### Initial Data Collection and Normalization
Dungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. spelling of speaker names) were manually resolved.
The abstractive summaries were collected from the [Critical Role Fandom wiki](https://criticalrole.fandom.com/)
#### Who are the source language producers?
The language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
CRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.
### Licensing Information
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/), matching the license of the Critical Role Wiki: https://criticalrole.fandom.com/
### Citation Information
```
@inproceedings{rameshkumar2020storytelling,
title = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},
author = {Rameshkumar, Revanth and Bailey, Peter},
year = {2020},
publisher = {Association for Computational Linguistics},
conference = {ACL}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
|
false |
# Dataset Card for audioset2022
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [AudioSet Ontology](https://research.google.com/audioset/ontology/index.html)
- **Repository:** [Needs More Information]
- **Paper:** [Audio Set: An ontology and human-labeled dataset for audio events](https://research.google.com/pubs/pub45857.html)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/dataset/audioset)
### Dataset Summary
The AudioSet ontology is a collection of sound events organized in a hierarchy. The ontology covers a wide range of everyday sounds, from human and animal sounds, to natural and environmental sounds, to musical and miscellaneous sounds.
**This repository only includes audio files for DCASE 2022 - Task 3**
The included labels are limited to:
- Female speech, woman speaking
- Male speech, man speaking
- Clapping
- Telephone
- Telephone bell ringing
- Ringtone
- Laughter
- Domestic sounds, home sounds
- Vacuum cleaner
- Kettle whistle
- Mechanical fan
- Walk, footsteps
- Door
- Cupboard open or close
- Music
- Background music
- Pop music
- Musical instrument
- Acoustic guitar
- Marimba, xylophone
- Cowbell
- Piano
- Electric piano
- Rattle (instrument)
- Water tap, faucet
- Bell
- Bicycle bell
- Chime
- Knock
### Supported Tasks and Leaderboards
- `audio-classification`: The dataset can be used to train a model for Sound Event Detection/Localization.
**The recordings only include single-channel audio. For localization tasks, RIR information will need to be applied.**
### Languages
None
## Dataset Structure
### Data Instances
**WIP**
```
{
'file':
}
```
### Data Fields
- file: A path to the downloaded audio file in .mp3 format.
### Data Splits
This dataset only includes audio files from the unbalanced train list.
The data comprises two splits: weak labels and strong labels.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially downloaded by Nelson Yalta (nelson.yalta@ieee.org).
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0)
### Citation Information
```
@inproceedings{45857,
title = {Audio Set: An ontology and human-labeled dataset for audio events},
author = {Jort F. Gemmeke and Daniel P. W. Ellis and Dylan Freedman and Aren Jansen and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin Ritter},
year = {2017},
booktitle = {Proc. IEEE ICASSP 2017},
address = {New Orleans, LA}
}
```
|
false |
# Dataset Card for "PiC: Phrase Retrieval"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://phrase-in-context.github.io/](https://phrase-in-context.github.io/)
- **Repository:** [https://github.com/phrase-in-context](https://github.com/phrase-in-context)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Thang Pham](<thangpham@auburn.edu>)
### Dataset Summary
PR is a phrase retrieval task with the goal of finding a phrase **t** in a given document **d** such that **t** is semantically similar to the query phrase, which is the paraphrase **q**<sub>1</sub> provided by annotators.
We release two versions of PR: **PR-pass** and **PR-page**, i.e., datasets of 3-tuples (query **q**<sub>1</sub>, target phrase **t**, document **d**) where **d** is a random 11-sentence passage that contains **t** or an entire Wikipedia page.
While PR-pass contains 28,147 examples, PR-page contains slightly fewer examples (28,098) as we remove those trivial examples whose Wikipedia pages contain exactly the query phrase (in addition to the target phrase).
Both datasets are split into 5K/3K/~20K for test/dev/train, respectively.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
**PR-pass**
* Size of downloaded dataset files: 43.61 MB
* Size of the generated dataset: 36.98 MB
* Total amount of disk used: 80.59 MB
An example of 'train' looks as follows.
```
{
"id": "3478-1",
"title": "https://en.wikipedia.org/wiki?curid=181261",
"context": "The 425t was a 'pizza box' design with a single network expansion slot. The 433s was a desk-side server systems with multiple expansion slots. Compatibility. PC compatibility was possible either through software emulation, using the optional product DPCE, or through a plug-in card carrying an Intel 80286 processor. A third-party plug-in card with a 386 was also available. An Apollo Token Ring network card could also be placed in a standard PC and network drivers allowed it to connect to a server running a PC SMB (Server Message Block) file server. Usage. Although Apollo systems were easy to use and administer, they became less cost-effective because the proprietary operating system made software more expensive than Unix software. The 68K processors were slower than the new RISC chips from Sun and Hewlett-Packard. Apollo addressed both problems by introducing the RISC-based DN10000 and Unix-friendly Domain/OS operating system. However, the DN10000, though fast, was extremely expensive, and a reliable version of Domain/OS came too late to make a difference.",
"query": "dependable adaptation",
"answers": {
"text": ["reliable version"],
"answer_start": [1006]
}
}
```
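A minimal loading sketch for PR-pass; the Hub id and config name below are assumptions based on the naming above, and `answer_start` is taken to be a character offset into `context`, consistent with the example shown:

```python
from datasets import load_dataset

# Sketch: Hub id and config name are assumptions based on this card.
ds = load_dataset("PiC/phrase_retrieval", "PR-pass", split="train")
ex = ds[0]
start = ex["answers"]["answer_start"][0]
text = ex["answers"]["text"][0]
# answer_start is assumed to be a character offset into `context`,
# consistent with the SQuAD-style example shown above.
assert ex["context"][start : start + len(text)] == text
```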
**PR-page**
* Size of downloaded dataset files: 421.56 MB
* Size of the generated dataset: 412.17 MB
* Total amount of disk used: 833.73 MB
An example of 'train' looks as follows.
```
{
"id": "5961-2",
"title": "https://en.wikipedia.org/wiki?curid=354711",
"context": "Joseph Locke FRSA (9 August 1805 – 18 September 1860) was a notable English civil engineer of the nineteenth century, particularly associated with railway projects. Locke ranked alongside Robert Stephenson and Isambard Kingdom Brunel as one of the major pioneers of railway development. Early life and career. Locke was born in Attercliffe, Sheffield in Yorkshire, moving to nearby Barnsley when he was five. By the age of 17, Joseph had already served an apprenticeship under William Stobart at Pelaw, on the south bank of the Tyne, and under his own father, William. He was an experienced mining engineer, able to survey, sink shafts, to construct railways, tunnels and stationary engines. Joseph's father had been a manager at Wallbottle colliery on Tyneside when George Stephenson was a fireman there. In 1823, when Joseph was 17, Stephenson was involved with planning the Stockton and Darlington Railway. He and his son Robert Stephenson visited William Locke and his son at Barnsley and it was arranged that Joseph would go to work for the Stephensons. The Stephensons established a locomotive works near Forth Street, Newcastle upon Tyne, to manufacture locomotives for the new railway. Joseph Locke, despite his youth, soon established a position of authority. He and Robert Stephenson became close friends, but their friendship was interrupted, in 1824, by Robert leaving to work in Colombia for three years. Liverpool and Manchester Railway. George Stephenson carried out the original survey of the line of the Liverpool and Manchester Railway, but this was found to be flawed, and the line was re-surveyed by a talented young engineer, Charles Vignoles. Joseph Locke was asked by the directors to carry out another survey of the proposed tunnel works and produce a report. The report was highly critical of the work already done, which reflected badly on Stephenson. Stephenson was furious and henceforth relations between the two men were strained, although Locke continued to be employed by Stephenson, probably because the latter recognised his worth. Despite the many criticisms of Stephenson's work, when the bill for the new line was finally passed, in 1826, Stephenson was appointed as engineer and he appointed Joseph Locke as his assistant to work alongside Vignoles, who was the other assistant. However, a clash of personalities between Stephenson and Vignoles led to the latter resigning, leaving Locke as the sole assistant engineer. Locke took over responsibility for the western half of the line. One of the major obstacles to be overcome was Chat Moss, a large bog that had to be crossed. Although, Stephenson usually gets the credit for this feat, it is believed that it was Locke who suggested the correct method for crossing the bog. Whilst the line was being built, the directors were trying to decide whether to use standing engines or locomotives to propel the trains. Robert Stephenson and Joseph Locke were convinced that locomotives were vastly superior, and in March 1829 the two men wrote a report demonstrating the superiority of locomotives when used on a busy railway. The report led to the decision by the directors to hold an open trial to find the best locomotive. This was the Rainhill Trials, which were run in October 1829, and were won by \"Rocket\". When the line was finally opened in 1830, it was planned for a procession of eight trains to travel from Liverpool to Manchester and back. George Stephenson drove the leading locomotive \"Northumbrian\" and Joseph Locke drove \"Rocket\". 
The day was marred by the death of William Huskisson, the Member of Parliament for Liverpool, who was struck and killed by \"Rocket\". Grand Junction Railway. In 1829 Locke was George Stephenson's assistant, given the job of surveying the route for the Grand Junction Railway. This new railway was to join Newton-le-Willows on the Liverpool and Manchester Railway with Warrington and then on to Birmingham via Crewe, Stafford and Wolverhampton, a total of 80 miles. Locke is credited with choosing the location for Crewe and recommending the establishment there of shops required for the building and repairs of carriages and wagons as well as engines. During the construction of the Liverpool and Manchester Railway, Stephenson had shown a lack of ability in organising major civil engineering projects. On the other hand, Locke's ability to manage complex projects was well known. The directors of the new railway decided on a compromise whereby Locke was made responsible for the northern half of the line and Stephenson was made responsible for the southern half. However Stephenson's administrative inefficiency soon became apparent, whereas Locke estimated the costs for his section of the line so meticulously and speedily, that he had all of the contracts signed for his section of the line before a single one had been signed for Stephenson's section. The railway company lost patience with Stephenson, but tried to compromise by making both men joint-engineers. Stephenson's pride would not let him accept this, and so he resigned from the project. By autumn of 1835 Locke had become chief engineer for the whole of the line. This caused a rift between the two men, and strained relations between Locke and Robert Stephenson. Up to this point, Locke had always been under George Stephenson's shadow. From then on, he would be his own man, and stand or fall by his own achievements. The line was opened on 4 July 1837. New methods. Locke's route avoided as far as possible major civil engineering works. The main one was the Dutton Viaduct which crosses the River Weaver and the Weaver Navigation between the villages of Dutton and Acton Bridge in Cheshire. The viaduct consists of 20 arches with spans of 20 yards. An important feature of the new railway was the use of double-headed (dumb-bell) wrought-iron rail supported on timber sleepers at 2 ft 6 in intervals. It was intended that when the rails became worn they could be turned over to use the other surface, but in practice it was found that the chairs into which the rails were keyed caused wear to the bottom surface so that it became uneven. However this was still an improvement on the fish-bellied, wrought-iron rails still being used by Robert Stephenson on the London and Birmingham Railway. Locke was more careful than Stephenson to get value for his employers' money. For the Penkridge Viaduct Stephenson had obtained a tender of £26,000. After Locke took over, he gave the potential contractor better information and agreed a price of only £6,000. Locke also tried to avoid tunnels because in those days tunnels often took longer and cost more than planned. The Stephensons regarded 1 in 330 as the maximum slope that an engine could manage and Robert Stephenson achieved this on the London and Birmingham Railway by using seven tunnels which added both cost and delay. Locke avoided tunnels almost completely on the Grand Junction but exceeded the slope limit for six miles south of Crewe. 
Proof of Locke's ability to estimate costs accurately is given by the fact that the construction of the Grand Junction line cost £18,846 per mile as against Locke's estimate of £17,000. This is amazingly accurate compared with the estimated costs for the London and Birmingham Railway (Robert Stephenson) and the Great Western Railway (Brunel). Locke also divided the project into a few large sections rather than many small ones. This allowed him to work closely with his contractors to develop the best methods, overcome problems and personally gain practical experience of the building process and of the contractors themselves. He used the contractors who worked well with him, especially Thomas Brassey and William Mackenzie, on many other projects. Everyone gained from this cooperative approach whereas Brunel's more adversarial approach eventually made it hard for him to get anyone to work for him. Marriage. In 1834 Locke married Phoebe McCreery, with whom he adopted a child. He was elected to the Royal Society in 1838. Lancaster and Carlisle Railway. A significant difference in philosophy between George Stephenson and Joseph Locke and the surveying methods they employed was more than a mere difference of opinion. Stephenson had started his career at a time when locomotives had little power to overcome excessive gradients. Both George and Robert Stephenson were prepared to go to great lengths to avoid steep gradients that would tax the locomotives of the day, even if this meant choosing a circuitous path that added on extra miles to the line of the route. Locke had more confidence in the ability of modern locomotives to climb these gradients. An example of this was the Lancaster and Carlisle Railway, which had to cope with the barrier of the Lake District mountains. In 1839 Stephenson proposed a circuitous route that avoided the Lake District altogether by going all the way round Morecambe Bay and West Cumberland, claiming: 'This is the only practicable line from Liverpool to Carlisle. The making of a railway across Shap Fell is out of the question.' The directors rejected his route and chose the one proposed by Joseph Locke, one that used steep gradients and passed over Shap Fell. The line was completed by Locke and was a success. Locke's reasoned that by avoiding long routes and tunnelling, the line could be finished more quickly, with less capital costs, and could start earning revenue sooner. This became known as the 'up and over' school of engineering (referred to by Rolt as 'Up and Down,' or Rollercoaster). Locke took a similar approach in planning the Caledonian Railway, from Carlisle to Glasgow. In both railways he introduced gradients of 1 in 75, which severely taxed fully laden locomotives, for even as more powerful locomotives were introduced, the trains that they pulled became heavier. It may therefore be argued that Locke, although his philosophy carried the day, was not entirely correct in his reasoning. Even today, Shap Fell is a severe test of any locomotive. Manchester and Sheffield Railway. Locke was subsequently appointed to build a railway line from Manchester to Sheffield, replacing Charles Vignoles as chief engineer, after the latter had been beset by misfortunes and financial difficulties. The project included the three-mile Woodhead Tunnel, and the line opened, after many delays, on 23 December 1845. The building of the line required over a thousand navvies and cost the lives of thirty-two of them, seriously injuring 140 others. 
The Woodhead Tunnel was such a difficult undertaking that George Stephenson claimed that it could not be done, declaring that he would eat the first locomotive that got through the tunnel. Subsequent commissions. In the north, Locke also designed the Lancaster and Preston Junction Railway; the Glasgow, Paisley and Greenock Railway; and the Caledonian Railway from Carlisle to Glasgow and Edinburgh. In the south, he worked on the London and Southampton Railway, later called the London and South Western Railway, designing, among other structures, Nine Elms to Waterloo Viaduct, Richmond Railway Bridge (1848, since replaced), and Barnes Bridge (1849), both across the River Thames, tunnels at Micheldever, and the 12-arch Quay Street viaduct and the 16-arch Cams Hill viaduct, both in Fareham (1848). He was actively involved in planning and building many railways in Europe (assisted by John Milroy), including the Le Havre, Rouen, Paris rail link, the Barcelona to Mataró line and the Dutch Rhenish Railway. He was present in Paris when the Versailles train crash occurred in 1842, and produced a statement concerning the facts for General Charles Pasley of the Railway Inspectorate. He also experienced a catastrophic failure of one of his viaducts built on the new Paris-Le Havre link. . The viaduct was of stone and brick at Barentin near Rouen, and was the longest and highest on the line. It was 108 feet high, and consisted of 27 arches, each 50 feet wide, with a total length of over 1600 feet. A boy hauling ballast for the line up an adjoining hillside early that morning (about 6.00 am) saw one arch (the fifth on the Rouen side) collapse, and the rest followed suit. Fortunately, no one was killed, although several workmen were injured in a mill below the structure. Locke attributed the catastrophic failure to frost action on the new lime cement, and premature off-centre loading of the viaduct with ballast. It was rebuilt at Thomas Brassey's cost, and survives to the present. Having pioneered many new lines in France, Locke also helped establish the first locomotive works in the country. Distinctive features of Locke's railway works were economy, the use of masonry bridges wherever possible and the absence of tunnels. An illustration of this is that there is no tunnel between Birmingham and Glasgow. Relationship with Robert Stephenson. Locke and Robert Stephenson had been good friends at the beginning of their careers, but their friendship had been marred by Locke's falling out with Robert's father. It seems that Robert felt loyalty to his father required that he should take his side. It is significant that after the death of George Stephenson in August 1848, the friendship of the two men was revived. When Robert Stephenson died in October 1859, Joseph Locke was a pallbearer at his funeral. Locke is reported to have referred to Robert as 'the friend of my youth, the companion of my ripening years, and a competitor in the race of life'. Locke was also on friendly terms with his other engineering rival, Isambard Kingdom Brunel. In 1845, Locke and Stephenson were both called to give evidence before two committees. In April a House of Commons Select Committee was investigating the atmospheric railway system proposed by Brunel. Brunel and Vignoles spoke in support of the system, whilst Locke and Stephenson spoke against it. The latter two were to be proved right in the long run. 
In August the two gave evidence before the Gauge Commissioners who were trying to arrive at a standard gauge for the whole country. Brunel spoke in favour of the 7 ft gauge he was using on the Great Western Railway. Locke and Stephenson spoke in favour of the 4 ft 8½in gauge that they had used on several lines. The latter two won the day and their gauge was adopted as the standard. Later life and legacy. Locke served as President of the Institution of Civil Engineers in between December 1857 and December 1859. He also served as Member of Parliament for Honiton in Devon from 1847 until his death. Joseph Locke died on 18 September 1860, apparently from appendicitis, whilst on a shooting holiday. He is buried in London's Kensal Green Cemetery. He outlived his friends/rivals Robert Stephenson and Isambard Brunel by less than a year; all three engineers died between 53 and 56 years of age, a circumstance attributed by Rolt to sheer overwork, accomplishing more in their brief lives than many achieve in a full three score and ten. Locke Park in Barnsley was dedicated to his memory by his widow Phoebe in 1862. It features a statue of Locke plus a folly, 'Locke Tower'. Locke's greatest legacy is the modern day West Coast Main Line (WCML), which was formed by the joining of the Caledonian, Lancaster & Carlisle, Grand Junction railways to Robert Stephenson's London & Birmingham Railway. As a result, around three-quarters of the WCML's route was planned and engineered by Locke.",
"query": "accurate approach",
"answers": {
"text": ["correct method"],
"answer_start": [2727]
}
}
```
### Data Fields
The data fields are the same among all subsets and splits.
* id: a string feature.
* title: a string feature.
* context: a string feature.
* query: a string feature.
* answers: a dictionary feature containing:
* text: a list of string features.
* answer_start: a list of int32 features.
### Data Splits
| name |train|validation|test|
|--------------------|----:|---------:|---:|
|PR-pass |20147| 3000|5000|
|PR-page |20098| 3000|5000|
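As a quick check of these fields, the answer span can be recovered from its character offset. A minimal sketch, assuming the Hub id `PiC/phrase_retrieval` and the `PR-pass` configuration shown above:
```python
from datasets import load_dataset

# Assumed Hub id and config name, based on the PiC release on the Hugging Face Hub.
data = load_dataset("PiC/phrase_retrieval", "PR-pass", split="train")

ex = data[0]
start = ex["answers"]["answer_start"][0]
span = ex["answers"]["text"][0]

# The answer span is the slice of the context starting at `answer_start`.
assert ex["context"][start:start + len(span)] == span
```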
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The source passages and answers are from Wikipedia, and the queries were produced by linguistic experts hired from [Upwork.com](https://upwork.com).
#### Who are the source language producers?
We hired 13 linguistic experts from [Upwork.com](https://upwork.com) for annotation, along with more than 1,000 human annotators on Mechanical Turk and another set of 5 Upwork experts for two rounds of verification.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
13 linguistic experts from [Upwork.com](https://upwork.com).
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset is a joint work between Adobe Research and Auburn University.
Creators: [Thang M. Pham](https://scholar.google.com/citations?user=eNrX3mYAAAAJ), [David Seunghyun Yoon](https://david-yoon.github.io/), [Trung Bui](https://sites.google.com/site/trungbuistanford/), and [Anh Nguyen](https://anhnguyen.me).
[@PMThangXAI](https://twitter.com/pmthangxai) added this dataset to HuggingFace.
### Licensing Information
This dataset is distributed under [Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@article{pham2022PiC,
title={PiC: A Phrase-in-Context Dataset for Phrase Understanding and Semantic Search},
author={Pham, Thang M and Yoon, Seunghyun and Bui, Trung and Nguyen, Anh},
journal={arXiv preprint arXiv:2207.09068},
year={2022}
}
``` |
false | Source of dataset: [Kaggle](https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification)
This dataset contains images of food in 20 different classes; some of the classes are Indian foods. All images were collected from Google. Since there are few images per class, data augmentation and transfer learning are best suited here.
Classes of the model: "burger", "butter_naan", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa" |
true |
This dataset consists of a collection of approximately 50k research articles from the **PubMed** repository. These documents were originally annotated manually by biomedical experts with their MeSH labels, and each article is described by 10-15 MeSH labels. The dataset contains a very large number of labels occurring as MeSH majors, which raises the issues of an extremely large output space and severe label sparsity. To address this, the dataset has been processed and each label mapped to its root, as described in the figure below.

 |
false |
# LVIS
### Dataset Summary
This dataset is an implementation of the LVIS dataset for Hugging Face `datasets`. Please visit the original website for more information.
- https://www.lvisdataset.org/
### Loading
This code loads the train, validation and test splits.
```python
from datasets import load_dataset
dataset = load_dataset("winvoker/lvis")
```
`objects` is a dictionary which contains annotation information such as the bounding box and class.
```
DatasetDict({
train: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 100170
})
validation: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 4809
})
test: Dataset({
features: ['id', 'image', 'height', 'width', 'objects'],
num_rows: 19822
})
})
```
### Access Generators
```python
train = dataset["train"]
validation = dataset["validation"]
test = dataset["test"]
```
An example row is as follows.
```json
{ 'id': 0,
'image': '000000437561.jpg',
'height': 480,
'width': 640,
'objects': {
'bboxes': [[392, 271, 14, 3]],
'classes': [117],
'segmentation': [[376, 272, 375, 270, 372, 269, 371, 269, 373, 269, 373]]
}
}
``` |
false |
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
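A minimal sketch of the described pipeline, assuming placeholder documents and default PyTerrier settings (this is not the exact script used to build the dataset):
```python
import os
import pyterrier as pt

if not pt.started():
    pt.init()

# Placeholder corpus: the union of all source documents across the three splits.
corpus = [
    {"docno": "d1", "text": "first source document ..."},
    {"docno": "d2", "text": "second source document ..."},
]

# Index the corpus, then build a BM25 retriever with default settings.
indexer = pt.IterDictIndexer(os.path.abspath("./mn_index"))
index_ref = indexer.index(corpus)
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")

# Query with an example summary and keep the top k=10 documents.
results = bm25.search("example summary text used as the query")
top_k = results.head(10)
```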
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8793 | 0.7460 | 0.2213 | 0.8264 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8748 | 0.7453 | 0.2173 | 0.8232 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8775 | 0.7480 | 0.2187 | 0.8250 | |
true |
# bAbi_nli
bAbI tasks recast as natural language inference.
https://github.com/facebookarchive/bAbI-tasks
tasksource recasting code:
https://colab.research.google.com/drive/1J_RqDSw9iPxJSBvCJu-VRbjXnrEjKVvr?usp=sharing
```bibtex
@article{weston2015towards,
title={Towards ai-complete question answering: A set of prerequisite toy tasks},
author={Weston, Jason and Bordes, Antoine and Chopra, Sumit and Rush, Alexander M and Van Merri{\"e}nboer, Bart and Joulin, Armand and Mikolov, Tomas},
journal={arXiv preprint arXiv:1502.05698},
year={2015}
}
``` |
true |
Original source: https://github.com/openai/generating-reviews-discovering-sentiment
This dataset is different from the dataset distributed by GLUE, which means the metric **shouldn't be compared with the SST2 performance in GLUE**.
The description of the SST2 dataset in the paper is the following.
> The Stanford Sentiment Treebank (SST) (Socher et al., 2013) was created specifically to evaluate more complex compositional models of language. It is derived from the same base dataset as MR but was relabeled via Amazon Mechanical Turk and includes dense labeling of the phrases of parse trees computed for all sentences. For the binary subtask, this amounts to 76961 total labels compared to the 6920 sentence level labels. As a demonstration of the capability of unsupervised representation learning to simplify data collection and remove preprocessing steps, our reported results ignore these dense labels and computed parse trees, using only the raw text and sentence level labels.
|
false |
# Dataset Card for Alpaca MT
## Dataset Description
- **Homepage:** https://crfm.stanford.edu/2023/03/13/alpaca.html
- **Repository:** https://github.com/juletx/alpaca-lora-mt
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Rohan Taori
### Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better. This dataset also includes machine-translated data for 6 Iberian languages: Portuguese, Spanish, Catalan, Basque, Galician and Asturian. Translation was done using the NLLB-200 3.3B model.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction-tuning pretrained language models.
### Languages
The original data in Alpaca is in English (BCP-47 en). We also provide machine-translated data for 6 Iberian languages: Portuguese (BCP-47 pt), Spanish (BCP-47 es), Catalan (BCP-47 ca), Basque (BCP-47 eu), Galician (BCP-47 gl) and Asturian (BCP-47 at).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
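As a rough illustration, the `text` field can be rebuilt from the other three fields. The helper below is hypothetical (not part of the original release); the with-input template matches the example above, and the no-input variant is the standard Alpaca template:
```python
# Hypothetical helper that rebuilds `text` from `instruction`, `input`, `output`.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

def build_text(example: dict) -> str:
    # Use the with-input template only when `input` is non-empty.
    template = PROMPT_WITH_INPUT if example.get("input") else PROMPT_NO_INPUT
    return template.format(**example)
```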
### Data Splits
| | train |
|---------------|------:|
| en | 52002 |
| pt | 52002 |
| es | 52002 |
| ca | 52002 |
| eu | 52002 |
| gl | 52002 |
| at | 52002 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpt the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed] |
true |
# VSR: Visual Spatial Reasoning
This is the **random set** of **VSR**: *Visual Spatial Reasoning* (TACL 2023) [[paper]](https://arxiv.org/abs/2205.00363).
### Usage
```python
from datasets import load_dataset
data_files = {"train": "train.jsonl", "dev": "dev.jsonl", "test": "test.jsonl"}
dataset = load_dataset("cambridgeltl/vsr_random", data_files=data_files)
```
Note that the image files still need to be downloaded separately. See [`data/`](https://github.com/cambridgeltl/visual-spatial-reasoning/tree/master/data) for details.
Go to our [github repo](https://github.com/cambridgeltl/visual-spatial-reasoning) for more introductions.
### Citation
If you find VSR useful, please cite:
```bibtex
@article{Liu2022VisualSR,
title={Visual Spatial Reasoning},
author={Fangyu Liu and Guy Edward Toh Emerson and Nigel Collier},
journal={Transactions of the Association for Computational Linguistics},
year={2023},
}
``` |
true | |
false |
# Dataset Card for poker-cards-cxcvz
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/poker-cards-cxcvz
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
poker-cards-cxcvz
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/poker-cards-cxcvz
### Citation Information
```
@misc{ poker-cards-cxcvz,
title = { poker cards cxcvz Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/poker-cards-cxcvz } },
url = { https://universe.roboflow.com/object-detection/poker-cards-cxcvz },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false |
# Dataset Card for vehicles-q0x2v
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/vehicles-q0x2v
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
vehicles-q0x2v
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/vehicles-q0x2v
### Citation Information
```
@misc{ vehicles-q0x2v,
title = { vehicles q0x2v Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/vehicles-q0x2v } },
url = { https://universe.roboflow.com/object-detection/vehicles-q0x2v },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
false |
# Dataset Card for stambecco_data_it
This repository contains the dataset used to train Stambecco.
This dataset is an Italian translation of the Stanford [Alpaca-Cleaned dataset](https://github.com/gururise/AlpacaDataCleaned).
Please refer to the Stambecco repo for more info.
|
true |
# Dataset Card for climate_commitments_actions
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for identifying climate-related paragraphs about climate commitments and actions in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given climate-related paragraph is about climate commitments and actions or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 0
}
```
### Data Fields
- text: a climate-related paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> not talking about climate commitments and actions, 1 -> talking about climate commitments and actions)
### Data Splits
The dataset is split into:
- train: 1,000
- test: 320
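A minimal loading sketch, assuming the Hub id `climatebert/climate_commitments_actions`:
```python
from datasets import load_dataset

# Hub id assumed from the ClimateBERT dataset collection.
dataset = load_dataset("climatebert/climate_commitments_actions")
train, test = dataset["train"], dataset["test"]  # 1,000 / 320 paragraphs
```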
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. |
false | # Lrs
The [Lrs dataset](https://archive-beta.ics.uci.edu/dataset/93/low+resolution+spectrometer) from the [UCI repository](https://archive-beta.ics.uci.edu).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------|
| lrs | Multiclass classification | Classify lrs type. |
| lrs_0 | Binary classification | Is this instance of class 0? |
| lrs_1 | Binary classification | Is this instance of class 1? |
| lrs_2 | Binary classification | Is this instance of class 2? |
| lrs_3 | Binary classification | Is this instance of class 3? |
| lrs_4 | Binary classification | Is this instance of class 4? |
| lrs_5 | Binary classification | Is this instance of class 5? |
| lrs_6 | Binary classification | Is this instance of class 6? |
| lrs_7 | Binary classification | Is this instance of class 7? |
| lrs_8 | Binary classification | Is this instance of class 8? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/lrs", "lrs")["train"]
``` |
false | # Gisette
The [Gisette dataset](https://archive-beta.ics.uci.edu/dataset/170/gisette) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| gisette | Binary classification | |
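# Usage
Mirroring the usage shown on the Lrs card above; the Hub id `mstz/gisette` is an assumption by analogy with `mstz/lrs`:
```python
from datasets import load_dataset

# Assumed Hub id, following the naming scheme of the other UCI cards.
dataset = load_dataset("mstz/gisette", "gisette")["train"]
```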
|
false | # Dataset Card for "Chinese_modern_classical"
The data comes from [NiuTrans/Classical-Modern: a very comprehensive Classical Chinese (ancient Chinese) to Modern Chinese parallel corpus (github.com)](https://github.com/NiuTrans/Classical-Modern).
Since some of the Classical Chinese texts in the original data have no modern translation, this dataset only includes the [bilingual data](https://github.com/NiuTrans/Classical-Modern/tree/main/双语数据).
|
true |
https://github.com/selenashe/ScoNe
NLI subset, original part (excluding one-scope)
```
@misc{she2023scone,
      title={ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning},
      author={Jingyuan Selena She and Christopher Potts and Samuel R. Bowman and Atticus Geiger},
      year={2023},
      eprint={2305.19426},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
true |
# Dataset Card for ReCoRD_TH
### Dataset Description
This dataset is a Thai-translated version of [ReCoRD](https://huggingface.co/datasets/super_glue/viewer/record), produced with Google Translate and scored with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to assess translation quality. |
false | |
false |
# Dataset Card for VilaQuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://doi.org/10.5281/zenodo.4562337
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](mailto:carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](mailto:carme.armentano@bsc.es)
### Dataset Summary
VilaQuAD is an extractive QA dataset for Catalan, built from [VilaWeb](https://www.vilaweb.cat/) newswire text.
This dataset contains 2095 Catalan-language news articles, along with 1 to 5 questions referring to each fragment (or context).
VilaQuAD articles are extracted from the daily [VilaWeb](https://www.vilaweb.cat/) and used under a [CC-by-nc-sa-nd](https://creativecommons.org/licenses/by-nc-nd/3.0/deed.ca) licence.
This dataset can be used to build extractive QA systems and language models.
### Supported Tasks and Leaderboards
Extractive-QA, Language Model.
### Languages
The dataset is in Catalan (`ca-CA`).
## Dataset Structure
### Data Instances
```
{
'id': 'P_556_C_556_Q1',
'title': "El Macba posa en qüestió l'eufòria amnèsica dels anys vuitanta a l'estat espanyol",
'context': "El Macba ha obert una nova exposició, 'Gelatina dura. Històries escamotejades dels 80', dedicada a revisar el discurs hegemònic que es va instaurar en aquella dècada a l'estat espanyol, concretament des del començament de la transició, el 1977, fins a la fita de Barcelona 92. És una mirada en clau espanyola, però també centralista, perquè més enllà dels esdeveniments ocorreguts a Catalunya i els artistes que els van combatre, pràcticament només s'hi mostren fets polítics i culturals generats des de Madrid. No es parla del País Basc, per exemple. Però, dit això, l'exposició revisa aquesta dècada de la història recent tot qüestionant un triomfalisme homogeneïtzador, que ja se sap que va arrasar una gran quantitat de sectors crítics i radicals de l'àmbit social, polític i cultural. Com diu la comissària, Teresa Grandas, de l'equip del Macba: 'El relat oficial dels anys vuitanta a l'estat espanyol va prioritzar la necessitat per damunt de la raó i va consolidar una mirada que privilegiava el futur abans que l'anàlisi del passat recent, obviant qualsevol consideració crítica respecte de la filiació amb el poder franquista.",
'question': 'Com es diu la nova exposició que ha obert el Macba?',
'answers': [
{
'text': "'Gelatina dura. Històries escamotejades dels 80'",
'answer_start': 38
}
]
}
```
### Data Fields
Follows [Rajpurkar, Pranav et al., (2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the VilaWeb article.
- `context` (str): VilaWeb section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
- `text` (str): Span text answering to the question.
- `answer_start` (int): Starting offset of the span text answering the question.
### Data Splits
- train.json: 1295 contexts, 3882 questions
- dev.json: 400 contexts, 1200 questions
- test.json: 400 contexts, 1200 questions
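A minimal loading sketch; the Hub id `projecte-aina/vilaquad` is an assumption by analogy with the ViquiQuAD dataset mentioned below:
```python
from datasets import load_dataset

# Assumed Hub id, by analogy with projecte-aina/viquiquad.
vilaquad = load_dataset("projecte-aina/vilaquad")
print(vilaquad["train"][0]["question"])
```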
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
- [VilaWeb site](https://www.vilaweb.cat/)
#### Initial Data Collection and Normalization
The source data are scraped articles from archives of Catalan newspaper website [Vilaweb](https://www.vilaweb.cat).
From the online edition of the newspaper [VilaWeb](https://www.vilaweb.cat), 2095 articles were randomly selected. These headlines were also used to create a Textual Entailment dataset. For the extractive QA dataset, the creation of 1 to 5 questions for each news context was commissioned, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)). In total, 6282 pairs of a question and an extracted fragment containing the answer were created.
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible. We also created [another QA dataset with wikipedia](https://huggingface.co/datasets/projecte-aina/viquiquad) to ensure thematic and stylistic variety.
#### Who are the source language producers?
Professional journalists from the Catalan newspaper [VilaWeb](https://www.vilaweb.cat/).
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)).
#### Who are the annotators?
Annotation was commissioned to a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4562337)
### Contributions
[N/A] |
false |
# Dataset Card for Snacks (Detection)
## Dataset Summary
This is a dataset of 20 different types of snack foods that accompanies the book [Machine Learning by Tutorials](https://www.raywenderlich.com/books/machine-learning-by-tutorials/v2.0).
The images were taken from the [Google Open Images dataset](https://storage.googleapis.com/openimages/web/index.html), release 2017_11.
## Dataset Structure
Included in the **data** folder are three CSV files with bounding box annotations for the images in the dataset, although not all images have annotations and some images have multiple annotations.
The columns in the CSV files are:
- `image_id`: the filename of the image without the .jpg extension
- `x_min, x_max, y_min, y_max`: normalized bounding box coordinates, i.e. in the range [0, 1]
- `class_name`: the class that belongs to the bounding box
- `folder`: the class that belongs to the image as a whole, which is also the name of the folder that contains the image
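A minimal sketch for reading an annotation row and converting its normalized coordinates to pixels; the CSV and image paths below are placeholders:
```python
import pandas as pd
from PIL import Image

# Placeholder paths -- substitute the actual CSV and image locations.
df = pd.read_csv("data/annotations.csv")

row = df.iloc[0]
img = Image.open(f"{row.folder}/{row.image_id}.jpg")
w, h = img.size

# Normalized [0, 1] coordinates -> pixel coordinates.
box = (row.x_min * w, row.y_min * h, row.x_max * w, row.y_max * h)
print(row.class_name, box)
```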
The class names are:
```nohighlight
apple
banana
cake
candy
carrot
cookie
doughnut
grape
hot dog
ice cream
juice
muffin
orange
pineapple
popcorn
pretzel
salad
strawberry
waffle
watermelon
```
**Note:** The image files are not part of this repo but [can be found here](https://huggingface.co/datasets/Matthijs/snacks).
### Data Splits
Train, Test, Validation
## Licensing Information
Just like the images from Google Open Images, the snacks dataset is licensed under the terms of the Creative Commons license.
The images are listed as having a [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) license.
The annotations are licensed by Google Inc. under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# A minimal sketch using the beir package: download one dataset (here SciFact)
# and load its corpus, queries and relevance judgments.
from beir import util
from beir.datasets.data_loader import GenericDataLoader

url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
```
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models with metrics such as nDCG@10.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id.
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
false |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net)
- **Repository:** [https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net)
- **Paper:** [Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing](https://dergipark.org.tr/tr/pub/dubited/issue/68307/950568)
- **Leaderboard:**
- **Point of Contact:** S. Serdar Helli
### Dataset Summary
# Semantic-Segmentation-of-Teeth-in-Panoramic-X-ray-Image
The aim of this study is the automatic semantic segmentation and total-length measurement of teeth in one-shot panoramic X-ray images, using a deep learning method based on the U-Net model and binary image analysis, in order to provide diagnostic information for the management of dental disorders, diseases, and conditions.
[***Github Link***](https://github.com/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net)
***Original dataset (images only)***
Dataset reference: H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 044003, 2015.
[Link to the dataset of original images.](https://data.mendeley.com/datasets/hxt48yk462/1)
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"image": X-ray Image (Image),
"label": Binary Image Segmentation Map (Image)
}
```
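As a sketch, the image/label pairs can be inspected with the 🤗 Datasets library; the repository id below is a hypothetical placeholder, not confirmed by this card:

```python
from datasets import load_dataset

# Hypothetical Hub repository id; substitute the actual id of this dataset.
dataset = load_dataset("SerdarHelli/SegmentationOfTeethPanoramicXRayImages", split="train")

sample = dataset[0]
sample["image"].show()  # panoramic X-ray (PIL image)
sample["label"].show()  # binary segmentation map (PIL image)
```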
## Dataset Creation
### Source Data
***Original dataset (images only)***
Dataset reference: H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 044003, 2015.
[Link to the dataset of original images.](https://data.mendeley.com/datasets/hxt48yk462/1)
### Annotations
#### Annotation process
The annotation was made manually.
#### Who are the annotators?
S. Serdar Helli
### Other Known Limitations
The X-ray image files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International license.
For more information, see the original dataset of images:
H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging, vol. 2, no. 4, p. 044003, 2015.
[Link to the dataset of original images.](https://data.mendeley.com/datasets/hxt48yk462/1)
## Additional Information
### Citation Information
For Labelling
```
@article{helli10tooth,
title={Tooth Instance Segmentation on Panoramic Dental Radiographs Using U-Nets and Morphological Processing},
author={HELL{\.I}, Serdar and HAMAMCI, Anda{\c{c}}},
journal={D{\"u}zce {\"U}niversitesi Bilim ve Teknoloji Dergisi},
volume={10},
number={1},
pages={39--50}
}
```
For Original Images
```
@article{abdi2015automatic,
title={Automatic segmentation of mandible in panoramic x-ray},
author={Abdi, Amir Hossein and Kasaei, Shohreh and Mehdizadeh, Mojdeh},
journal={Journal of Medical Imaging},
volume={2},
number={4},
pages={044003},
year={2015},
publisher={SPIE}
}
```
### Contributions
Thanks to [@SerdarHelli](https://github.com/SerdarHelli) for adding this dataset. |
true |
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set examples. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
### Languages
The language data in GLUE is in English (BCP-47 `en`).
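Each task is exposed as a named configuration of the `glue` dataset; a minimal loading sketch:

```python
from datasets import load_dataset

# Load the CoLA configuration; other tasks ("mnli", "sst2", ...) load the same way.
cola = load_dataset("glue", "cola")
print(cola["train"][0])
```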
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
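The `label` columns above are class-label features, so integer values can be mapped back to their string names; a minimal sketch:

```python
from datasets import load_dataset

mnli = load_dataset("glue", "mnli", split="validation_matched")

# Map the integer label of the first example back to its class name.
label_feature = mnli.features["label"]
print(label_feature.int2str(mnli[0]["label"]))  # e.g. "entailment"
```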
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
```

Note that each GLUE dataset has its own citation. Please see the source to see the correct citation for each contained dataset.
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
|
false | # Dataset Card for sova_rudevices
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SOVA RuDevices](https://github.com/sovaai/sova-dataset)
- **Repository:** [SOVA Dataset](https://github.com/sovaai/sova-dataset)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [SOVA.ai](mailto:support@sova.ai)
### Dataset Summary
SOVA Dataset is a free public STT/ASR dataset. It consists of several parts, one of which is SOVA RuDevices. This part is an acoustic corpus of approximately 100 hours of 16 kHz Russian live speech with manual annotations, prepared by the [SOVA.ai team](https://github.com/sovaai).
The authors do not divide the dataset into train, validation, and test subsets, so I prepared this splitting myself. The training subset includes more than 82 hours, and the validation and test subsets each include approximately 6 hours.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio`, and its transcription, called `transcription`. No additional information about the speaker or the passage containing the transcription is provided.
```
{'audio': {'path': '/home/bond005/datasets/sova_rudevices/data/train/00003ec0-1257-42d1-b475-db1cd548092e.wav',
'array': array([ 0.00787354, 0.00735474, 0.00714111, ...,
-0.00018311, -0.00015259, -0.00018311]), dtype=float32),
'sampling_rate': 16000},
'transcription': 'мне получше стало'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
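A minimal sketch of the access pattern described above; the repository id is assumed from the contributor's namespace and the paths in the example:

```python
from datasets import load_dataset, Audio

# Assumed Hub repository id; adjust if the dataset lives elsewhere.
dataset = load_dataset("bond005/sova_rudevices", split="train")

# Query the sample index first, then the "audio" column, so only one file is decoded.
sample = dataset[0]
print(sample["transcription"])
print(sample["audio"]["sampling_rate"])  # 16000

# Resample on the fly if a different rate is needed, e.g. 8 kHz.
dataset = dataset.cast_column("audio", Audio(sampling_rate=8000))
```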
### Data Splits
This dataset consists of three splits: training, validation, and test. The splitting takes the internal structure of SOVA RuDevices into account (the validation split is based on the subdirectory `0` of the original dataset, and the test split on the subdirectory `1`), but recordings of the same speakers can appear in different splits at the same time (speaker disjointness is not guaranteed).
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 81607 | 5835 | 5799 |
| hours | 82.4h | 5.9h | 5.8h |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings of people who have donated their voice. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Egor Zubarev, Timofey Moskalets, and the SOVA.ai team.
### Licensing Information
[Creative Commons BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{sova2021rudevices,
author = {Zubarev, Egor and Moskalets, Timofey and SOVA.ai},
title = {SOVA RuDevices Dataset: free public STT/ASR dataset with manually annotated live speech},
publisher = {GitHub},
journal = {GitHub repository},
year = {2021},
howpublished = {\url{https://github.com/sovaai/sova-dataset}},
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset. |
true |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Size:** 163MB
- **Repository:** https://github.com/jpwahle/emnlp22-transforming
- **Paper:** https://arxiv.org/abs/2210.03568
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
false |
# Dataset Card for MT-GenEval
## Table of Contents
- [Dataset Card for MT-GenEval](#dataset-card-for-mt-geneval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Machine Translation](#machine-translation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/amazon-science/machine-translation-gender-eval)
- **Paper:** [EMNLP 2022](https://arxiv.org/abs/2211.01355)
- **Point of Contact:** [Anna Currey](mailto:ancurrey@amazon.com)
### Dataset Summary
The MT-GenEval benchmark evaluates gender translation accuracy on English -> {Arabic, French, German, Hindi, Italian, Portuguese, Russian, Spanish}. The dataset contains individual sentences with annotations on the gendered target words, and contrastive original-inverted translations with additional preceding context.
**Disclaimer**: *The MT-GenEval benchmark was released in the EMNLP 2022 paper [MT-GenEval: A Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation](https://arxiv.org/abs/2211.01355) by Anna Currey, Maria Nadejde, Raghavendra Pappagari, Mia Mayer, Stanislas Lauly, Xing Niu, Benjamin Hsu, and Georgiana Dinu and is hosted through Github by the [Amazon Science](https://github.com/amazon-science?type=source) organization. The dataset is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/).*
### Supported Tasks and Leaderboards
#### Machine Translation
Refer to the [original paper](https://arxiv.org/abs/2211.01355) for additional details on gender accuracy evaluation with MT-GenEval.
### Languages
The dataset contains source English sentences extracted from Wikipedia translated into the following languages: Arabic (`ar`), French (`fr`), German (`de`), Hindi (`hi`), Italian (`it`), Portuguese (`pt`), Russian (`ru`), and Spanish (`es`).
## Dataset Structure
### Data Instances
The dataset contains two configuration types, `sentences` and `context`, mirroring the original repository structure, with source and target language specified in the configuration name (e.g. `sentences_en_ar`, `context_en_it`). The `sentences` configurations contain masculine and feminine versions of individual sentences with gendered word annotations. Here is an example entry of the `sentences_en_it` split (all `sentences_en_XX` splits have the same structure):
```json
{
"orig_id": 0,
"source_feminine": "Pagratidis quickly recanted her confession, claiming she was psychologically pressured and beaten, and until the moment of her execution, she remained firm in her innocence.",
"reference_feminine": "Pagratidis subito ritrattò la sua confessione, affermando che era aveva subito pressioni psicologiche e era stata picchiata, e fino al momento della sua esecuzione, rimase ferma sulla sua innocenza.",
"source_masculine": "Pagratidis quickly recanted his confession, claiming he was psychologically pressured and beaten, and until the moment of his execution, he remained firm in his innocence.",
"reference_masculine": "Pagratidis subito ritrattò la sua confessione, affermando che era aveva subito pressioni psicologiche e era stato picchiato, e fino al momento della sua esecuzione, rimase fermo sulla sua innocenza.",
"source_feminine_annotated": "Pagratidis quickly recanted <F>her</F> confession, claiming <F>she</F> was psychologically pressured and beaten, and until the moment of <F>her</F> execution, <F>she</F> remained firm in <F>her</F> innocence.",
"reference_feminine_annotated": "Pagratidis subito ritrattò la sua confessione, affermando che era aveva subito pressioni psicologiche e era <F>stata picchiata</F>, e fino al momento della sua esecuzione, rimase <F>ferma</F> sulla sua innocenza.",
"source_masculine_annotated": "Pagratidis quickly recanted <M>his</M> confession, claiming <M>he</M> was psychologically pressured and beaten, and until the moment of <M>his</M> execution, <M>he</M> remained firm in <M>his</M> innocence.",
"reference_masculine_annotated": "Pagratidis subito ritrattò la sua confessione, affermando che era aveva subito pressioni psicologiche e era <M>stato picchiato</M>, e fino al momento della sua esecuzione, rimase <M>fermo</M> sulla sua innocenza.",
"source_feminine_keywords": "her;she;her;she;her",
"reference_feminine_keywords": "stata picchiata;ferma",
"source_masculine_keywords": "his;he;his;he;his",
"reference_masculine_keywords": "stato picchiato;fermo",
}
}
```
The `context` configurations instead contain English sources related to stereotypical professional roles, with additional preceding context and contrastive original-inverted translations. Here is an example entry of the `context_en_it` split (all `context_en_XX` splits have the same structure):
```json
{
"orig_id": 0,
"context": "Pierpont told of entering and holding up the bank and then fleeing to Fort Wayne, where the loot was divided between him and three others.",
"source": "However, Pierpont stated that Skeer was the planner of the robbery.",
"reference_original": "Comunque, Pierpont disse che Skeer era il pianificatore della rapina.",
"reference_flipped": "Comunque, Pierpont disse che Skeer era la pianificatrice della rapina."
}
```
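As a sketch, a single configuration can be loaded by name; the repository id below is a hypothetical placeholder, not confirmed by this card:

```python
from datasets import load_dataset

# Hypothetical Hub repository id; substitute the actual id of this dataset.
sentences = load_dataset("gsarti/mt_geneval", "sentences_en_it")
context = load_dataset("gsarti/mt_geneval", "context_en_it")

print(sentences["train"][0]["source_feminine"])
print(context["train"][0]["reference_flipped"])
```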
### Data Splits
All `sentences_en_XX` configurations have 1200 examples in the `train` split and 300 in the `test` split. For the `context_en_XX` configurations, the number of examples depends on the language pair:
| Configuration | # Train | # Test |
| :-----------: | :--------: | :-----: |
| `context_en_ar` | 792 | 1100 |
| `context_en_fr` | 477 | 1099 |
| `context_en_de` | 598 | 1100 |
| `context_en_hi` | 397 | 1098 |
| `context_en_it` | 465 | 1904 |
| `context_en_pt` | 574 | 1089 |
| `context_en_ru` | 583 | 1100 |
| `context_en_es` | 534 | 1096 |
### Dataset Creation
From the original paper:
>In developing MT-GenEval, our goal was to create a realistic, gender-balanced dataset that naturally incorporates a diverse range of gender phenomena. To this end, we extracted English source sentences from Wikipedia as the basis for our dataset. We automatically pre-selected relevant sentences using EN gender-referring words based on the list provided by [Zhao et al. (2018)](https://doi.org/10.18653/v1/N18-2003).
Please refer to the original article [MT-GenEval: A Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation](https://arxiv.org/abs/2211.01355) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of MT-GenEval are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
The dataset is licensed under the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
Please cite the authors if you use these corpora in your work.
```bibtex
@inproceedings{currey-etal-2022-mtgeneval,
title = "{MT-GenEval}: {A} Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation",
author = "Currey, Anna and
Nadejde, Maria and
Pappagari, Raghavendra and
Mayer, Mia and
Lauly, Stanislas and
Niu, Xing and
Hsu, Benjamin and
Dinu, Georgiana",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2211.01355",
}
``` |
false |
# Dataset Card for "lmqg/qg_tweetqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a question & answer generation dataset based on the [tweet_qa](https://huggingface.co/datasets/tweet_qa) dataset. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset can be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'vine',
'paragraph_question': 'question: what site does the link take you to?, context:5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013',
'question': 'what site does the link take you to?',
'paragraph': '5 years in 5 seconds. Darren Booth (@darbooth) January 25, 2013'
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `paragraph`: a `string` feature.
- `paragraph_question`: a `string` feature combining the question and the paragraph.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9489 | 1086| 1203|
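A minimal loading sketch for the splits above:

```python
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_tweetqa")
print({split: len(dataset[split]) for split in dataset})  # train/validation/test sizes
print(dataset["train"][0]["question"])
```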
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
false | # Higgs
The [Higgs dataset](https://www.nature.com/articles/ncomms5308/) from "[Searching for exotic particles in high-energy physics with deep learning](https://www.nature.com/articles/ncomms5308/)".
The task is to classify particles as Higgs bosons.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| higgs | Binary classification | Is the particle a Higgs boson? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/higgs")["train"]
```
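The snippet above loads everything as a single `train` split; if a held-out evaluation set is needed, one can be carved out with `train_test_split`. A minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("mstz/higgs")["train"]

# Carve out a 20% held-out evaluation set with a fixed seed for reproducibility.
splits = dataset.train_test_split(test_size=0.2, seed=42)
train, test = splits["train"], splits["test"]
print(len(train), len(test))
```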
# Features
|**Feature** |**Type** |
|---------------------------|-----------|
|`lepton_pT` |`[float64]`|
|`lepton_eta` |`[float64]`|
|`lepton_phi` |`[float64]`|
|`missing_energy_magnitude` |`[float64]`|
|`missing_energy_phi` |`[float64]`|
|`jet1pt` |`[float64]`|
|`jet1eta` |`[float64]`|
|`jet1phi` |`[float64]`|
|`jet1b` |`[float64]`|
|`jet2pt` |`[float64]`|
|`jet2eta` |`[float64]`|
|`jet2phi` |`[float64]`|
|`jet2b` |`[float64]`|
|`jet3pt` |`[float64]`|
|`jet3eta` |`[float64]`|
|`jet3phi` |`[float64]`|
|`jet3b` |`[float64]`|
|`jet4pt` |`[float64]`|
|`jet4eta` |`[float64]`|
|`jet4phi` |`[float64]`|
|`jet4b` |`[float64]`|
|`m_jj` |`[float64]`|
|`m_jjj` |`[float64]`|
|`m_lv` |`[float64]`|
|`m_jlv` |`[float64]`|
|`m_bb` |`[float64]`|
|`m_wbb` |`[float64]`|
|`m_wwbb` |`[float64]`| |
false | # NBFI
The [NBFI dataset](https://www.kaggle.com/datasets/meastanmay/nbfi-vehicle-loan-repayment-dataset) from [Kaggle](https://www.kaggle.com/datasets).
Client default prediction.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| default | Binary classification | Has the client defaulted? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/nbfi")["train"]
```
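As a sketch of working with the features listed below, the split can be filtered on a boolean column and converted to a pandas dataframe (column names are taken from the feature table):

```python
from datasets import load_dataset

dataset = load_dataset("mstz/nbfi")["train"]

# Keep only clients with an active loan, then convert to pandas for analysis.
active = dataset.filter(lambda row: row["has_an_active_loan"])
df = active.to_pandas()
print(df[["income", "credit", "loan_annuity"]].describe())
```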
# Features
|**Feature** |**Type** |
|-----------------------------------------------|---------------|
|`income` | `float32` |
|`owns_a_car` | `bool` |
|`owns_a_bike` | `bool` |
|`has_an_active_loan` | `bool` |
|`owns_a_house` | `bool` |
|`nr_children` | `int8` |
|`credit` | `float32` |
|`loan_annuity` | `float32` |
|`accompanied_by` | `string` |
|`income_type` | `string` |
|`education_level` | `float32` |
|`marital_status` | `float32` |
|`is_male` | `bool` |
|`type_of_contract` | `string` |
|`type_of_housing` | `string` |
|`residence_density` | `float32` |
|`age_in_days` | `int32` |
|`consecutive_days_of_employment` | `int16` |
|`nr_days_since_last_registration_change` | `int32` |
|`nr_days_since_last_document_change` | `int32` |
|`owned_a_house_for_nr_days` | `int32` |
|`has_provided_a_mobile_number` | `bool` |
|`has_provided_a_home_number` | `bool` |
|`was_reachable_at_work` | `bool` |
|`job` | `string` |
|`nr_family_members` | `int8` |
|`city_rating` | `int8` |
|`weekday_of_application` | `int8` |
|`hour_of_application` | `float32` |
|`same_residence_and_home` | `bool` |
|`same_work_and_home` | `bool` |
|`score_1` | `float32` |
|`score_2` | `float32` |
|`score_3` | `float32` |
|`nr_defaults_in_social_circle` | `int8` |
|`inquiries_in_last_year` | `float32` | |