id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
dim/bugurt_completion_prompts | 2023-09-01T23:28:27.000Z | [
"region:us"
] | dim | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: bugurt
dtype: string
splits:
- name: train
num_bytes: 5451066
num_examples: 5000
download_size: 2806557
dataset_size: 5451066
---
# Dataset Card for "bugurt_completion_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wasertech/OneOS | 2023-09-28T17:50:05.000Z | [
"size_categories:10K<n<100K",
"language:en",
"language:fr",
"license:cc0-1.0",
"code",
"bash",
"python",
"Web Search",
"Wikipedia",
"NLU",
"region:us"
] | wasertech | null | null | null | 1 | 33 | ---
language:
- en
- fr
license: cc0-1.0
size_categories:
- 10K<n<100K
pretty_name: OneOS
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 17876480
num_examples: 13068
download_size: 1924878
dataset_size: 17876480
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- code
- bash
- python
- Web Search
- Wikipedia
- NLU
---
# OneOS Dataset
The OneOS dataset is a collection of text data for the OneOS project. It consists of 13,068 text samples that can be used for training and evaluating natural language processing models.
## Dataset Details
- Number of Samples: 13,068
- License: CC0*
- Language: English, French
\* Only unlicensed sentences generated manually fall under Creative Commons Zero (CC0). Sentences already licensed under different terms, such as [nl2bash](https://github.com/TellinaTool/nl2bash) or [samantha-data](https://huggingface.co/datasets/ehartford/samantha-data), remain subject to their respective licenses. The same applies to sentences produced using language models operating under special licenses, like LLaMA or the GPT series.
## Dataset Format
Coming soon. |
manu/bnf_gallica | 2023-09-05T10:04:11.000Z | [
"region:us"
] | manu | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2628706901
num_examples: 5907
download_size: 1521206509
dataset_size: 2628706901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bnf_gallica"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shunk031/COCOA | 2023-09-16T11:59:03.000Z | [
"language:en",
"license:cc-by-4.0",
"computer-vision",
"instance-segmentation",
"ms-coco",
"bsds",
"arxiv:1509.01329",
"region:us"
] | shunk031 | COCOA dataset targets amodal segmentation, which aims to recognize and segment objects beyond their visible parts. This dataset includes labels not only for the visible parts of objects, but also for their occluded parts hidden by other objects. This enables learning to understand the full shape and position of objects. | @inproceedings{zhu2017semantic,
title={Semantic amodal segmentation},
author={Zhu, Yan and Tian, Yuandong and Metaxas, Dimitris and Doll{\'a}r, Piotr},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={1464--1472},
year={2017}
}
@inproceedings{lin2014microsoft,
title={Microsoft coco: Common objects in context},
author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
booktitle={Computer Vision--ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13},
pages={740--755},
year={2014},
organization={Springer}
}
@article{arbelaez2010contour,
title={Contour detection and hierarchical image segmentation},
author={Arbelaez, Pablo and Maire, Michael and Fowlkes, Charless and Malik, Jitendra},
journal={IEEE transactions on pattern analysis and machine intelligence},
volume={33},
number={5},
pages={898--916},
year={2010},
publisher={IEEE}
} | null | 0 | 33 | ---
language:
- en
license: cc-by-4.0
tags:
- computer-vision
- instance-segmentation
- ms-coco
- bsds
datasets:
- COCO
- BSDS
metrics:
- iou
---
# Dataset Card for COCOA
[](https://github.com/shunk031/huggingface-datasets_COCOA/actions/workflows/ci.yaml)
[](https://colab.research.google.com/github/shunk031/huggingface-datasets_COCOA/blob/main/notebooks/COCOA_demo.ipynb)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage: https://github.com/Wakeupbuddy/amodalAPI
- Repository: https://github.com/shunk031/huggingface-datasets_COCOA
- Paper (preprint): https://arxiv.org/abs/1509.01329
- Paper (CVPR2017): https://openaccess.thecvf.com/content_cvpr_2017/html/Zhu_Semantic_Amodal_Segmentation_CVPR_2017_paper.html
### Dataset Summary
The COCOA dataset targets amodal segmentation, which aims to recognize and segment objects beyond their visible parts. It includes labels not only for the visible parts of objects but also for parts occluded by other objects, which enables models to learn the full shape and position of each object.
From the paper:
> We propose a detailed image annotation that captures information beyond the visible pixels and requires complex reasoning about full scene structure. Specifically, we create an amodal segmentation of each image: the full extent of each region is marked, not just the visible pixels. Annotators outline and name all salient regions in the image and specify a partial depth order. The result is a rich scene structure, including visible and occluded portions of each region, figure-ground edge information, semantic labels, and object overlap. We create two datasets for semantic amodal segmentation. First, we label 500 images in the BSDS dataset with multiple annotators per image, allowing us to study the statistics of human annotations. We show that the proposed full scene annotation is surprisingly consistent between annotators, including for regions and edges. Second, we annotate 5000 images from COCO. This larger dataset allows us to explore a number of algorithmic ideas for amodal segmentation and depth ordering.
### Dataset Preprocessing
### Supported Tasks and Leaderboards
### Languages
All annotations use English as the primary language.
## Dataset Structure
### Data Instances
To use COCOA, you need to download the annotations from [the google drive](https://drive.google.com/open?id=0B8e3LNo7STslZURoTzhhMFpCelE) linked in the official repository (https://github.com/Wakeupbuddy/amodalAPI#setup). Access to the annotations currently appears to be restricted, but the author grants download permission on request.
To load a specific configuration, pass its name via the `name` argument:
```python
import datasets as ds
dataset = ds.load_dataset(
path="shunk031/COCOA",
name="COCO",
data_dir="/path/to/cocoa_annotation.tar.gz",
decode_rle=True, # True if Run-length Encoding (RLE) is to be decoded and converted to binary mask.
)
```
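The `decode_rle` flag converts run-length-encoded masks into binary masks. As a rough illustration of what that entails (a simplified sketch, not the loader's actual implementation), COCO-style uncompressed RLE stores alternating run lengths of 0s and 1s in column-major order:

```python
def decode_uncompressed_rle(counts, height, width):
    """Decode COCO-style uncompressed RLE into a binary mask.

    `counts` alternates run lengths of 0s and 1s, starting with 0s,
    in column-major order. Returns the mask as a list of `height` rows.
    """
    flat = []
    value = 0
    for run in counts:
        flat.extend([value] * run)
        value = 1 - value
    assert len(flat) == height * width, "runs must cover the whole image"
    # Column-major layout: pixel (row, col) lives at index col * height + row.
    return [[flat[col * height + row] for col in range(width)]
            for row in range(height)]

# Toy 2x3 mask: runs of two 0s, three 1s, one 0 (column-major).
mask = decode_uncompressed_rle([2, 3, 1], height=2, width=3)
# mask == [[0, 1, 1], [0, 1, 0]]
```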
#### COCO
An example looks as follows.
```json
{
"image_id": 321,
"license_id": 1,
"file_name": "COCO_train2014_000000000321.jpg",
"height": 480,
"width": 640,
"date_captured": "2013-11-20 12: 36: 25",
"flickr_url": "http: //farm5.staticflickr.com/4096/4750559893_49fb0baf7f_z.jpg",
"image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FD21970F5E0>,
"coco_url": "http://mscoco.org/images/321",
"annotations": {
"author": ["ash2"],
"url": ["https://s3-us-west-1.amazonaws.com/coco-ann/coco-train/COCO_train2014_000000000321.jpg"],
"regions": [
{
"segmentation": [
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970FBE0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970F8E0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970F400>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970F790>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970FCA0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970FF40>
],
"name": ["sandwich", "container", "hot dog", "hot dog", "container", "table"],
"area": [63328.0, 141246.0, 31232.0, 28735.0, 265844.0, 307200.0],
"is_stuff": [False, False, False, False, False, True],
"occlude_rate": [0.0, 0.44835251569747925, 0.0, 0.022307291626930237, 0.7122523188591003, 0.9019140601158142],
"order": [1, 2, 3, 4, 5, 6],
"visible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970FD90>,
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970FB50>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD21970FE80>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD219479460>
],
"invisible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD219479160>,
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD2194793A0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD219479490>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FD219479130>
]
}
],
"image_id": [321],
"depth_constraint": ["1-2,1-5,1-6,2-5,2-6,3-4,3-5,3-6,4-5,4-6,5-6"],
"size": [6]
}
}
```
#### BSDS
An example looks as follows.
```json
{
"image_id": 100075,
"license_id": -100,
"file_name": "100075.jpg",
"height": 321,
"width": 481,
"date_captured": "?",
"flickr_url": "https://s3-us-west-1.amazonaws.com/coco-ann/BSDS/BSDS_train_100075.jpg",
"image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=481x321 at 0x7FD22A328CA0>,
"bsds_url": "https://s3-us-west-1.amazonaws.com/coco-ann/BSDS/BSDS_train_100075.jpg",
"annotations": {
"author": ["acherian", "amorgan", "dromero", "jdayal", "kjyou", "ttouneh"],
"url": [
"https://s3-us-west-1.amazonaws.com/coco-ann/BSDS/BSDS_train_100075.jpg",
"https://s3-us-west-1.amazonaws.com/coco-ann/BSDS/BSDS_train_100075.jpg"
],
"regions": [
{
"segmentation": [
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A3288E0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328430>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328070>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328610>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A3280D0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328BE0>
],
"name": ["rocks", "bear", "bear", "bear", "sand", "water"],
"area": [31872.0, 5603.0, 38819.0, 12869.0, 27883.0, 124695.0],
"is_stuff": [False, False, False, False, False, False],
"occlude_rate": [0.0, 0.0, 0.0, 0.3645193874835968, 0.13043789565563202, 0.6487349271774292],
"order": [1, 2, 3, 4, 5, 6],
"visible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328AF0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328A30>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328220>
],
"invisible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A3282E0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328400>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328310>
]
},
{
"segmentation": [
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328340>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328B80>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328670>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328520>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328460>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328D00>
],
"name": ["bear", "bear", "bear", "shore line", "water", "shore line"],
"area": [38772.0, 5178.0, 13575.0, 31977.0, 84224.0, 37418.0],
"is_stuff": [False, False, False, False, False, False],
"occlude_rate": [0.0, 0.0, 0.35889503359794617, 0.1458861082792282, 0.5715591907501221, 0.0],
"order": [1, 2, 3, 4, 5, 6],
"visible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328A00>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328D60>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A3285E0>,
None
],
"invisible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A3286A0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328490>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328100>,
None
]
},
{
"segmentation": [
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A3282B0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328EE0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A3284C0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A3285B0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328C40>
],
"name": ["bear", "bear", "bear", "beach", "ocean"],
"area": [38522.0, 5496.0, 12581.0, 27216.0, 126090.0],
"is_stuff": [False, False, False, False, False],
"occlude_rate": [0.0, 0.0, 0.3449646234512329, 0.11258083581924438, 0.39141881465911865],
"order": [1, 2, 3, 4, 5],
"visible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328940>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD22A328880>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830A00>
],
"invisible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830CD0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830BB0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830940>
]
},
{
"segmentation": [
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830910>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2198308E0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830C70>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830970>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830CA0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2198309A0>
],
"name": ["Bear", "Bear", "Bear", "Water", "ground", "Ground"],
"area": [39133.0, 7120.0, 13053.0, 97052.0, 33441.0, 26313.0],
"is_stuff": [False, False, False, False, False, False],
"occlude_rate": [0.0, 0.0, 0.4422737956047058, 0.5332708358764648, 0.007117012050002813, 0.1584388017654419],
"order": [1, 2, 3, 4, 5, 6],
"visible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830A30>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830C40>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219830B80>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A6820>
],
"invisible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A68B0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A6610>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A69D0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A6730>
]
},
{
"segmentation": [
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A6790>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A6550>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A6850>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A6940>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A66D0>
],
"name": ["bear", "bear", "bear", "water", "rock beach"],
"area": [38378.0, 6130.0, 12649.0, 98377.0, 153118.0],
"is_stuff": [False, False, False, False, False],
"occlude_rate": [0.0, 0.0, 0.41094157099723816, 0.5013265013694763, 0.65973299741745],
"order": [1, 2, 3, 4, 5],
"visible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD268700F10>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2687004F0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2687002B0>
],
"invisible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A64C0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD28805FB50>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD28805F580>
]
},
{
"segmentation": [
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2191A6880>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2480FB190>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2480FB8E0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2480FB070>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2480FB610>
],
"name": ["bear", "bear", "bear", "sand", "water"],
"area": [38802.0, 5926.0, 12248.0, 27857.0, 126748.0],
"is_stuff": [False, False, False, False, False],
"occlude_rate": [0.0, 0.0, 0.37026453018188477, 0.13170836865901947, 0.3872092664241791],
"order": [1, 2, 3, 4, 5],
"visible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219479DC0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219479C70>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219479A90>
],
"invisible_mask": [
None,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219479AF0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD2194795B0>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=481x321 at 0x7FD219479670>
]
}
],
"image_id": [100075, 100075, 100075, 100075, 100075, 100075],
"depth_constraint": [
"1-6,2-4,2-5,2-6,3-4,3-5,3-6,4-5,4-6,5-6",
"1-3,1-4,1-5,2-3,2-4,2-5,3-4,3-5,4-5",
"1-3,1-4,1-6,2-3,2-4,2-6,3-4,3-6,4-5,4-6",
"1-4,1-5,2-3,2-4,2-5,3-4,3-5,4-5",
"1-3,1-4,1-5,2-3,2-4,2-5,3-4,3-5,4-5"
],
"size": [6, 6, 5, 6, 5, 5]
}
}
```
### Data Fields
#### COCO
- `image_id`: Unique numeric ID of the image.
- `license_id`: Unique numeric ID of the image license.
- `file_name`: File name of the image.
- `width`: Image width.
- `height`: Image height.
- `date_captured`: Date the image was captured.
- `flickr_url`: Original flickr url of the image.
- `image`: A `PIL.Image.Image` object containing the image.
- `coco_url`: COCO url of the image.
- `annotations`: Holds a list of `Annotation` data classes:
- `author`: TBD
- `url`: TBD
- `image_id`: TBD
- `depth_constraint`: TBD
- `size`: TBD
- `regions`: TBD
- `segmentation`: TBD
- `name`: TBD
- `area`: TBD
- `is_stuff`: TBD
- `occlude_rate`: TBD
- `order`: TBD
- `visible_mask`: TBD
- `invisible_mask`: TBD
#### BSDS
- `image_id`: Unique numeric ID of the image.
- `license_id`: Unique numeric ID of the image license.
- `file_name`: File name of the image.
- `width`: Image width.
- `height`: Image height.
- `date_captured`: Date the image was captured.
- `flickr_url`: Original flickr url of the image.
- `image`: A `PIL.Image.Image` object containing the image.
- `bsds_url`: BSDS url of the image.
- `annotations`: Holds a list of `Annotation` data classes:
- `author`: TBD
- `url`: TBD
- `image_id`: TBD
- `depth_constraint`: TBD
- `size`: TBD
- `regions`: TBD
- `segmentation`: TBD
- `name`: TBD
- `area`: TBD
- `is_stuff`: TBD
- `occlude_rate`: TBD
- `order`: TBD
- `visible_mask`: TBD
- `invisible_mask`: TBD
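The relationship between these fields can be illustrated on toy masks: for an occluded region, the full amodal `segmentation` splits into a `visible_mask` and an `invisible_mask`, and `occlude_rate` is the fraction of the amodal area that is hidden. The arithmetic below is an assumption based on the field semantics, not code from this loader:

```python
def occlude_rate(amodal_mask, visible_mask):
    """Fraction of the amodal (full-extent) mask hidden by other objects.

    Masks are nested lists of 0/1; the invisible part is amodal minus visible.
    """
    amodal_area = sum(sum(row) for row in amodal_mask)
    visible_area = sum(sum(row) for row in visible_mask)
    invisible_area = amodal_area - visible_area
    return invisible_area / amodal_area

# Toy 2x2 region: 4 amodal pixels, 3 visible -> 25% occluded.
rate = occlude_rate([[1, 1], [1, 1]], [[1, 1], [1, 0]])
```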
### Data Splits
| name | train | validation | test |
|------|------:|-----------:|------:|
| COCO | 2,500 | 1,323 | 1,250 |
| BSDS | 200 | 100 | 200 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
COCOA is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:
- COCO images: [Flickr Terms of use](http://cocodataset.org/#termsofuse)
- COCO annotations: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse)
### Citation Information
```bibtex
@inproceedings{zhu2017semantic,
title={Semantic amodal segmentation},
author={Zhu, Yan and Tian, Yuandong and Metaxas, Dimitris and Doll{\'a}r, Piotr},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={1464--1472},
year={2017}
}
@inproceedings{lin2014microsoft,
title={Microsoft coco: Common objects in context},
author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
booktitle={Computer Vision--ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13},
pages={740--755},
year={2014},
organization={Springer}
}
@article{arbelaez2010contour,
title={Contour detection and hierarchical image segmentation},
author={Arbelaez, Pablo and Maire, Michael and Fowlkes, Charless and Malik, Jitendra},
journal={IEEE transactions on pattern analysis and machine intelligence},
volume={33},
number={5},
pages={898--916},
year={2010},
publisher={IEEE}
}
```
### Contributions
Thanks to [@Wakeupbuddy](https://github.com/Wakeupbuddy) for publishing the COCOA dataset.
|
itsskofficial/llama-2-linkedin-data | 2023-09-10T14:27:29.000Z | [
"license:cc0-1.0",
"region:us"
] | itsskofficial | null | null | null | 0 | 33 | ---
license: cc0-1.0
---
|
ArkaAcharya/SML | 2023-09-13T04:40:42.000Z | [
"region:us"
] | ArkaAcharya | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: output
sequence: string
- name: instruction
sequence: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 11243483
num_examples: 126
download_size: 2563731
dataset_size: 11243483
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SML"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
linhqyy/soict_train | 2023-09-11T01:40:05.000Z | [
"region:us"
] | linhqyy | null | null | null | 0 | 33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: intent
dtype: string
- name: sentence_annotation
dtype: string
- name: entities
list:
- name: type
dtype: string
- name: filler
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 2155619.7
num_examples: 6741
- name: test
num_bytes: 239513.3
num_examples: 749
download_size: 848782
dataset_size: 2395133.0
---
# Dataset Card for "soict_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pszemraj/simplepile-lite | 2023-10-04T07:50:40.000Z | [
"task_categories:fill-mask",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"source_datasets:pszemraj/simple_wikipedia_LM",
"source_datasets:JeanKaddour/minipile",
"language:en",
"license:apache-2.0",
"region:us"
] | pszemraj | null | null | null | 0 | 33 | ---
license: apache-2.0
size_categories:
- 100K<n<1M
source_datasets:
- pszemraj/simple_wikipedia_LM
- JeanKaddour/minipile
task_categories:
- fill-mask
- text-generation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1552622685
num_examples: 452432
- name: validation
num_bytes: 3202346
num_examples: 1000
- name: test
num_bytes: 41145686
num_examples: 11908
download_size: 867798625
dataset_size: 1596970717
language:
- en
---
# Dataset Card for "simplepile-lite"
Interleaved dataset built with the 'first exhausted' stopping strategy. Split counts:
```python
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 452432
})
validation: Dataset({
features: ['text'],
num_rows: 1000
})
test: Dataset({
features: ['text'],
num_rows: 11908
})
})
```
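The 'first exhausted' strategy alternates examples from the source datasets and stops as soon as any source runs out; in 🤗 Datasets this corresponds to `interleave_datasets(..., stopping_strategy="first_exhausted")`. A minimal pure-Python sketch of the idea (not the library's implementation):

```python
def interleave_first_exhausted(*sources):
    """Round-robin over sources, stopping when the shortest is exhausted."""
    iterators = [iter(source) for source in sources]
    out = []
    while True:
        batch = []
        for it in iterators:
            try:
                batch.append(next(it))
            except StopIteration:
                return out  # first source exhausted: stop immediately
        out.extend(batch)

mixed = interleave_first_exhausted(["a1", "a2", "a3"], ["b1", "b2"])
# mixed == ["a1", "b1", "a2", "b2"]
```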
## token counts - train
Computed with the GPT-NeoX tokenizer:
| | token_count |
|:------|-----------------:|
| count | 452432 |
| mean | 868.642 |
| std | 4791.71 |
| min | 3 |
| 25% | 88 |
| 50% | 232 |
| 75% | 590 |
| max | 1.39747e+06 |
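Statistics like the table above can be reproduced by tokenizing each example and describing the resulting length distribution. The sketch below uses a whitespace tokenizer as a stand-in; for the real counts you would pass the GPT-NeoX tokenizer (e.g. `AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")`) instead:

```python
def length_stats(texts, tokenize=str.split):
    """Per-example token counts summarized as count/mean/min/max."""
    counts = sorted(len(tokenize(text)) for text in texts)
    return {
        "count": len(counts),
        "mean": sum(counts) / len(counts),
        "min": counts[0],
        "max": counts[-1],
    }

stats = length_stats(["one two three", "four five", "six"])
# stats == {"count": 3, "mean": 2.0, "min": 1, "max": 3}
```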
--- |
mapama247/wikihow_es | 2023-09-19T12:48:50.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:es",
"license:cc-by-nc-sa-3.0",
"Spanish",
"WikiHow",
"Wiki Articles",
"Tutorials... | mapama247 | null | null | null | 0 | 33 | ---
pretty_name: WikiHow-ES
license: cc-by-nc-sa-3.0
size_categories: 1K<n<10K
language: es
multilinguality: monolingual
task_categories:
- text-classification
- question-answering
- conversational
- summarization
tags:
- Spanish
- WikiHow
- Wiki Articles
- Tutorials
- Step-By-Step
- Instruction Tuning
---
### Dataset Summary
Articles retrieved from the [Spanish WikiHow website](https://es.wikihow.com) in September 2023.
Each article contains a tutorial about a specific topic. The format is always a "How to" question
followed by a detailed step-by-step explanation. In some cases, the response includes several methods.
The main idea is to use this data for instruction tuning of Spanish LLMs, but given its nature it
could also be used for other tasks such as text classification or summarization.
### Languages
- Spanish (ES)
### Usage
To load the full dataset:
```python
from datasets import load_dataset
all_articles = load_dataset("mapama247/wikihow_es")
print(all_articles.num_rows) # output: {'train': 7380}
```
To load only examples from a specific category:
```python
from datasets import load_dataset
sports_articles = load_dataset("mapama247/wikihow_es", "deportes")
print(sports_articles.num_rows) # output: {'train': 201}
```
List of available categories, with the respective number of examples:
```
computadoras-y-electrónica 821
salud 804
pasatiempos 729
cuidado-y-estilo-personal 724
carreras-y-educación 564
en-la-casa-y-el-jardín 496
finanzas-y-negocios 459
comida-y-diversión 454
relaciones 388
mascotas-y-animales 338
filosofía-y-religión 264
arte-y-entretenimiento 254
en-el-trabajo 211
adolescentes 201
deportes 201
vida-familiar 147
viajes 139
automóviles-y-otros-vehículos 100
días-de-fiesta-y-tradiciones 86
```
### Supported Tasks
This dataset can be used to train a model for...
- `instruction-tuning`
- `text-classification`
- `question-answering`
- `conversational`
- `summarization`
## Dataset Structure
### Data Instances
```python
{
'category': str,
'question': str,
'introduction': str,
'answers': List[str],
'short_answers': List[str],
'url': str,
'num_answers': int,
'num_refs': int,
'expert_author': bool,
}
```
### Data Fields
- `category`: The category (from [this list](https://es.wikihow.com/Especial:CategoryListing)) to which the example belongs.
- `label`: Numerical representation of the category, for text classification purposes.
- `question`: The article's title, which always starts with "¿Cómo ...".
- `introduction`: Introductory text that precedes the step-by-step explanation.
- `answers`: List of complete answers, with the full explanation of each step.
- `short_answers`: List of shorter answers that only contain one-sentence steps.
- `num_answers`: The number of alternative answers provided (i.e. the length of `answers`).
- `num_refs`: Number of references provided in the article.
- `expert_author`: Whether the article's author claims to be an expert on the topic.
- `url`: The URL address of the original article.
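For the instruction-tuning use case mentioned above, each article can be flattened into prompt/response pairs, one per alternative answer. A sketch over the fields listed here (the prompt template itself is an assumption, not part of the dataset):

```python
def to_instruction_pairs(article):
    """Turn one WikiHow-ES record into (prompt, response) pairs, one per answer."""
    prompt = f"{article['question']}\n\n{article['introduction']}".strip()
    return [(prompt, answer) for answer in article["answers"]]

pairs = to_instruction_pairs({
    "question": "¿Cómo hervir un huevo?",   # hypothetical record
    "introduction": "Una guía breve.",
    "answers": ["Método 1: ...", "Método 2: ..."],
})
# len(pairs) == 2
```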
### Data Splits
There is only one split (`train`) that contains a total of 7,380 examples.
## Dataset Creation
### Curation Rationale
This dataset was created to align language models with end tasks and user preferences.
### Source Data
How-To questions with detailed step-by-step answers, retrieved from the WikiHow website.
#### Data Collection and Normalization
All articles available in September 2023 were extracted; no filters were applied.
Along with the article's content, some metadata was retrieved as well.
#### Source language producers
WikiHow users. All the content is human-generated.
### Personal and Sensitive Information
The data does not include personal or sensitive information.
## Considerations
### Social Impact
The Spanish-speaking NLP community can benefit from the high-quality data provided by this dataset.
### Bias
No post-processing steps have been applied to mitigate potential social biases.
## Additional Information
### Curators
Marc Pàmes @ Barcelona Supercomputing Center.
### License
This dataset is licensed under a **Creative Commons CC BY-NC-SA 3.0** license.
Quote from [WikiHow's Terms of Use](https://www.wikihow.com/wikiHow:Terms-of-Use):
> All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as
> provided herein. The Creative Commons license allows such user generated text content to be used freely for personal,
> non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of
> the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction
> on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants
> each User of the Service a license to all text content that Users contribute to the Service under the terms and
> conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully.
> You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as
> you wish, whether for commercial or non-commercial purposes.
|
Hyder12/LLM_Bootcamp_Fine_tune_QnA | 2023-09-25T21:53:39.000Z | [
"region:us"
] | Hyder12 | null | null | null | 0 | 33 | Entry not found |
changjacHp/lol_champion_top3_tips | 2023-09-21T06:24:53.000Z | [
"region:us"
] | changjacHp | null | null | null | 0 | 33 | Entry not found |
p1atdev/jexchange | 2023-09-24T17:50:16.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:en",
"language:ja",
"license:cc-by-sa-4.0",
"region:us"
] | p1atdev | null | null | null | 0 | 33 | ---
language:
- en
- ja
license: cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- question-answering
pretty_name: Japanese Stack Exchange Dataset
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: question_score
dtype: int64
- name: answer_score
dtype: int64
- name: tags
sequence: string
splits:
- name: train
num_bytes: 45741049
num_examples: 26944
download_size: 28117047
dataset_size: 45741049
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Japanese Stack Exchange Dataset
A QA dataset built from the [data dump](https://archive.org/download/stackexchange) of [Japanese Stack Exchange](https://japanese.stackexchange.com/), a site where questions about the Japanese language can be asked in English. The data has been processed so that each record pairs a question with an answer.
## Usage
The dataset can be loaded easily with the `datasets` library.
```py
from datasets import load_dataset
dataset = load_dataset("p1atdev/jexchange", split="train")
print(dataset)
#Dataset({
# features: ['id', 'title', 'question', 'answer', 'question_score', 'answer_score', 'tags'],
# num_rows: 26944
#})
```
## Data Structure
- id: ID of the question post
- title: title of the question
- question: body of the question
- answer: body of the answer
- question_score: score given to the question
- answer_score: score given to the answer (10 points are added when the asker accepted it as the best answer)
- tags: tags attached to the question
`question` and `answer` have been converted to Markdown using `html2text`. Note that code blocks are wrapped in ``` fences rather than `[code][/code]` tags.
Example record:
```json
{
"id":"222",
"title":"What's the difference between はずがない, わけがない, and しょうがない?",
"question":"\n\nA slight expansion of the existing thread [What is the difference between\n「はずがない」 and 「わけがない」?](https://japanese.stackexchange.com/questions/171/what-\nis-the-difference-between-hazu-and-wake), but what is the difference or use\ncases for when to use はずがない, わけがない, しょうがない?\n\n",
"answer":"\n\nOn はず and わけ, answers in the original question explain it better than I would\nso I\\'ll leave it to them.\n\nBut on しょうがない, it\\'s totally different from the other two. しょうがない is used when\nyou don\\'t have other choice but to do it. It can also mean \"there\\'s nothing\nelse you can do\"\n\np/s: thanks for expanding my question\n\n",
"question_score":9,
"answer_score":15,
"tags":[
"word-choice",
"formal-nouns"
]
}
```
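For a quick look at a record without loading the full dataset, an entry like the one above can be parsed and filtered with the standard library alone (the score threshold of 10 below is an arbitrary choice for illustration):

```python
import json

# Parse a sample record, abridged to the fields we use here.
record = json.loads("""
{"id": "222",
 "question_score": 9,
 "answer_score": 15,
 "tags": ["word-choice", "formal-nouns"]}
""")

# Keep answers that scored at least 10 -- remember that accepted answers
# already have 10 points added to answer_score.
is_high_quality = record["answer_score"] >= 10
print(is_high_quality)  # True
```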
## License
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja) |
tyzhu/squad_for_gpt_train_1000_100 | 2023-09-25T09:48:13.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: text
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 3564228.0
num_examples: 1000
- name: validation
num_bytes: 371624
num_examples: 100
download_size: 2479909
dataset_size: 3935852.0
---
# Dataset Card for "squad_for_gpt_train_1000_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_wrong_title_v4_train_10_eval_10 | 2023-09-26T14:59:09.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 203084
num_examples: 138
- name: validation
num_bytes: 50820
num_examples: 50
download_size: 65070
dataset_size: 253904
---
# Dataset Card for "squad_wrong_title_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
webimmunization/COVID-19-conspiracy-theories-tweets | 2023-09-29T09:51:53.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"twitter",
"social_science",
"misinformation",
"fake_news",
"conspiracy_theory",
"region:us"
] | webimmunization | null | null | null | 0 | 33 | ---
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- text-classification
tags:
- twitter
- social_science
- misinformation
- fake_news
- conspiracy_theory
---
## Dataset Description
- **Paper:** [More Information Needed]
- **Point of Contact:** izabela.krysinska@doctorate.put.poznan.pl
### Dataset Summary
This dataset consists of 6591 tweets generated by the GPT-3.5 model. Each tweet is paired with a conspiracy theory related to the COVID-19 pandemic. Each item carries a label that represents its output class. The possible labels are support/deny/neutral.
- **support**: the tweet suggests support for the conspiracy theory
- **deny**: the tweet contradicts the conspiracy theory
- **neutral**: the tweet is mostly informative and takes no stance on the conspiracy theory
The dataset can be used to train a classification model.
### Languages
English
## Dataset Structure
### Data Instances
```
{
'tweet': 'Is the Chinese government exploiting the pandemic to gain an economic advantage? #COVIDEconomy #ChineseTradeWar',
'conspiracy_theory': 'CT_5',
'label': 'support'
}
```
### Data Fields
- `tweet`: a text generated by GPT-3.5 (input)
- `conspiracy_theory`: a conspiracy theory identifier
- `label`: label, support/deny/neutral
Conspiracy theories mapping:
1. **CT1** The coronavirus or the government's response to it is a deliberate strategy to create economic instability or to benefit large corporations over small businesses.
2. **CT2** The public is being intentionally misled about the true nature of the Coronavirus, its risks, or the efficacy of certain treatments or prevention methods.
3. **CT3** The Coronavirus was created intentionally, made by humans, or as a bioweapon.
4. **CT4** Politicians or government agencies are intentionally spreading false information, or they have some other motive for the way they are responding to the coronavirus.
5. **CT5** The Chinese government intentionally created or spread the coronavirus to harm other countries.
6. **CT6** The coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population.
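For training a classifier, the string labels need an integer encoding. A minimal sketch follows; the specific integer mapping is an assumption for illustration, not part of the dataset files:

```python
# Hypothetical label encoding for the three stance classes.
LABELS = {"support": 0, "deny": 1, "neutral": 2}

example = {
    "tweet": "Is the Chinese government exploiting the pandemic to gain an economic advantage?",
    "conspiracy_theory": "CT_5",
    "label": "support",
}

# Map the string label to its integer class id.
encoded = LABELS[example["label"]]
print(encoded)  # 0
```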
### Data Splits
The dataset contains a training split only, which consists of 6591 items.
## Dataset Creation
The dataset was generated with GPT-3.5 using the following prompts for the support, deny, and neutral classes, respectively:
**support** Consider the following conspiracy theory: X. Generate 50 tweets that support this conspiracy theory. Try to use hashtags that might promote this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.
**deny** Consider the following conspiracy theory: X. Generate 50 tweets that contradict this conspiracy theory. Try to use hashtags that might debunk this particular conspiracy theory. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.
**neutral** Consider the following conspiracy theory: X. Generate 50 tweets that are about COVID-19 but unrelated to the conspiracy theory. Try to use hashtags that might be used in such a tweet. Try to use words and terms related to the COVID pandemic. Do not quote the conspiracy theory verbatim. Do not repeat tweets and try to make them diversified. Keep each tweet below the 280 character length limit. Present the tweets as a list.
### Known Limitations
The generated tweets are sometimes formulaic and lack diversity.
### Citation Information
```
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
```
|
nlewins/LSK_ceb_en | 2023-10-03T11:13:08.000Z | [
"region:us"
] | nlewins | null | null | null | 0 | 33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: en
dtype: string
- name: ceb
dtype: string
splits:
- name: train
num_bytes: 540208.9233699634
num_examples: 6142
- name: test
num_bytes: 60072.07663003663
num_examples: 683
download_size: 401703
dataset_size: 600281.0
---
# Dataset Card for "LSK_ceb_to_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sakshat98/mistral_data | 2023-10-06T02:43:43.000Z | [
"license:apache-2.0",
"region:us"
] | sakshat98 | null | null | null | 0 | 33 | ---
license: apache-2.0
---
|
kowndinya23/flan2022-mistral-512-150K-random | 2023-10-08T17:17:59.000Z | [
"region:us"
] | kowndinya23 | null | null | null | 0 | 33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 110033732.20294853
num_examples: 150000
- name: validation
num_bytes: 11581708.0
num_examples: 15000
download_size: 79053078
dataset_size: 121615440.20294853
---
# Dataset Card for "flan2022-mistral-512-150K-random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-Chunk-64-testset-biencoder-data-65_25_10-v2 | 2023-10-09T11:18:22.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 13595044
num_examples: 203
download_size: 0
dataset_size: 13595044
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-Chunk-64-testset-biencoder-data-65_25_10-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-kand2-sdxl-wuerst-karlo/a19a65d2 | 2023-10-09T05:20:58.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | null | 0 | 33 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 174
num_examples: 10
download_size: 1323
dataset_size: 174
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "a19a65d2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YuyangHuang/amazonReviewSummary | 2023-10-10T09:15:14.000Z | [
"region:us"
] | YuyangHuang | null | null | null | 0 | 33 | Entry not found |
dengue_filipino | 2023-01-25T14:29:21.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:tl",
"lice... | null | Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets. | @INPROCEEDINGS{8459963,
author={E. D. {Livelo} and C. {Cheng}},
booktitle={2018 IEEE International Conference on Agents (ICA)},
title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},
year={2018},
volume={},
number={},
pages={2-7},
doi={10.1109/AGENTS.2018.8459963}
} | null | 1 | 32 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
language:
- tl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: dengue
pretty_name: Dengue Dataset in Filipino
dataset_info:
features:
- name: text
dtype: string
- name: absent
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: dengue
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: health
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: mosquito
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: sick
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 428553
num_examples: 4015
- name: test
num_bytes: 428553
num_examples: 4015
- name: validation
num_bytes: 54384
num_examples: 500
download_size: 156014
dataset_size: 911490
---
# Dataset Card for Dengue Dataset in Filipino
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Dengue Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [Dengue Dataset in Filipino repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [IEEE paper](https://ieeexplore.ieee.org/document/8459963)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.
## Dataset Structure
### Data Instances
Sample data:
```
{
"text": "Tapos ang dami pang lamok.",
"absent": "0",
"dengue": "0",
"health": "0",
"mosquito": "1",
"sick": "0"
}
```
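Because a tweet can belong to several classes at once, the five binary columns are naturally read as a multi-hot label vector. A small sketch, with the field order taken from the sample above:

```python
# Collapse the five binary string columns into a multi-hot label vector.
FIELDS = ["absent", "dengue", "health", "mosquito", "sick"]

sample = {
    "text": "Tapos ang dami pang lamok.",
    "absent": "0", "dengue": "0", "health": "0",
    "mosquito": "1", "sick": "0",
}

multi_hot = [int(sample[field]) for field in FIELDS]
print(multi_hot)  # [0, 0, 0, 1, 0]
```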
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Licensing Information
[More Information Needed]
### Citation Information
@INPROCEEDINGS{8459963,
author={E. D. {Livelo} and C. {Cheng}},
booktitle={2018 IEEE International Conference on Agents (ICA)},
title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},
year={2018},
volume={},
number={},
pages={2-7},
doi={10.1109/AGENTS.2018.8459963}
}
### Contributions
Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset. |
emotone_ar | 2023-01-25T14:29:56.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"license:unknown",
"region:us"
] | null | Dataset of 10065 tweets in Arabic for Emotion detection in Arabic text | @inbook{inbook,
author = {Al-Khatib, Amr and El-Beltagy, Samhaa},
year = {2018},
month = {01},
pages = {105-114},
title = {Emotional Tone Detection in Arabic Tweets: 18th International Conference, CICLing 2017, Budapest, Hungary, April 17–23, 2017, Revised Selected Papers, Part II},
isbn = {978-3-319-77115-1},
doi = {10.1007/978-3-319-77116-8_8}
} | null | 5 | 32 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Emotional Tone in Arabic
dataset_info:
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': anger
'2': joy
'3': sadness
'4': love
'5': sympathy
'6': surprise
'7': fear
splits:
- name: train
num_bytes: 1541746
num_examples: 10065
download_size: 1563138
dataset_size: 1541746
---
# Dataset Card for Emotional Tone in Arabic
## Table of Contents
- [Dataset Card for Emotional Tone in Arabic](#dataset-card-for-emotional-tone-in-arabic)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [|split|num examples|](#splitnum-examples)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Repository](https://github.com/AmrMehasseb/Emotional-Tone)
- **Paper:** [Emotional Tone Detection in Arabic Tweets](https://www.researchgate.net/publication/328164296_Emotional_Tone_Detection_in_Arabic_Tweets_18th_International_Conference_CICLing_2017_Budapest_Hungary_April_17-23_2017_Revised_Selected_Papers_Part_II)
- **Point of Contact:** [Amr Al-Khatib](https://github.com/AmrMehasseb)
### Dataset Summary
Dataset of 10065 tweets in Arabic for Emotion detection in Arabic text
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Arabic.
## Dataset Structure
### Data Instances
example:
```
>>> {'label': 0, 'tweet': 'الاوليمبياد الجايه هكون لسه ف الكليه ..'}
```
### Data Fields
- "tweet": plain text tweet in Arabic
- "label": emotion class label
The class distribution of the dataset is as follows:
| label | Label description | Count |
|---------|---------| ------- |
|0 |none | 1550 |
|1 |anger | 1444 |
|2 |joy | 1281 |
|3 |sadness | 1256 |
|4 |love | 1220 |
|5 |sympathy | 1062 |
|6 |surprise | 1045 |
|7 |fear | 1207 |
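The classes are mildly imbalanced, so a weighted loss can help when training a classifier. One common recipe is inverse-frequency weighting over the counts above (a sketch; the weighting scheme is a modeling choice, not part of the dataset):

```python
# Inverse-frequency class weights: total / (num_classes * count).
counts = {"none": 1550, "anger": 1444, "joy": 1281, "sadness": 1256,
          "love": 1220, "sympathy": 1062, "surprise": 1045, "fear": 1207}

total = sum(counts.values())  # 10065, matching the dataset size
weights = {label: total / (len(counts) * n) for label, n in counts.items()}

# Rarer classes get weights above 1, common classes below 1.
print(round(weights["surprise"], 2))
```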
### Data Splits
The dataset is not split.
| | train |
|----------|--------:|
| no split | 10,065 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inbook{inbook,
author = {Al-Khatib, Amr and El-Beltagy, Samhaa},
year = {2018},
month = {01},
pages = {105-114},
title = {Emotional Tone Detection in Arabic Tweets: 18th International Conference, CICLing 2017, Budapest, Hungary, April 17–23, 2017, Revised Selected Papers, Part II},
isbn = {978-3-319-77115-1},
doi = {10.1007/978-3-319-77116-8_8}
}
```
### Contributions
Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset. |
newsph | 2022-11-03T16:07:51.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:fil",... | null | Large-scale dataset of Filipino news articles. Sourced for the NewsPH-NLI Project (Cruz et al., 2020). | @article{cruz2020investigating,
title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
journal={arXiv preprint arXiv:2010.11574},
year={2020}
} | null | 1 | 32 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- fil
- tl
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: newsph-nli
pretty_name: NewsPH-NLI
dataset_info:
features:
- name: text
dtype: string
config_name: newsph
splits:
- name: train
num_bytes: 298833914
num_examples: 2190465
download_size: 104086466
dataset_size: 298833914
---
# Dataset Card for NewsPH
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Filipino Text Benchmarks](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:**
- **Paper:** [Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation](https://arxiv.org/abs/2010.11574)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Blaise Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
Raw collection of news articles in Filipino. Used to produce the NewsPH-NLI dataset in Cruz et al. (2020)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Tagalog/Filipino
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `text` (`str`)
The dataset is in plaintext and only has one field ("text"). It can be used for language modeling.
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jcblaisecruz02](https://github.com/jcblaisecruz02) for adding this dataset. |
per_sent | 2023-01-25T14:42:26.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-MPQA-KBP Challenge-MediaRank",
"language:en",
"license:unknown",
"a... | null | Person SenTiment (PerSenT) is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotation for 5.3k documents and 38k paragraphs covering 3.2k unique entities.
The dataset consists of sentiment annotations on news articles about people. For each article, annotators judge what the author’s sentiment is towards the main (target) entity of the article. The annotations also include similar judgments on paragraphs within the article.
To split the dataset, we divided the entities into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection. In our collection, four entities were the main entity in nearly 800 articles each. To keep these entities from dominating the train or test splits, we moved them to a separate test collection. We split the remaining entities into training, dev, and test sets at random. Thus our collection includes one standard test set consisting of articles drawn at random (Test Standard -- `test_random`), while the other test set contains multiple articles about a small number of popular entities (Test Frequent -- `test_fixed`). | @inproceedings{bastan2020authors,
title={Author's Sentiment Prediction},
author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},
year={2020},
eprint={2011.06128},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 32 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-MPQA-KBP Challenge-MediaRank
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: persent
pretty_name: PerSenT
dataset_info:
features:
- name: DOCUMENT_INDEX
dtype: int64
- name: TITLE
dtype: string
- name: TARGET_ENTITY
dtype: string
- name: DOCUMENT
dtype: string
- name: MASKED_DOCUMENT
dtype: string
- name: TRUE_SENTIMENT
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph0
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph1
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph2
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph3
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph4
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph5
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph6
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph7
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph8
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph9
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph10
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph11
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph12
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph13
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph14
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph15
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
splits:
- name: train
num_bytes: 14595163
num_examples: 3355
- name: test_random
num_bytes: 2629500
num_examples: 579
- name: test_fixed
num_bytes: 3881800
num_examples: 827
- name: validation
num_bytes: 2322922
num_examples: 578
download_size: 23117196
dataset_size: 23429385
---
# Dataset Card for PerSenT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PerSenT](https://stonybrooknlp.github.io/PerSenT/)
- **Repository:** [https://github.com/MHDBST/PerSenT](https://github.com/MHDBST/PerSenT)
- **Paper:** [arXiv](https://arxiv.org/abs/2011.06128)
- **Leaderboard:** NA
- **Point of Contact:** [Mohaddeseh Bastan](mbastan@cs.stonybrook.edu)
### Dataset Summary
PerSenT is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotations for 5.3k documents and 38k paragraphs covering 3.2k unique entities. For each article, annotators judge what the author’s sentiment is towards the main
(target) entity of the article. The annotations also include similar judgments on paragraphs within the article.
### Supported Tasks and Leaderboards
Sentiment Classification: Each document consists of multiple paragraphs. Each paragraph is labeled separately (Positive, Neutral, Negative) and the author’s sentiment towards the whole document is included as a document-level label.
### Languages
English
## Dataset Structure
### Data Instances
```json
{'DOCUMENT': "Germany's Landesbank Baden Wuertemberg won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n The bank was several state-owned German institutions to run into trouble last year after it ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of the bank are also being investigated by German authorities for risking or damaging the bank's capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of the bank and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that the bank would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from the bank's shareholders all of them public authorities or state-owned including the state of Baden-Wuerttemberg the region's savings bank association and the city of Stuttgart.",
'DOCUMENT_INDEX': 1,
'MASKED_DOCUMENT': "[TGT] won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n [TGT] was several state-owned German institutions to run into trouble last year after [TGT] ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of [TGT] are also being investigated by German authorities for risking or damaging [TGT]'s capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of [TGT] and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that [TGT] would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from [TGT]'s shareholders all of them public authorities or state-owned including the state of Baden-Wuerttemberg the region's savings bank association and the city of Stuttgart.",
'Paragraph0': 2,
'Paragraph1': 0,
'Paragraph10': -1,
'Paragraph11': -1,
'Paragraph12': -1,
'Paragraph13': -1,
'Paragraph14': -1,
'Paragraph15': -1,
'Paragraph2': 0,
'Paragraph3': 1,
'Paragraph4': 1,
'Paragraph5': -1,
'Paragraph6': -1,
'Paragraph7': -1,
'Paragraph8': -1,
'Paragraph9': -1,
'TARGET_ENTITY': 'Landesbank Baden Wuertemberg',
'TITLE': 'German bank LBBW wins EU bailout approval',
'TRUE_SENTIMENT': 0}
```
### Data Fields
- DOCUMENT_INDEX: ID of the document per original dataset
- TITLE: Title of the article
- DOCUMENT: Text of the article
- MASKED_DOCUMENT: Text of the article with the target entity masked with `[TGT]` token
- TARGET_ENTITY: The entity that the author is expressing opinion about
- TRUE_SENTIMENT: Label for entire article
- Paragraph{0..15}: Label for each paragraph in the article
**Note**: Labels are one of `[Negative, Neutral, Positive]`. Missing labels were replaced with `-1`.
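As a small sketch (assuming the field names shown above, with up to 16 `Paragraph{i}` columns), the valid per-paragraph labels of one example can be collected while skipping the `-1` placeholders used for absent paragraphs:

```python
def paragraph_labels(example, max_paragraphs=16):
    """Collect the per-paragraph labels of one PerSenT example,
    skipping the -1 placeholders used for absent paragraphs."""
    labels = []
    for i in range(max_paragraphs):
        label = example[f"Paragraph{i}"]
        if label != -1:
            labels.append(label)
    return labels

# Hypothetical example mirroring the data instance shown above
example = {f"Paragraph{i}": -1 for i in range(16)}
example.update({"Paragraph0": 2, "Paragraph1": 0, "Paragraph2": 0,
                "Paragraph3": 1, "Paragraph4": 1})
print(paragraph_labels(example))  # [2, 0, 0, 1, 1]
```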
### Data Splits
To split the dataset, entities were divided into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection. In this collection, four entities were the main entity in nearly 800 articles. To keep these entities from dominating the train or test splits, their articles were moved to a separate test collection. The remainder was split into training, dev, and test sets at random. Thus the collection includes one standard test set consisting of articles drawn at random (Test Standard), while the other test set contains multiple articles about a small number of popular entities (Test Frequent).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Articles were selected from 3 sources:
1. MPQA (Deng and Wiebe, 2015; Wiebe et al., 2005): This dataset contains news articles manually annotated for opinions, beliefs, emotions, sentiments, speculations, etc. It also has target annotations which are entities and event anchored to the heads of noun or verb phrases. All decisions on this dataset are made on sentence-level and over short spans.
2. KBP Challenge (Ellis et al., 2014): This resource contains TAC 2014 KBP English sentiment slot filling challenge dataset. This is a document-level sentiment filling dataset. In this task, given an entity and a sentiment (positive/negative) from the document, the goal is to find entities toward which
the original entity holds the given sentimental view. We selected documents from this resource which have been used in the following similar work in sentiment analysis task (Choi et al., 2016).
3. Media Rank (Ye and Skiena, 2019): This dataset ranks about 50k news sources along different aspects. It is also used for classifying political ideology of news articles (Kulkarni et al., 2018).
Pre-processing steps:
- First, we find all the person entities in each article using the Stanford NER (Named Entity Recognition) tagger (Finkel et al., 2005), and all mentions of them using co-reference resolution (Clark and Manning, 2016; Co, 2017).
- We removed articles which are not likely to have a main entity of focus. We used a simple heuristic of removing articles in which the most frequent person entity is mentioned only three times or less (even when counting co-referent mentions).
- For the articles that remain we deemed the most frequent entity to be the main entity of the article. We also filtered out extremely long and extremely short articles to keep the articles which have at least 3 paragraphs and at most 16 paragraphs.
Documents are randomly separated into train, dev, and two test sets. We ensure that each entity appears in only one of the sets. Our goal here is to avoid easy-to-learn biases over entities. To keep the most frequent entities from dominating the training or the test sets, we remove articles that covered the most frequent entities and use them as a separate test set (referred to as the frequent test set) in addition to the randomly drawn standard test set.
### Annotations
#### Annotation process
We obtained document- and paragraph-level annotations with the help of Amazon Mechanical Turk workers. The workers first verified that the target entity we provide is indeed the main entity in the document. Then, they rated each paragraph in a document that contained a direct mention or a reference to the target
entity. Last, they rated the sentiment towards the entity based on the entire document. In both cases, the workers made assessments about the author's view based on what they said about the target entity. For both paragraph- and document-level sentiment, the workers chose from five rating categories: Negative,
Slightly Negative, Neutral, Slightly Positive, or Positive. We then combine the fine-grained annotations to obtain three coarse-grained classes: Negative, Neutral, or Positive.
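The fine-to-coarse collapse described above can be sketched as a simple lookup. The exact folding rule is an assumption here (the "Slightly" ratings are merged into their polar classes), since the paper excerpt does not spell it out:

```python
# Assumed mapping: "Slightly" ratings fold into their polar class.
FINE_TO_COARSE = {
    "Negative": "Negative",
    "Slightly Negative": "Negative",
    "Neutral": "Neutral",
    "Slightly Positive": "Positive",
    "Positive": "Positive",
}

ratings = ["Slightly Negative", "Neutral", "Positive"]
print([FINE_TO_COARSE[r] for r in ratings])  # ['Negative', 'Neutral', 'Positive']
```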
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{bastan2020authors,
title={Author's Sentiment Prediction},
author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},
year={2020},
eprint={2011.06128},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset. |
allegro/klej-cbd | 2021-11-29T19:14:20.000Z | [
"region:us"
] | allegro | null | null | null | 0 | 32 | Entry not found |
tner/tweebank_ner | 2022-11-27T20:59:13.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"arxiv:2201.07281",
"region:us"
] | tner | [Tweebank NER](https://arxiv.org/abs/2201.07281) | @article{DBLP:journals/corr/abs-2201-07281,
author = {Hang Jiang and
Yining Hua and
Doug Beeferman and
Deb Roy},
title = {Annotating the Tweebank Corpus on Named Entity Recognition and Building
{NLP} Models for Social Media Analysis},
journal = {CoRR},
volume = {abs/2201.07281},
year = {2022},
url = {https://arxiv.org/abs/2201.07281},
eprinttype = {arXiv},
eprint = {2201.07281},
timestamp = {Fri, 21 Jan 2022 13:57:15 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 3 | 32 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1k<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TweeBank NER
---
# Dataset Card for "tner/tweebank_ner"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://arxiv.org/abs/2201.07281](https://arxiv.org/abs/2201.07281)
- **Dataset:** TweeBank NER
- **Domain:** Twitter
- **Number of Entity:** 4
### Dataset Summary
TweeBank NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `LOC`, `MISC`, `PER`, `ORG`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['RT', '@USER2362', ':', 'Farmall', 'Heart', 'Of', 'The', 'Holidays', 'Tabletop', 'Christmas', 'Tree', 'With', 'Lights', 'And', 'Motion', 'URL1087', '#Holiday', '#Gifts'],
'tags': [8, 8, 8, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/tweebank_ner/raw/main/dataset/label.json).
```python
{
"B-LOC": 0,
"B-MISC": 1,
"B-ORG": 2,
"B-PER": 3,
"I-LOC": 4,
"I-MISC": 5,
"I-ORG": 6,
"I-PER": 7,
"O": 8
}
```
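Using the `label2id` mapping above, the integer `tags` of an example can be decoded back into BIO strings, e.g. for the first tokens of the `train` instance shown earlier:

```python
label2id = {"B-LOC": 0, "B-MISC": 1, "B-ORG": 2, "B-PER": 3,
            "I-LOC": 4, "I-MISC": 5, "I-ORG": 6, "I-PER": 7, "O": 8}
# Invert the mapping to decode integer tags back to BIO labels
id2label = {v: k for k, v in label2id.items()}

tags = [8, 8, 8, 2, 8]  # first five tokens of the train example above
print([id2label[t] for t in tags])  # ['O', 'O', 'O', 'B-ORG', 'O']
```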
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|tweebank_ner | 1639| 710 |1201|
### Citation Information
```
@article{DBLP:journals/corr/abs-2201-07281,
author = {Hang Jiang and
Yining Hua and
Doug Beeferman and
Deb Roy},
title = {Annotating the Tweebank Corpus on Named Entity Recognition and Building
{NLP} Models for Social Media Analysis},
journal = {CoRR},
volume = {abs/2201.07281},
year = {2022},
url = {https://arxiv.org/abs/2201.07281},
eprinttype = {arXiv},
eprint = {2201.07281},
timestamp = {Fri, 21 Jan 2022 13:57:15 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
gigant/oldbookillustrations | 2022-08-03T17:35:37.000Z | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-to-image",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"l... | gigant | null | null | null | 12 | 32 | ---
annotations_creators:
- expert-generated
language:
- en
- fr
- de
language_creators:
- expert-generated
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: Old Book Illustrations
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- lam
- 1800-1900
task_categories:
- text-to-image
- image-to-text
- image-to-image
task_ids:
- image-captioning
---
# Dataset Card for Old Book Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://www.oldbookillustrations.com/)**
### Dataset Summary
The Old Book Illustrations dataset contains 4172 illustrations scanned from old books. This collection was collected & curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/).
The webmaster of Old Book Illustrations kindly allowed us to scrape this information in order to create this dataset for the [BigLAM initiative](https://huggingface.co/biglam).
### Languages
The captions and descriptions are mostly in English but can contain some sentences from other languages such as French or German.
For instance, you can find this description, which contains a French sentence:
>The caption reads in the original French: Vue de l’aqueduc de Salones qui conduisait l’eau à Spalatro.
## Dataset Structure
Each row contains information gathered from the page of an illustration on the website [Old Book Illustrations](https://www.oldbookillustrations.com/). As of July 2022, there are 4172 illustrations in this dataset.
### Data Fields
* `rawscan`: the image as originally scanned from the book, without further processing
* `1600px`: the cleaned image, resized to a width of 1600 pixels (height can vary)
* `info_url`: URL to the illustration page on oldbookillustrations.com
* `info_src`: URL to an icon-sized version of the image
* `info_alt`: short description of the image
* `artist_name`: artist name
* `artist_date`: birth date of the artist
* `artist_countries`: list of the countries the artist is from
* `book_title`: original title of the book the illustration is extracted from
* `book_authors`: list of the authors of the book
* `book_publishers`: list of the publishers of the book
* `openlibrary-url`: URL to the openlibrary entry for the book
* `tags`: list of keywords for this illustration on oldbookillustrations.com
* `illustration_source_name`: list of the sources for this illustration
* `illustration_source_url`: list of the URL for these sources
* `illustration_subject`: category of the subject represented in the illustration
* `illustration_format`: category of the format of the illustration
* `image_title`: title of the image
* `image_caption`: caption of the image. Seems to be the caption that appears next to the image in the book, translated to English if in another language
* `image_description`: longer description of the image. If there is one, it also quotes the caption in the original language
* `rawscan_url`: URL to the rawscan image on oldbookillustration.com
* `1600px_url`: URL to the cleaned image on oldbookillustration.com
## Dataset Creation
### Curation Rationale
This collection was collected & curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/).
This version contains all the data that was available on the website as of July 2022, but the website is being actively maintained so if you want more old book illustrations, make sure to check [Old Book Illustrations](https://www.oldbookillustrations.com/).
### Source Data
#### Initial Data Collection and Normalization
Initial data is gathered from the website [Old Book Illustrations](https://www.oldbookillustrations.com/). The sources of the illustration scans are specified for each entry in the columns `illustration_source_name` and `illustration_source_url`.
### Personal and Sensitive Information
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Considerations for Using the Data
### Discussion of Biases
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Additional Information
### Dataset Curators
The Old Book Illustrations collection is curated and maintained by the team of the [Old Book Illustrations website](https://www.oldbookillustrations.com/).
### Licensing Information
[Old Book Illustrations](https://www.oldbookillustrations.com/) website reads:
>We don’t limit the use of the illustrations available on our site, but we accept no responsibility regarding any problem, legal or otherwise, which might result from such use. More specifically, we leave it up to users to make sure that their project complies with the copyright laws of their country of residence. Text content (descriptions, translations, etc.) is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The Old Book Illustrations webmaster mentioned that most images are public domain in the US and Europe, but there can be some exceptions. Examples are the illustrations from [*Early poems of William Morris*](https://www.oldbookillustrations.com/titles/early-poems-of-william-morris/), whose illustrator died in 1955, so her work is not public domain in Europe as of 2022, or [*Under the hill*](https://www.oldbookillustrations.com/titles/under-the-hill/), which was published in the US in 1928 and is therefore not public domain there.
### Citation Information
```bibtex
@misc{old book illustrations_2007,
url={https://www.oldbookillustrations.com/},
journal={Old Book Illustrations}, year={2007}}
```
### Contributions
Thanks to [@gigant](https://huggingface.co/gigant) ([@giganttheo](https://github.com/giganttheo)) for adding this dataset. |
imodels/diabetes-readmission | 2022-08-14T15:38:59.000Z | [
"task_categories:tabular-classification",
"size_categories:100K<n<1M",
"interpretability",
"fairness",
"medicine",
"region:us"
] | imodels | null | null | null | 1 | 32 | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: diabetes-readmission
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- interpretability
- fairness
- medicine
task_categories:
- tabular-classification
task_ids: []
---
Port of the diabetes-readmission dataset from UCI (link [here](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008)). See details there and use carefully.
Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb).
The target is the binary outcome `readmitted`.
### Sample usage
Load the data:
```
from datasets import load_dataset
import pandas as pd

dataset = load_dataset("imodels/diabetes-readmission")
df = pd.DataFrame(dataset['train'])
X = df.drop(columns=['readmitted'])
y = df['readmitted'].values
```
Fit a model:
```
import imodels
import numpy as np
m = imodels.FIGSClassifier(max_rules=5)
m.fit(X, y)
print(m)
```
Evaluate:
```
df_test = pd.DataFrame(dataset['test'])
X_test = df_test.drop(columns=['readmitted'])
y_test = df_test['readmitted'].values
print('accuracy', np.mean(m.predict(X_test) == y_test))
``` |
valhalla/emoji-dataset | 2022-10-05T11:39:52.000Z | [
"region:us"
] | valhalla | null | null | null | 2 | 32 | Entry not found |
allenai/wcep_dense_oracle | 2022-11-06T21:49:24.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | null | 0 | 32 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by documents retrieved with a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8590 | 0.6490 | 0.6490 | 0.6490 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8578 | 0.6326 | 0.6326 | 0.6326 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8678 | 0.6631 | 0.6631 | 0.6631 | |
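Under the `"oracle"` strategy, `k` equals the number of gold input documents per example, so Rprec, Precision@k, and Recall@k all coincide. A minimal sketch of that computation, with hypothetical document IDs:

```python
def oracle_metrics(retrieved, relevant):
    """With k = len(relevant), precision@k == recall@k == Rprec."""
    k = len(relevant)
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / k

retrieved = ["d3", "d1", "d9", "d4"]   # ranked retriever output
relevant = {"d1", "d3", "d7"}          # gold input documents
print(oracle_metrics(retrieved, relevant))  # 2/3 ≈ 0.667
```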
laion/laion2B-en-md5 | 2023-01-07T00:35:33.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | null | 2 | 32 | ---
license: cc-by-4.0
---
|
instruction-tuning-sd/cartoonization | 2023-05-11T15:16:08.000Z | [
"task_categories:image-to-image",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | instruction-tuning-sd | null | null | null | 4 | 32 | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: cartoonized_image
dtype: image
splits:
- name: train
num_bytes: 3257571330
num_examples: 5000
download_size: 3296272284
dataset_size: 3257571330
size_categories:
- 1K<n<10K
language:
- en
task_categories:
- image-to-image
---
# Instruction-prompted cartoonization dataset
This dataset was created from 5000 images randomly sampled from the [Imagenette dataset](https://github.com/fastai/imagenette). For more
details on how the dataset was created, check out [this directory](https://github.com/sayakpaul/instruction-tuned-sd/tree/main/data_preparation).
The following figure depicts the data preparation workflow:
<p align="center">
<img src="https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/cartoonization_data_wheel.png" width=600/>
</p>
## Known limitations and biases
The dataset was derived from Imagenette, which, in turn, was derived from [ImageNet](https://www.image-net.org/). So, naturally, this
dataset inherits the limitations and biases of ImageNet.
## Licensing
The dataset was derived from Imagenette, which, in turn, was derived from [ImageNet](https://www.image-net.org/). So, this dataset's license
is the same as ImageNet. |
koutch/staqc | 2023-03-27T14:53:22.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"code",
"arxiv:1803.09371",
"region:us"
] | koutch | StaQC (Stack Overflow Question-Code pairs) is a dataset of around 148K Python and 120K SQL domain question-code pairs,
which are automatically mined from Stack Overflow using a Bi-View Hierarchical Neural Network,
as described in the paper "StaQC: A Systematically Mined Question-Code Dataset from Stack Overflow" (WWW'18). | @inproceedings{yao2018staqc,
title={StaQC: A Systematically Mined Question-Code Dataset from Stack Overflow},
author={Yao, Ziyu and Weld, Daniel S and Chen, Wei-Peng and Sun, Huan},
booktitle={Proceedings of the 2018 World Wide Web Conference on World Wide Web},
pages={1693--1703},
year={2018},
organization={International World Wide Web Conferences Steering Committee}
} | null | 3 | 32 | ---
dataset_info:
- config_name: mca_python
features:
- name: id
dtype: int32
- name: question_id
dtype: int32
- name: question
dtype: string
- name: snippet
sequence: string
splits:
- name: train
num_bytes: 23286786
num_examples: 40391
download_size: 72054260
dataset_size: 23286786
- config_name: mca_sql
features:
- name: id
dtype: int32
- name: question_id
dtype: int32
- name: question
dtype: string
- name: snippet
sequence: string
splits:
- name: train
num_bytes: 15164206
num_examples: 26052
download_size: 50304531
dataset_size: 15164206
- config_name: sca_python
features:
- name: id
dtype: int32
- name: question_id
dtype: int32
- name: question
dtype: string
- name: snippet
dtype: string
splits:
- name: train
num_bytes: 39678168
num_examples: 85294
download_size: 47378850
dataset_size: 39678168
- config_name: sca_sql
features:
- name: id
dtype: int32
- name: question_id
dtype: int32
- name: question
dtype: string
- name: snippet
dtype: string
splits:
- name: train
num_bytes: 28656467
num_examples: 75637
download_size: 34194025
dataset_size: 28656467
- config_name: man_python
features:
- name: id
dtype: int32
- name: question_id
dtype: int32
- name: question
dtype: string
- name: snippet
sequence:
- name: text
dtype: string
- name: is_sda
dtype: bool
splits:
- name: train
num_bytes: 1445103
num_examples: 2052
download_size: 71250225
dataset_size: 1445103
- config_name: man_sql
features:
- name: id
dtype: int32
- name: question_id
dtype: int32
- name: question
dtype: string
- name: snippet
sequence:
- name: text
dtype: string
- name: is_sda
dtype: bool
splits:
- name: train
num_bytes: 1123721
num_examples: 1587
download_size: 49745860
dataset_size: 1123721
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
tags:
- code
pretty_name: staqc
size_categories:
- 10K<n<100K
---
# Dataset Card for StaQC (A Systematically Mined Question-Code Dataset from Stack Overflow)
## Dataset Description
- **Homepage: [GitHub](https://github.com/LittleYUYU/StackOverflow-Question-Code-Dataset)**
- **Paper: [StaQC: A Systematically Mined Question-Code Dataset from Stack Overflow](https://arxiv.org/abs/1803.09371)**
### Dataset Summary
StaQC (Stack Overflow Question-Code pairs) is a large dataset of around 148K Python and 120K SQL domain question-code pairs,
which are automatically mined from Stack Overflow using a Bi-View Hierarchical Neural Network. StaQC is collected from three sources: multi-code answer posts, single-code answer posts, and manual annotations on multi-code answer posts.
The dataset was originally released by the main authors on [GitHub](https://github.com/LittleYUYU/StackOverflow-Question-Code-Dataset). This version is a *non-modified* redistributed copy (under the [license](#licensing-information) permission) made available on the hub for easier access.
#### Standalone solutions
As noted in the paper, the authors *define a code snippet as a code solution when the
questioner can solve the problem solely based on it (also named as
“standalone” solution).*
#### Manual annotations
The manual annotations are the collection of multi-code answer posts for which each code snippet was annotated with a boolean indicating whether or not the snippet is a *standalone solution* to the question.
#### Multi-code answer posts
A *Multi-code answer post* is an (accepted) answer post that contains multiple code snippets,
some of which may not be a *standalone* code solution to the question (see Section 1 in [paper](http://web.cse.ohio-state.edu/~sun.397/docs/StaQC-www18.pdf)).
For example, in [this multi-code answer post](https://stackoverflow.com/a/5996949),
the third code snippet is not a code solution to the question "How to limit a number to be within a specified range? (Python)".
Note: the multi-code answer posts also contain the manual annotations.
#### Single-code answer posts
A *Single-code answer post* is an (accepted) answer post that contains only one code snippet.
We pair such code snippets with the question title as a question-code pair.
### Supported Tasks and Leaderboards
This dataset can be used for Natural Language to Code Generation tasks.
### Languages
Python, SQL, English
## Dataset Structure
### Data Instances
Each configuration corresponds to one of the three parts, in a given programming language.
There are three parts for the dataset:
- mca (Multi-code answer posts)
- sca (Single-code answer posts)
- man (Manual annotations)
And two programming/query languages:
- python
- sql
One can obtain a configuration as a combination of a part and a programming language. For instance, one can obtain the automatically mined multi-code answers in python using:
```python
dataset = load_dataset("koutch/staqc", 'mca_python')
DatasetDict({
train: Dataset({
features: ['id', 'question_id', 'question', 'snippet'],
num_rows: 40391
})
})
```
or the manual annotations using:
```python
dataset = load_dataset("koutch/staqc", 'man_sql')
DatasetDict({
train: Dataset({
features: ['id', 'question_id', 'question', 'snippet'],
num_rows: 1587
})
})
```
#### Manual annotations
The manual annotations contain, for a given stackoverflow questions, for each individual code block in the accepted answer of that post, information on whether or not the given code block is a *standalone* solution to the question asked (the question title).
```
{
'question_id': 5947137,
'question': 'How can I use a list comprehension to extend a list in python?',
'snippet': {'text': ['import itertools as it\n\nreturn sum(it.imap(doSomething, originalList), [])\n',
'return sum(map(doSomething, originalList), [])\n',
'return sum((doSomething(x) for x in originalList), [])\n',
'accumulationList = []\nfor x in originalList:\n accumulationList.extend(doSomething(x))\nreturn accumulationList\n'],
'is_sda': [True, True, True, True]}
}
```
#### Multi-code answer posts
```
{
'question_id': 35349290,
'question': 'Python: Generating YYMM string between two dates',
'snippet': ['start_year = 2005\nend_year = 2007\nstart_month = 3\nend_month = 2\nyymm = [(yy, mm) for yy in range(start_year, end_year + 1) for mm in range(1, 13)\n if (start_year, start_month) <= (yy, mm) <= (end_year, end_month)]\n',
"formatted_yymm = ['{:>02}{:>02}.mat'.format(yy % 100, mm) for yy, mm in yymm]\n"]
}
```
#### Single-code answer posts
```
{
'question_id': 19387200,
'question': 'Python: get OS language',
'snippet': "import locale\nloc = locale.getlocale() # get current locale\nlocale.getdefaultlocale() # Tries to determine the default locale settings and returns them as a tuple of the form (language code, encoding); e.g, ('en_US', 'UTF-8').\n"
}
```
### Data Fields
- `question_id`: id of the stackoverflow question
- `question`: title of the stackoverflow question repurposed as the natural language intent
- `snippet`: mined or annotated standalone solution(s) (potentially) answering the question
- `is_sda`: for the manual annotations, whether or not the given code snippet is a standalone solution to the question.
### Data Splits
Each configuration of the dataset contains only a training split.
## Dataset Creation
### Source Data
StackOverflow data dump.
### Annotations
See section 2.3 "Annotating QC Pairs for Model Training" of the [paper](https://arxiv.org/abs/1803.09371)
## Additional Information
### Licensing Information
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
### Citation Information
If you use the dataset or the code in your research, please cite the following paper:
```
@inproceedings{yao2018staqc,
title={StaQC: A Systematically Mined Question-Code Dataset from Stack Overflow},
author={Yao, Ziyu and Weld, Daniel S and Chen, Wei-Peng and Sun, Huan},
booktitle={Proceedings of the 2018 World Wide Web Conference on World Wide Web},
pages={1693--1703},
year={2018},
organization={International World Wide Web Conferences Steering Committee}
}
```
### Contributions information
I did *not* contribute to the *creation* of this dataset, only to the redistribution. All credits should be attributed to the original authors. |
datacrunch/finnish_alpaca | 2023-07-20T14:13:12.000Z | [
"license:mit",
"region:us"
] | datacrunch | null | null | null | 0 | 32 | ---
license: mit
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 20402896
num_examples: 51715
download_size: 13168174
dataset_size: 20402896
---
|
nyuuzyou/AnimeHeadsv3 | 2023-07-02T23:24:38.000Z | [
"task_categories:object-detection",
"license:wtfpl",
"region:us"
] | nyuuzyou | null | null | null | 2 | 32 | ---
task_categories:
- object-detection
license: wtfpl
dataset_info:
- config_name: With augmentation
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype: string
splits:
- name: train
num_bytes: 2817954
num_examples: 8037
- name: validation
num_bytes: 37647
num_examples: 100
- name: test
num_bytes: 8425
num_examples: 20
download_size: 590150250
dataset_size: 2864026
- config_name: Without augmentation
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype: string
splits:
- name: train
num_bytes: 932413
num_examples: 2659
- name: validation
num_bytes: 37647
num_examples: 100
- name: test
num_bytes: 7393
num_examples: 18
download_size: 512953012
dataset_size: 977453
---
# AnimeHeadsv3 Object Detection Dataset
The AnimeHeadsv3 Object Detection Dataset is a collection of anime and art images, including manga pages, that have been annotated with object bounding boxes for use in object detection tasks.
## Contents
There are two versions of the dataset available:
- Dataset with augmentation: contains 8157 images.
- Dataset without augmentation: contains 2777 images.

Both versions are split into training, validation, and testing sets. The images were collected from various sources and include a variety of anime and art styles, including manga. The annotations were created using the COCO format, with each annotation file containing the bounding box coordinates and label for each object in the corresponding image. The dataset has only one class, named "head".
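COCO-style boxes are stored as `[x, y, width, height]`; a small sketch of converting one to corner coordinates (toy values, not taken from the dataset):

```python
def coco_to_corners(bbox):
    """Convert a COCO [x, y, width, height] box to (x_min, y_min, x_max, y_max)."""
    x, y, w, h = bbox
    return (x, y, x + w, y + h)

bbox = [10.0, 20.0, 64.0, 48.0]  # hypothetical head box
corners = coco_to_corners(bbox)
area = bbox[2] * bbox[3]          # COCO 'area' is width * height
print(corners, area)  # (10.0, 20.0, 74.0, 68.0) 3072.0
```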
## Preprocessing
The dataset with augmentation has the following preprocessing parameters:
- Resize: fit within 640x640
The dataset without augmentation does not have any preprocessing applied.
## Augmentation Parameters
The following augmentation parameters were applied to the dataset with augmentation:
- Outputs per training example: 3
- Flip: horizontal
- Saturation: between -40% and +40%
- Blur: up to 4px
- Noise: up to 4% of pixels
|
ruanchaves/assin2_por_Latn_to_eng_Latn | 2023-04-22T19:12:21.000Z | [
"region:us"
] | ruanchaves | null | null | null | 1 | 32 | ---
dataset_info:
features:
- name: sentence_pair_id
dtype: int64
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: relatedness_score
dtype: float32
- name: entailment_judgment
dtype:
class_label:
names:
'0': NONE
'1': ENTAILMENT
- name: __language__
dtype: string
splits:
- name: train
num_bytes: 802897
num_examples: 6500
- name: test
num_bytes: 313661
num_examples: 2448
- name: validation
num_bytes: 62531
num_examples: 500
download_size: 0
dataset_size: 1179089
---
# Dataset Card for "assin2_por_Latn_to_eng_Latn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anhdungitvn/vi-general-64g | 2023-04-24T02:41:17.000Z | [
"region:us"
] | anhdungitvn | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 69680883709
num_examples: 241461581
- name: test
num_bytes: 6612740
num_examples: 24157
- name: validation
num_bytes: 6278123
num_examples: 22710
download_size: 36565651699
dataset_size: 69693774572
---
# Dataset Card for "vi-general-64g"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lucasmccabe-lmi/oig_small_chip2_python | 2023-04-25T22:30:03.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"code",
"python",
"code-generation",
"region:us"
] | lucasmccabe-lmi | null | null | null | 2 | 32 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1930175
num_examples: 4742
download_size: 741759
dataset_size: 1930175
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- code
- python
- code-generation
size_categories:
- 1K<n<10K
---
# Dataset Card for "oig_small_chip2_python"
### Dataset Summary
From [LAION's Open Instruction Generalist (OIG) dataset](https://huggingface.co/datasets/laion/OIG), we use a 4775-prompt segment pertaining to Python code generation. OIG text elements are formatted as dialogue excerpts between a "human" and "bot" agent. The code generation prompt is parsed from the initial "human" agent's statement and the resultant response from the "bot" agent's statement. We then reformat the text/response pairs according to the format of the original Alpaca dataset; that is, instruction/input/output triplets. In cases where the instruction field does not specify the code language, we provide "Write the code in Python" in the input field. Otherwise, the input field is left blank.
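The human/bot parsing described above can be sketched roughly as follows (a simplification with toy dialogue markers; the exact OIG markers and edge cases are not reproduced here):

```python
def dialogue_to_triplet(text):
    """Split a single-turn '<human>: ... <bot>: ...' exchange into an
    Alpaca-style instruction/input/output triplet (toy marker format)."""
    human_part, bot_part = text.split("<bot>:", 1)
    instruction = human_part.replace("<human>:", "").strip()
    # If the instruction does not name the language, pin it via the input field.
    needs_hint = "python" not in instruction.lower()
    return {
        "instruction": instruction,
        "input": "Write the code in Python" if needs_hint else "",
        "output": bot_part.strip(),
    }

example = "<human>: Reverse a list. <bot>: my_list[::-1]"
print(dialogue_to_triplet(example))
```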
The OIG dataset was prepared by LAION, and released under the Apache 2.0 license.
Numbers:
- **Prompts**: 4775
- **Tokens**: 578083 using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer (counting instruction+input+output) |
merve/turkish_instructions | 2023-04-27T17:21:11.000Z | [
"license:apache-2.0",
"region:us"
] | merve | null | null | null | 4 | 32 | ---
license: apache-2.0
---
|
philschmid/sql-create-context-copy | 2023-05-01T10:37:47.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:table-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"SQL",
"code",
"NLP",
"text-to-sql",
"context-sql",
"spider",
"wikisql",
"sqlglot",
"region:us"
] | philschmid | null | null | null | 2 | 32 | ---
license: cc-by-4.0
task_categories:
- text-generation
- question-answering
- table-question-answering
language:
- en
tags:
- SQL
- code
- NLP
- text-to-sql
- context-sql
- spider
- wikisql
- sqlglot
pretty_name: sql-create-context
size_categories:
- 10K<n<100K
duplicated_from: b-mc2/sql-create-context
---
# Fork of [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context)
#### Overview
This dataset builds from [WikiSQL](https://huggingface.co/datasets/wikisql) and [Spider](https://huggingface.co/datasets/spider).
There are 78,577 examples of natural language queries, SQL CREATE TABLE statements, and SQL queries answering the question using the CREATE statement as context. This dataset was built with text-to-SQL LLMs in mind, intending to prevent the hallucination of column and table names often seen when models are trained on text-to-SQL datasets. The CREATE TABLE statement can often be copied and pasted from different DBMSs and provides table names, column names, and their data types. By providing just the CREATE TABLE statement as context, we can hopefully provide better grounding for models without having to provide actual rows of data, limiting token usage and exposure to private, sensitive, or proprietary data.
#### Cleansing and Augmentation
Cleansing and data augmentation have been done on the combined WikiSQL and Spider data. I used [SQLGlot](https://github.com/tobymao/sqlglot) on queries from Spider and WikiSQL and parsed them into different tables and columns, then inferred column data types based on the usage of `>` `<` operators as well as the use of `MIN()` `MAX()` `AVG()` `SUM()` on columns. While this isn't perfect, it increases the likelihood of inferring the correct datatype for a column; the columns otherwise default to VARCHAR type. These tables and columns are then used to generate CREATE TABLE statements using the inferred types. SQLGlot is used again to ensure both the SQL queries and CREATE TABLE statements parse without errors.
Some queries that do not have column names, e.g. SELECT * FROM table, have a default Id column added to the CREATE TABLE statement. Some other queries which use the generic `table` as the FROM table have instead been changed to a variation of `table_name_1` or some other number which is also reflected in the CREATE TABLE statement.
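The type-inference heuristic can be illustrated without SQLGlot (a toy regex sketch of the idea only — the actual pipeline parses real SQL ASTs):

```python
import re

def infer_column_type(query, column):
    """Guess INTEGER when a column appears with comparison operators or
    numeric aggregates, falling back to VARCHAR (toy heuristic)."""
    numeric_patterns = [
        rf"\b{column}\s*[<>]",                            # comparisons: col > 5
        rf"\b(?:MIN|MAX|AVG|SUM)\s*\(\s*{column}\s*\)",   # numeric aggregates
    ]
    for pat in numeric_patterns:
        if re.search(pat, query, flags=re.IGNORECASE):
            return "INTEGER"
    return "VARCHAR"

q = "SELECT Status, AVG(Population) FROM city WHERE Population > 1000 GROUP BY Status"
print(infer_column_type(q, "Population"))  # INTEGER
print(infer_column_type(q, "Status"))      # VARCHAR
```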
#### TODO
- Further augment the data by converting queries and CREATE TABLE statements into different SQL dialects, this can be done with SQLGlot. Reference to the dialect might also be added to the question.
- Support other informative contexts beyond CREATE TABLE
Random sample:
```json
{
"question": "Please show the themes of competitions with host cities having populations larger than 1000.",
"context": "CREATE TABLE city (City_ID VARCHAR, Population INTEGER); CREATE TABLE farm_competition (Theme VARCHAR, Host_city_ID VARCHAR)",
"answer": "SELECT T2.Theme FROM city AS T1 JOIN farm_competition AS T2 ON T1.City_ID = T2.Host_city_ID WHERE T1.Population > 1000"
},
{
"question": "Please show the different statuses of cities and the average population of cities with each status.",
"context": "CREATE TABLE city (Status VARCHAR, Population INTEGER)",
"answer": "SELECT Status, AVG(Population) FROM city GROUP BY Status"
},
``` |
gretelai/symptom_to_diagnosis | 2023-05-24T17:58:04.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] | gretelai | null | null | null | 4 | 32 | ---
license: apache-2.0
task_categories:
- text-classification
task_ids:
- multi-class-classification
language:
- en
tags:
- medical
pretty_name: Gretel/symptoms_to_diagnosis
size_categories:
- 10K<n<100K
---
# Dataset Summary
This dataset contains natural language descriptions of symptoms labeled with 22 corresponding diagnoses. `Gretel/symptom_to_diagnosis` provides 1065 symptom descriptions in the English language labeled with 22 diagnoses, focusing on fine-grained single-domain diagnosis.
## Data Fields
Each row contains the following fields:
* `input_text` : A string field containing symptoms
* `output_text` : A string field containing a diagnosis
Example:
```
{
"output_text": "drug reaction",
"input_text": "I've been having headaches and migraines, and I can't sleep. My whole body shakes and twitches. Sometimes I feel lightheaded."
}
```
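Per-diagnosis counts like those in the table below can be tallied from the records (a sketch over toy records in the card's schema, not the real files):

```python
from collections import Counter

# Toy records in the dataset's schema.
records = [
    {"input_text": "I have a rash.", "output_text": "psoriasis"},
    {"input_text": "My head hurts.", "output_text": "migraine"},
    {"input_text": "Itchy patches.", "output_text": "psoriasis"},
]

counts = Counter(r["output_text"] for r in records)
print(counts.most_common())  # [('psoriasis', 2), ('migraine', 1)]
```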
## Diagnoses
This table contains the count of each diagnosis in the train and test splits.
| | Diagnosis | train.jsonl | test.jsonl |
|---:|:--------------------------------|--------------:|-------------:|
| 0 | drug reaction | 40 | 8 |
| 1 | allergy | 40 | 10 |
| 2 | chicken pox | 40 | 10 |
| 3 | diabetes | 40 | 10 |
| 4 | psoriasis | 40 | 10 |
| 5 | hypertension | 40 | 10 |
| 6 | cervical spondylosis | 40 | 10 |
| 7 | bronchial asthma | 40 | 10 |
| 8 | varicose veins | 40 | 10 |
| 9 | malaria | 40 | 10 |
| 10 | dengue | 40 | 10 |
| 11 | arthritis | 40 | 10 |
| 12 | impetigo | 40 | 10 |
| 13 | fungal infection | 39 | 9 |
| 14 | common cold | 39 | 10 |
| 15 | gastroesophageal reflux disease | 39 | 10 |
| 16 | urinary tract infection | 39 | 9 |
| 17 | typhoid | 38 | 9 |
| 18 | pneumonia | 37 | 10 |
| 19 | peptic ulcer disease | 37 | 10 |
| 20 | jaundice | 33 | 7 |
| 21 | migraine | 32 | 10 |
## Data Splits
The data is split into 80% train (853 examples, 167kb) and 20% test (212 examples, 42kb).
## Dataset Creation
Data was filtered to remove unwanted categories and updated using an LLM to create language more consistent with how a patient would describe symptoms in natural language to a doctor.
## Source Data
This dataset was adapted based on the [Symptom2Disease](https://www.kaggle.com/datasets/niyarrbarman/symptom2disease) dataset from Kaggle.
## Personal and Sensitive Information
The symptoms in this dataset were modified from their original format using an LLM and do not contain personal data.
## Limitations
This dataset is licensed Apache 2.0 and free for use. |
Patt/HellaSwag_thai | 2023-06-13T23:15:58.000Z | [
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | null | 0 | 32 | ---
{}
---
# Dataset Card for HellaSwag_TH
### Dataset Description
This dataset is a Thai-translated version of [hellaswag](https://huggingface.co/datasets/hellaswag), produced with Google Translate and using the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to score the quality of the Thai translations.
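The translation-scoring idea — embed the source and the translation and compare the vectors — can be sketched with plain cosine similarity (toy vectors; the real pipeline used the Multilingual Universal Sentence Encoder):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

eng = [3.0, 4.0, 0.0]  # hypothetical embedding of the English sentence
tha = [4.0, 3.0, 0.0]  # hypothetical embedding of the Thai translation
print(cosine_similarity(eng, tha))  # 0.96
```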
### Languages
- EN
- TH
|
diyarhamedi/HowTo100M-subtitles-small | 2023-06-05T05:43:47.000Z | [
"region:us"
] | diyarhamedi | null | null | null | 2 | 32 | ---
dataset_info:
features:
- name: video_id
dtype: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: rank
dtype: int64
- name: task_id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 71867294
num_examples: 16015
download_size: 39671033
dataset_size: 71867294
---
# HowTo100M-subtitles-small
The subtitles from a subset of the HowTo100M dataset. |
d0rj/alpaca-cleaned-ru | 2023-07-13T07:25:01.000Z | [
"task_categories:text-generation",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:yahma/alpaca-cleaned",
"language:ru",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] | d0rj | null | null | null | 2 | 32 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 74829755.0
num_examples: 51760
download_size: 36596664
dataset_size: 74829755.0
license: cc-by-4.0
language:
- ru
multilinguality:
- monolingual
tags:
- instruction-finetuning
pretty_name: Alpaca-Cleaned (ru)
task_categories:
- text-generation
size_categories:
- 10K<n<100K
source_datasets:
- yahma/alpaca-cleaned
language_creators:
- translated
---
# alpaca-cleaned-ru
Translated version of [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) into Russian.
## Dataset Description
- **Repository:** https://github.com/gururise/AlpacaDataCleaned |
KaiLv/UDR_SST-2 | 2023-06-21T12:49:13.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 853094
num_examples: 6911
- name: test
num_bytes: 224519
num_examples: 1821
- name: debug
num_bytes: 617046
num_examples: 5000
download_size: 1109867
dataset_size: 1694659
---
# Dataset Card for "UDR_SST-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dongyoung4091/shp-generated_flan_t5_large | 2023-06-22T07:46:50.000Z | [
"region:us"
] | dongyoung4091 | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 6358460
num_examples: 100
download_size: 1586813
dataset_size: 6358460
---
# Dataset Card for "shp-generated_flan_t5_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kjj0/4chanpol-openaimod | 2023-06-23T21:28:11.000Z | [
"arxiv:2001.07487",
"region:us"
] | kjj0 | null | null | null | 1 | 32 | ---
dataset_info:
features:
- name: text
dtype: string
- name: sexual
dtype: float64
- name: hate
dtype: float64
- name: violence
dtype: float64
- name: self-harm
dtype: float64
- name: sexual/minors
dtype: float64
- name: hate/threatening
dtype: float64
- name: violence/graphic
dtype: float64
splits:
- name: train
num_bytes: 23614214277
num_examples: 114647404
download_size: 14061193653
dataset_size: 23614214277
---
# Dataset Card for "kjj0/4chanpol-openaimod"
This dataset contains 114M unique posts made between June 2016 and November 2019.
This is a variant of the dataset provided by [Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board](https://arxiv.org/abs/2001.07487).
We have deduplicated posts and stripped metadata to create an easily accessible collection of unique texts.
We have also provided OpenAI moderation scores. A variant without these scores can be found at [kjj0/4chanpol](https://huggingface.co/datasets/kjj0/4chanpol).
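One way to use the per-category moderation scores is simple threshold filtering — a sketch over toy rows (the threshold and helper are illustrative, not part of the dataset):

```python
CATEGORIES = ["sexual", "hate", "violence", "self-harm",
              "sexual/minors", "hate/threatening", "violence/graphic"]

def is_clean(row, threshold=0.5):
    """Keep a row only if every moderation score is below the threshold."""
    return all(row[c] < threshold for c in CATEGORIES)

rows = [
    {"text": "a", **{c: 0.01 for c in CATEGORIES}},
    {"text": "b", **{c: 0.01 for c in CATEGORIES}, "hate": 0.9},
]
clean = [r["text"] for r in rows if is_clean(r)]
print(clean)  # ['a']
```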
```
@inproceedings{papasavva2020raiders,
title={Raiders of the lost kek: 3.5 years of augmented 4chan posts from the politically incorrect board},
author={Papasavva, Antonis and Zannettou, Savvas and De Cristofaro, Emiliano and Stringhini, Gianluca and Blackburn, Jeremy},
booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
volume={14},
pages={885--894},
year={2020}
}
``` |
causal-lm/finance | 2023-06-25T02:49:02.000Z | [
"region:us"
] | causal-lm | null | null | null | 6 | 32 | Entry not found |
hezarai/lscp-pos-500k | 2023-09-02T08:41:54.000Z | [
"task_categories:token-classification",
"language:fa",
"region:us"
] | hezarai | Language recognition has been significantly advanced in recent years by means of modern machine learning methods such as deep learning
and benchmarks with rich annotations. However, research is still limited in low-resource formal languages. This consists of a significant
gap in describing the colloquial language especially for low-resourced ones such as Persian. In order to target this gap for low resource languages,
we propose a “Large Scale Colloquial Persian Dataset” (LSCP). LSCP is hierarchically organized in a semantic taxonomy that focuses on
multi-task informal Persian language understanding as a comprehensive problem. This encompasses the recognition of multiple semantic aspects in the human-level sentences,
which naturally captures from the real-world sentences. We believe that further investigations and processing, as well as the application of novel algorithms and methods,
can strengthen enriching computerized understanding and processing of low resource languages. The proposed corpus consists of 120M sentences resulted from 27M tweets
annotated with parsing tree, part-of-speech tags, sentiment polarity and translation in five different languages. | @inproceedings{abdi-khojasteh-etal-2020-lscp,
title = "{LSCP}: Enhanced Large Scale Colloquial {P}ersian Language Understanding",
author = "Abdi Khojasteh, Hadi and
Ansari, Ebrahim and
Bohlouli, Mahdi",
booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.776",
pages = "6323--6327",
abstract = "Language recognition has been significantly advanced in recent years by means of modern machine learning methods such as deep learning and benchmarks with rich annotations. However, research is still limited in low-resource formal languages. This consists of a significant gap in describing the colloquial language especially for low-resourced ones such as Persian. In order to target this gap for low resource languages, we propose a {``}Large Scale Colloquial Persian Dataset{''} (LSCP). LSCP is hierarchically organized in a semantic taxonomy that focuses on multi-task informal Persian language understanding as a comprehensive problem. This encompasses the recognition of multiple semantic aspects in the human-level sentences, which naturally captures from the real-world sentences. We believe that further investigations and processing, as well as the application of novel algorithms and methods, can strengthen enriching computerized understanding and processing of low resource languages. The proposed corpus consists of 120M sentences resulted from 27M tweets annotated with parsing tree, part-of-speech tags, sentiment polarity and translation in five different languages.",
language = "English",
ISBN = "979-10-95546-34-4",
} | null | 0 | 32 | ---
task_categories:
- token-classification
language:
- fa
pretty_name: LSCP Dataset (500k samples version)
---
This is a 500,000-sample version of the original [LSCP dataset](https://iasbs.ac.ir/~ansari/lscp/) that contains only the text and part-of-speech tags and is used for sequence labeling.
### Citation
```bibtex
@InProceedings{abdikhojasteh:2020:LREC,
author = {Abdi Khojasteh, Hadi and Ansari, Ebrahim and Bohlouli, Mahdi},
title = {LSCP: Enhanced Large Scale Colloquial Persian Language Understanding},
booktitle = {Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)},
year = {2020}
address = {Marseille, France},
publisher = {European Language Resources Association}
pages = {6323--6327},
url = {https://www.aclweb.org/anthology/2020.lrec-1.776}
}
``` |
ssbuild/alpaca_prosocial-dialog | 2023-07-09T06:53:06.000Z | [
"license:apache-2.0",
"region:us"
] | ssbuild | null | null | null | 0 | 32 | ---
license: apache-2.0
---
|
TrainingDataPro/facial-emotion-recognition-dataset | 2023-09-14T16:40:22.000Z | [
"task_categories:image-classification",
"task_categories:image-to-image",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | The dataset consists of images capturing people displaying 7 distinct emotions
(anger, contempt, disgust, fear, happiness, sadness and surprise).
Each image in the dataset represents one of these specific emotions,
enabling researchers and machine learning practitioners to study and develop
models for emotion recognition and analysis.
The images encompass a diverse range of individuals, including different
genders, ethnicities, and age groups*. The dataset aims to provide
a comprehensive representation of human emotions, allowing for a wide range of
use cases. | @InProceedings{huggingface:dataset,
title = {facial-emotion-recognition-dataset},
author = {TrainingDataPro},
year = {2023}
} | null | 1 | 32 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
- image-to-image
tags:
- code
dataset_info:
features:
- name: set_id
dtype: int32
- name: neutral
dtype: image
- name: anger
dtype: image
- name: contempt
dtype: image
- name: disgust
dtype: image
- name: fear
dtype: image
- name: happy
dtype: image
- name: sad
dtype: image
- name: surprised
dtype: image
- name: age
dtype: int8
- name: gender
dtype: string
- name: country
dtype: string
splits:
- name: train
num_bytes: 22981
num_examples: 19
download_size: 453786356
dataset_size: 22981
---
# Facial Emotion Recognition Dataset
The dataset consists of images capturing people displaying **7 distinct emotions** (*anger, contempt, disgust, fear, happiness, sadness and surprise*). Each image in the dataset represents one of these specific emotions, enabling researchers and machine learning practitioners to study and develop models for emotion recognition and analysis.
The images encompass a diverse range of individuals, including different *genders, ethnicities, and age groups*. The dataset aims to provide a comprehensive representation of human emotions, allowing for a wide range of use cases.
### The dataset's possible applications:
- automatic emotion detection
- mental health analysis
- artificial intelligence (AI) and computer vision
- entertainment industries
- advertising and market research
- security and surveillance

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=facial-emotion-recognition-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **images**: folders, one per person, each containing images of 8 different expressed emotions (including neutral); each file is named according to the expressed emotion
- **.csv** file: contains information about people in the dataset
### Emotions in the dataset:
- anger
- contempt
- disgust
- fear
- happy
- sad
- surprised
### File with the extension .csv
includes the following information for each set of media files:
- **set_id**: id of the set of images,
- **gender**: gender of the person,
- **age**: age of the person,
- **country**: country of the person
# Images for facial emotion recognition might be collected in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=facial-emotion-recognition-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
yestaehyung/llama_fashiongen | 2023-07-21T05:45:59.000Z | [
"license:openrail",
"region:us"
] | yestaehyung | null | null | null | 0 | 32 | ---
license: openrail
---
|
sarahpann/PRM800K | 2023-07-25T05:22:33.000Z | [
"region:us"
] | sarahpann | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: label
struct:
- name: finish_reason
dtype: string
- name: steps
list:
- name: chosen_completion
dtype: int64
- name: completions
list:
- name: flagged
dtype: bool
- name: rating
dtype: int64
- name: text
dtype: string
- name: human_completion
struct:
- name: corrected_rating
dtype: int64
- name: flagged
dtype: bool
- name: rating
dtype: 'null'
- name: source
dtype: string
- name: text
dtype: string
- name: total_time
dtype: int64
- name: is_initial_screening_question
dtype: bool
- name: generation
dtype: int64
- name: timestamp
dtype: string
- name: labeler
dtype: string
- name: question
struct:
- name: ground_truth_answer
dtype: string
- name: ground_truth_solution
dtype: string
- name: pre_generated_answer
dtype: string
- name: pre_generated_steps
sequence: string
- name: pre_generated_verifier_score
dtype: float64
- name: problem
dtype: string
- name: is_quality_control_question
dtype: bool
splits:
- name: train
num_bytes: 343127415.4610406
num_examples: 93794
- name: test
num_bytes: 18061070.538959395
num_examples: 4937
download_size: 149151492
dataset_size: 361188486.0
---
# Dataset Card for "PRM800K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ImagenHub/Text_Guided_Image_Editing | 2023-10-05T18:34:28.000Z | [
"task_categories:image-to-image",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"arxiv:2310.01596",
"region:us"
] | ImagenHub | null | null | null | 1 | 32 | ---
language:
- en
license: cc-by-4.0
size_categories:
- n<1K
task_categories:
- image-to-image
dataset_info:
features:
- name: img_id
dtype: string
- name: turn_index
dtype: int32
- name: source_img
dtype: image
- name: mask_img
dtype: image
- name: instruction
dtype: string
- name: source_global_caption
dtype: string
- name: target_global_caption
dtype: string
- name: target_local_caption
dtype: string
- name: target_img
dtype: image
splits:
- name: dev
num_bytes: 1521276668.0
num_examples: 528
- name: filtered
num_bytes: 504007147.0
num_examples: 179
- name: extra
num_bytes: 709468665.0
num_examples: 249
download_size: 2734685875
dataset_size: 2734752480.0
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: filtered
path: data/filtered-*
- split: extra
path: data/extra-*
---
# Dataset Card
Dataset used in [ImagenHub](https://arxiv.org/abs/2310.01596).
# Citation
Please kindly cite our paper if you use our code, data, models or results:
```
@article{ku2023imagenhub,
title={ImagenHub: Standardizing the evaluation of conditional image generation models},
  author={Max Ku and Tianle Li and Kai Zhang and Yujie Lu and Xingyu Fu and Wenwen Zhuang and Wenhu Chen},
journal={arXiv preprint arXiv:2310.01596},
year={2023}
}
``` |
Aznor/MeetingBank-original | 2023-08-07T09:50:07.000Z | [
"task_categories:summarization",
"license:cc-by-nc-sa-4.0",
"arxiv:2305.17529",
"region:us"
] | Aznor | null | null | null | 0 | 32 | ---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
---
This dataset is the original train-validation-test split from the [MeetingBank dataset](https://meetingbank.github.io/) used to train and evaluate the summarisation models in the original paper cited below.
**Overview**
MeetingBank is a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains 1,366 meetings with over 3,579 hours of video, as well as transcripts, PDF documents of meeting minutes, agendas, and other metadata. On average, a council meeting is 2.6 hours long and its transcript contains over 28k tokens, making it a valuable testbed for meeting summarizers and for extracting structure from meeting videos. The dataset contains 6,892 segment-level summarization instances for training and evaluating summarization performance.
**Acknowledgement**
Please cite the following paper in work that makes use of this dataset:
[MeetingBank: A Benchmark Dataset for Meeting Summarization](https://arxiv.org/abs/2305.17529) \
Yebowen Hu, Tim Ganter, Hanieh Deilamsalehy, Franck Dernoncourt, Hassan Foroosh, Fei Liu \
In main conference of Association for Computational Linguistics (ACL’23), Toronto, Canada.
**Bibtex**
```
@inproceedings{hu-etal-2023-meetingbank,
title = "MeetingBank: A Benchmark Dataset for Meeting Summarization",
author = "Yebowen Hu and Tim Ganter and Hanieh Deilamsalehy and Franck Dernoncourt and Hassan Foroosh and Fei Liu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)",
month = July,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
```
**Resources**
The MeetingBank dataset will be hosted at Zenodo. The audio files of each meeting will be hosted individually on Hugging Face. All resources will include meeting audio, transcripts, the MeetingBank main JSON file, summaries from 6 systems, and human annotations.
**Summary, Segments Transcripts and VideoList:** [zenodo](https://zenodo.org/record/7989108)
**Meeting Audios:** [HuggingFace](https://huggingface.co/datasets/huuuyeah/MeetingBank_Audio)
**Meeting Transcripts:** [HuggingFace](https://huggingface.co/datasets/lytang/MeetingBank-transcript)
Some scripts can be found in github repo [MeetingBank_Utils](https://github.com/YebowenHu/MeetingBank-utils) |
andreaskoepf/megacode2-min100 | 2023-08-13T16:16:47.000Z | [
"license:other",
"region:us"
] | andreaskoepf | null | null | null | 1 | 32 | ---
license: other
---
|
google/dreambooth | 2023-08-15T16:46:24.000Z | [
"license:cc-by-4.0",
"arxiv:2208.12242",
"region:us"
] | google | null | null | null | 34 | 32 | ---
configs:
- config_name: default
data_files:
- split: train
path: "dataset/backpack/*.jpg"
- config_name: backpack
data_files:
- split: train
path: "dataset/backpack/*.jpg"
- config_name: backpack_dog
data_files:
- split: train
path: "dataset/backpack_dog/*.jpg"
- config_name: bear_plushie
data_files:
- split: train
path: "dataset/bear_plushie/*.jpg"
- config_name: berry_bowl
data_files:
- split: train
path: "dataset/berry_bowl/*.jpg"
- config_name: can
data_files:
- split: train
path: "dataset/can/*.jpg"
- config_name: candle
data_files:
- split: train
path: "dataset/candle/*.jpg"
- config_name: cat
data_files:
- split: train
path: "dataset/cat/*.jpg"
- config_name: cat2
data_files:
- split: train
path: "dataset/cat2/*.jpg"
- config_name: clock
data_files:
- split: train
path: "dataset/clock/*.jpg"
- config_name: colorful_sneaker
data_files:
- split: train
path: "dataset/colorful_sneaker/*.jpg"
- config_name: dog
data_files:
- split: train
path: "dataset/dog/*.jpg"
- config_name: dog2
data_files:
- split: train
path: "dataset/dog2/*.jpg"
- config_name: dog3
data_files:
- split: train
path: "dataset/dog3/*.jpg"
- config_name: dog5
data_files:
- split: train
path: "dataset/dog5/*.jpg"
- config_name: dog6
data_files:
- split: train
path: "dataset/dog6/*.jpg"
- config_name: dog7
data_files:
- split: train
path: "dataset/dog7/*.jpg"
- config_name: dog8
data_files:
- split: train
path: "dataset/dog8/*.jpg"
- config_name: duck_toy
data_files:
- split: train
path: "dataset/duck_toy/*.jpg"
- config_name: fancy_boot
data_files:
- split: train
path: "dataset/fancy_boot/*.jpg"
- config_name: grey_sloth_plushie
data_files:
- split: train
path: "dataset/grey_sloth_plushie/*.jpg"
- config_name: monster_toy
data_files:
- split: train
path: "dataset/monster_toy/*.jpg"
- config_name: pink_sunglasses
data_files:
- split: train
path: "dataset/pink_sunglasses/*.jpg"
- config_name: poop_emoji
data_files:
- split: train
path: "dataset/poop_emoji/*.jpg"
- config_name: rc_car
data_files:
- split: train
path: "dataset/rc_car/*.jpg"
- config_name: red_cartoon
data_files:
- split: train
path: "dataset/red_cartoon/*.jpg"
- config_name: robot_toy
data_files:
- split: train
path: "dataset/robot_toy/*.jpg"
- config_name: shiny_sneaker
data_files:
- split: train
path: "dataset/shiny_sneaker/*.jpg"
- config_name: teapot
data_files:
- split: train
path: "dataset/teapot/*.jpg"
- config_name: vase
data_files:
- split: train
path: "dataset/vase/*.jpg"
- config_name: wolf_plushie
data_files:
- split: train
path: "dataset/wolf_plushie/*.jpg"
license: cc-by-4.0
---
# Dataset Card for "dreambooth"
## Dataset of the Google paper DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
The dataset includes 30 subjects of 15 different classes. 9 of these subjects are live subjects (dogs and cats) and 21 are objects. The dataset contains a variable number of images per subject (4-6). Images of the subjects are usually captured under different conditions, in different environments, and from different angles.
We include a file dataset/prompts\_and\_classes.txt which contains all of the prompts used in the paper for live subjects and objects, as well as the class name used for the subjects.
The images have either been captured by the paper authors or sourced from www.unsplash.com.
The dataset/references\_and\_licenses.txt file contains a list of reference links to the images on www.unsplash.com, attribution to the photographers, and the license of each image.
### [project page](https://dreambooth.github.io/) | [arxiv](https://arxiv.org/abs/2208.12242)
## Academic Citation
If you use this work please cite:
```
@inproceedings{ruiz2023dreambooth,
title={Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation},
author={Ruiz, Nataniel and Li, Yuanzhen and Jampani, Varun and Pritch, Yael and Rubinstein, Michael and Aberman, Kfir},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2023}
}
```
## Disclaimer
This is not an officially supported Google product. |
rombodawg/LosslessMegaCodeTrainingV3_1.6m_Evol | 2023-09-07T21:20:51.000Z | [
"license:other",
"region:us"
] | rombodawg | null | null | null | 15 | 32 | ---
license: other
---
This is the ultimate code training data, created to be lossless so that the model does not lose abilities it had previously, such as logical skills, after training on this dataset. The reason this dataset is so large is to ensure that as the model learns to code, it continues to follow regular instructions and does not lose previously learned abilities. It is the result of all my work gathering data, testing AI models, and discovering what makes coding models perform well or poorly.
The content of this dataset is roughly 50% coding instruction data and 50% non-coding instruction data, amounting to 1.5 million evol-instruct-formatted lines of data.
The purpose of having 50% non-coding instruction data is to preserve the model's logic and reasoning skills while it trains on coding. The lack of such skills has been observed to be a major issue with coding models such as WizardCoder-15b and NewHope, but training on this dataset alleviates that issue while still providing similar levels of coding knowledge.
This dataset is a combination of the following datasets, along with additional deduping and uncensoring techniques:
Coding:
- https://huggingface.co/datasets/rombodawg/2XUNCENSORED_MegaCodeTraining188k
- https://huggingface.co/datasets/rombodawg/Rombodawgs_commitpackft_Evolinstruct_Converted
Instruction following:
- https://huggingface.co/datasets/rombodawg/2XUNCENSORED_alpaca_840k_Evol_USER_ASSIST
- https://huggingface.co/datasets/garage-bAInd/Open-Platypus
|
sandipanp/public_dataset | 2023-08-16T10:27:26.000Z | [
"region:us"
] | sandipanp | null | null | null | 0 | 32 | Entry not found |
jinaai/code_exercises | 2023-09-07T08:18:18.000Z | [
"task_categories:text-generation",
"size_categories:100M<n<1B",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | jinaai | null | null | null | 10 | 32 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 1121418005
num_examples: 1468146
download_size: 486193162
dataset_size: 1121418005
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- en
size_categories:
- 100M<n<1B
license: cc-by-nc-sa-4.0
---
# Dataset Card for "code_exercises"
# Code exercise
This dataset is composed of a diverse set of \~120k Python code exercises (~120m total tokens) generated by ChatGPT 3.5. It is designed to distill ChatGPT 3.5's knowledge of Python coding tasks into other (potentially smaller) models. The exercises were generated by following the steps described in the [related GitHub repository](https://github.com/jina-ai/textbook).
The generated exercises follow the format of the [Human Eval benchmark](https://github.com/openai/human-eval). Each training sample is split into a Python function signature with a descriptive docstring, and a solution to the exercise.
This approach is inspired by several works on synthetic dataset generation, especially by _Textbooks Are All You Need_ [(Gunasekar et al. 2023)](https://doi.org/10.48550/arXiv.2306.11644).
## Disclaimer
* This dataset has been generated using ChatGPT 3.5, and you should check the legal status of AI-generated content in your jurisdiction before use. We cannot guarantee that it is free of IP restrictions. You should also make sure that your usage complies with the [OpenAI Terms of Use](https://openai.com/policies/terms-of-use), in so far as legally applicable.
* This dataset focuses narrowly on improving performance on the kinds of tasks described in the Human Eval benchmark. The Human Eval benchmark has limitations and does not necessarily fully represent the coding abilities of a large language model, and there is no way to guarantee that an improvement on this benchmark represents an overall improvement in programming performance. We present this data as is, without any guarantee of its usefulness in any specific context, to encourage research that might be inspired by our method.
## Synthetic exercise creation
Model distillation is the process of transferring some of the skilled performance of large models on specific classes of tasks to significantly smaller models. The purpose is to get performance comparable to the larger model, but at a fraction of the cost and at vastly quicker speed. The general outline of this strategy is described (without technical implementation details) in [Textbooks Are All You Need](https://doi.org/10.48550/arXiv.2306.11644).
Key to the distillation process is the creation of synthetic data, generated by the larger AI model, to train the smaller model. We have applied this approach to Python programming tasks and are publishing a summary of our methods here along with the synthetic dataset.
For fuller details and implementation code, see the [related GitHub repository](https://github.com/jina-ai/textbook).
### Diversity
The main problem with model-generated synthetic data is its diversity. If we had constructed this dataset by giving ChatGPT 3.5 the same prompt several hundred thousand times, we would get many very similar, if not functionally identical, results. This would reduce the usefulness of the dataset for training. In principle, one might solve the problem by filtering the results for near duplicates, but this is a non-trivial problem, and even if it could be solved, it would be a wasteful and potentially expensive use of the larger model.
And even then, we could not be sure the examples adequately covered the topic. To solve this problem, we introduced a novel scheme for systematically prompting large language models to produce diverse examples.
### Using a topic tree to build diverse prompts
We constructed a hierarchical model of subjects in Python programming, i.e. a topic tree. First, we manually identified 42 general topic areas in Python knowledge, for example, _data structures_ and _sorting algorithms_. We asked an LLM to propose 10 subtopics for each, and then for each of those 420 fine-grained topics, we asked the LLM to generate 5 even more fine-grained sub-subtopics. This resulted in roughly 2000 very fine-grained topics.
We generated prompts by randomly selecting two of those roughly two thousand topics and combining them:
```
Create a code completion exercise on the intersection of {topic 1} and {topic 2}.
```
To increase randomness and diversity in the results, we also constructed a list of 40 professions, like _economist_, _engineer_, and _social worker_, and added them to the prompt:
```
Create a code completion exercise on the intersection of {topic 1} and {topic 2}.
Write it for a {profession}.
```
In principle, there are approximately two million possible pairs of topics, and with 40 possible professions, this yields 80 million unique prompts. If the response to each prompt averages 100 tokens, our method can generate an 8-billion-token synthetic dataset while maintaining a high degree of diversity. The roughly 120,000 exercises published here are a small random subset of what is possible.
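The combination scheme above can be sketched in a few lines. The miniature topic and profession lists below are hypothetical stand-ins for the roughly 2,000 fine-grained topics and 40 professions described in the text, not the actual lists used:

```python
import itertools
import math
import random

# Hypothetical miniature lists standing in for the ~2,000 fine-grained
# topics and 40 professions described above.
topics = ["data structures", "sorting algorithms", "file I/O", "regular expressions"]
professions = ["economist", "engineer", "social worker"]

def make_prompt(topic_a: str, topic_b: str, profession: str) -> str:
    return (
        f"Create a code completion exercise on the intersection of "
        f"{topic_a} and {topic_b}. Write it for a {profession}."
    )

# Every unordered topic pair combined with every profession.
pairs = list(itertools.combinations(topics, 2))
prompts = [make_prompt(a, b, p) for a, b in pairs for p in professions]
assert len(prompts) == math.comb(len(topics), 2) * len(professions)

# At full scale: C(2000, 2) pairs * 40 professions ~= 80 million prompts.
assert math.comb(2000, 2) * 40 == 79_960_000

print(random.choice(prompts))
```

With n topics and k professions the scheme yields C(n, 2) × k unique prompts, which is where the ~80 million figure comes from.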
## Credits
This dataset was developed at [Jina.ai](https://jina.ai/) |
saahith/synthetic_with_val | 2023-08-19T20:06:03.000Z | [
"region:us"
] | saahith | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 316158980.0
num_examples: 405
- name: validation
num_bytes: 67400894.0
num_examples: 86
- name: test
num_bytes: 69350700.0
num_examples: 88
download_size: 347775630
dataset_size: 452910574.0
---
# Dataset Card for "synthetic_with_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Linhz/qg_vinewsqa | 2023-08-24T16:23:00.000Z | [
"region:us"
] | Linhz | null | null | null | 0 | 32 | Entry not found |
OdiaGenAI/odia_master_data_llama2 | 2023-09-21T18:15:39.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:or",
"license:cc-by-nc-sa-4.0",
"region:us"
] | OdiaGenAI | null | null | null | 0 | 32 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- or
pretty_name: odia_master_data_llama2
size_categories:
- 100K<n<1M
---
# Dataset Card for odia_master_data_llama2
## Dataset Description
- **Homepage: https://www.odiagenai.org/**
- **Repository: https://github.com/shantipriyap/OdiaGenAI**
- **Point of Contact: Shantipriya Parida, and Sambit Sekhar**
### Dataset Summary
This dataset is a mix of Odia instruction sets translated from open-source instruction sets and Odia domain knowledge instruction sets.
The Odia instruction sets used are:
* odia_domain_context_train_v1
* dolly-odia-15k
* OdiEnCorp_translation_instructions_25k
* gpt-teacher-roleplay-odia-3k
* Odia_Alpaca_instructions_52k
* hardcode_odia_qa_105
In this dataset, the Odia instruction, input, and output strings are available.
### Supported Tasks and Leaderboards
Large Language Model (LLM)
### Languages
Odia
## Dataset Structure
JSON
### Data Fields
output (string)
instruction (string)
input (string)
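As a sketch of how one record might be assembled into a single training string — the Alpaca-style template and the handling of an empty `input` below are illustrative assumptions, not a format prescribed by this card:

```python
def format_example(record: dict) -> str:
    """Join a record's instruction/input/output fields into one string.

    The Alpaca-style template here is only an illustration; the dataset
    card specifies the field names, not a prompt format.
    """
    if record.get("input"):
        return (
            f"Instruction: {record['instruction']}\n"
            f"Input: {record['input']}\n"
            f"Response: {record['output']}"
        )
    # Records with an empty input get a shorter two-part template.
    return (
        f"Instruction: {record['instruction']}\n"
        f"Response: {record['output']}"
    )

example = {"instruction": "...", "input": "", "output": "..."}
print(format_example(example))
```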
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository useful, please consider giving 👏 and citing:
```
@misc{odia_master_data_llama2,
author = {Shantipriya Parida and Sambit Sekhar and Aisha Asif and Subham Pradhan and Guneet Singh Kohli and Swateek Jena},
title = {Large Odia Instruction Set for LlaMA2 Finetuning},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida (Silo AI, Helsinki, Finland)
- Sambit Sekhar (Odia Generative AI, Bhubaneswar, India)
- Aisha Asif (KIIT, University, Bhubaneswar, India)
- Subham Pradhan (Silicon Institute of Technology, Bhubaneswar, India)
- Guneet Singh Kohli (Thapar Institute of Engineering and Technology, India)
- Swateek Jena (RightSense Inc, USA)
|
dim/huggingartists_prompts | 2023-09-01T20:46:14.000Z | [
"region:us"
] | dim | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: song
dtype: string
splits:
- name: train
num_bytes: 121653811
num_examples: 64006
download_size: 57680864
dataset_size: 121653811
---
# Dataset Card for "huggingartists_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Admin08077/Taxonomy | 2023-09-08T16:54:40.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",... | Admin08077 | null | null | null | 0 | 32 | ---
license: other
task_categories:
- token-classification
- text-classification
- table-question-answering
- question-answering
- zero-shot-classification
- translation
- summarization
- conversational
- feature-extraction
- text-generation
- text2text-generation
- sentence-similarity
- audio-classification
- fill-mask
- text-to-speech
- automatic-speech-recognition
- voice-activity-detection
- depth-estimation
- audio-to-audio
- image-classification
- image-segmentation
- object-detection
- text-to-image
- image-to-text
- image-to-image
- unconditional-image-generation
- reinforcement-learning
- robotics
- tabular-classification
- video-classification
- tabular-to-text
- tabular-regression
- multiple-choice
- table-to-text
- text-retrieval
- time-series-forecasting
- text-to-video
- visual-question-answering
- zero-shot-image-classification
- graph-ml
language:
- en
tags:
- finance
- quantum Banking
- '#U'
- XBRL
- 'TAXONOMY '
pretty_name: 'The Private Bank Taxonomy '
size_categories:
- n>1T
---
## API Calls
If you wish to programmatically fetch the Autonomous Private Banking Taxonomy dataset, you can do so via the following curl commands:
```bash
# Fetch rows of the dataset
curl -X GET "https://datasets-server.huggingface.co/rows?dataset=Admin08077%2FTaxonomy&config=default&split=train&offset=0&limit=100"
# Get dataset splits
curl -X GET "https://datasets-server.huggingface.co/splits?dataset=Admin08077%2FTaxonomy"
# Download the dataset in Parquet format
curl -X GET "https://huggingface.co/api/datasets/Admin08077/Taxonomy/parquet/default/train"
```
To clone the dataset repository, make sure you have git-lfs installed. Then run:
```bash
git lfs install
git clone https://huggingface.co/datasets/Admin08077/Taxonomy
```
If you want to clone without large files, you can use:
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/Admin08077/Taxonomy
```
### Python Code to Load Dataset
If you are using Python, you can easily load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Admin08077/Taxonomy")
```
## Citation
If you use this dataset in your research or project, please cite it using the following BibTeX entry:
```bibtex
@misc{james_burvel_o'callaghan_iii_2023,
author = {James Burvel O'Callaghan III},
title = {Taxonomy (Revision 9e2a198)},
year = 2023,
url = {https://huggingface.co/datasets/Admin08077/Taxonomy},
doi = {10.57967/hf/1070},
publisher = {Hugging Face}
}
``` |
shengqin/web-attacks | 2023-09-18T10:52:41.000Z | [
"region:us"
] | shengqin | null | null | null | 1 | 32 | Entry not found |
jxie/epsilon-normalized | 2023-09-05T22:57:16.000Z | [
"region:us"
] | jxie | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: inputs
sequence:
sequence: float64
- name: label
dtype: int64
splits:
- name: train
num_bytes: 9604800000
num_examples: 400000
- name: test
num_bytes: 2401200000
num_examples: 100000
download_size: 6279601264
dataset_size: 12006000000
---
# Dataset Card for "epsilon-normalized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
polymer/dolphin-only-gpt-4 | 2023-09-06T05:10:58.000Z | [
"task_categories:text-generation",
"license:apache-2.0",
"region:us"
] | polymer | null | null | null | 2 | 32 | ---
license: apache-2.0
task_categories:
- text-generation
duplicated_from: ehartford/dolphin
---
Dolphin 🐬
https://erichartford.com/dolphin
## Dataset details
This dataset is an attempt to replicate the results of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
Our dataset consists of:
- ~1 million FLANv2 instructions augmented with GPT-4 completions (flan1m-alpaca-uncensored.jsonl)
- ~3.5 million FLANv2 instructions augmented with GPT-3.5 completions (flan5m-alpaca-uncensored.jsonl)
We followed the submix and system prompt distribution outlined in the Orca paper, with a few exceptions: we included all 75k of the CoT data in the FLAN-1m dataset rather than sampling it. Also, we found that many items were duplicated, so we removed duplicates, resulting in 3.5m instructions in the ChatGPT dataset.
Then we filtered out instances of alignment, refusal, avoidance, and bias in order to produce an uncensored model upon which your personalized alignment LoRA can be layered.
Token distribution for GPT-3.5 completions

### Loading
```python
## load GPT-4 completions
dataset = load_dataset("ehartford/dolphin",data_files="flan1m-alpaca-uncensored.jsonl")
## load GPT-3.5 completions
dataset = load_dataset("ehartford/dolphin",data_files="flan5m-alpaca-uncensored.jsonl")
```
This dataset is licensed apache-2.0 for commercial or non-commercial use.
We currently plan to release Dolphin on:
- Xgen 7b 8k
- LLaMA 13b (Non-commercial)
- MPT 30b 8k
- LLaMA 33b (Non-commercial)
- Falcon 40b
- LLaMA 65b (Non-commercial)
The Dolphin models that are released will be subject to the license of the foundational model on which it is trained. (LLaMA releases will be non-commercial)
I would like to thank the motley crew of Open Source AI/ML engineers who have worked beside me in this endeavor, including:
- Wing "Caseus" Lian and NanoBit of OpenAccess AI Collective
- Rohan
- Teknium
- Pankaj Mathur
- Tom "TheBloke" Jobbins for quantizing and amplifying
- Special thanks to EdenCoder and chirper.ai for mentorship and financial sponsorship.
- Special thanks to Kilkonie for his very valued mentorship.
- All the other people in the Open Source AI community who have taught me and helped me along the way.
|
C-MTEB/T2Reranking_en2zh | 2023-09-09T16:11:54.000Z | [
"region:us"
] | C-MTEB | null | null | null | 1 | 32 | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: dev
num_bytes: 206929387
num_examples: 6129
download_size: 120405829
dataset_size: 206929387
---
# Dataset Card for "T2Reranking_en2zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceH4/scale_helpful_no_code | 2023-09-11T14:52:35.000Z | [
"region:us"
] | HuggingFaceH4 | null | null | null | 0 | 32 | ---
configs:
- config_name: default
data_files:
- split: test_holdout_rm
path: data/test_holdout_rm-*
- split: test_ift
path: data/test_ift-*
- split: test_rl
path: data/test_rl-*
- split: test_rm
path: data/test_rm-*
- split: train_ift
path: data/train_ift-*
- split: train_rl
path: data/train_rl-*
- split: train_rm
path: data/train_rm-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: margin
dtype: int64
- name: meta
struct:
- name: category
dtype: string
splits:
- name: test_holdout_rm
num_bytes: 14943441
num_examples: 1000
- name: test_ift
num_bytes: 12578103
num_examples: 779
- name: test_rl
num_bytes: 11816238
num_examples: 779
- name: test_rm
num_bytes: 15734081
num_examples: 1000
- name: train_ift
num_bytes: 230620014.0
num_examples: 14408
- name: train_rl
num_bytes: 216221093.0
num_examples: 14408
- name: train_rm
num_bytes: 241862703.25951442
num_examples: 15312
download_size: 417974725
dataset_size: 743775673.2595145
---
# Dataset Card for "scale_helpful_no_code"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceH4/scale_helpful_no_code_math | 2023-09-11T14:59:49.000Z | [
"region:us"
] | HuggingFaceH4 | null | null | null | 0 | 32 | ---
configs:
- config_name: default
data_files:
- split: test_holdout_rm
path: data/test_holdout_rm-*
- split: test_ift
path: data/test_ift-*
- split: test_rl
path: data/test_rl-*
- split: test_rm
path: data/test_rm-*
- split: train_ift
path: data/train_ift-*
- split: train_rl
path: data/train_rl-*
- split: train_rm
path: data/train_rm-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: margin
dtype: int64
- name: meta
struct:
- name: category
dtype: string
splits:
- name: test_holdout_rm
num_bytes: 14943441
num_examples: 1000
- name: test_ift
num_bytes: 12578103
num_examples: 779
- name: test_rl
num_bytes: 11816238
num_examples: 779
- name: test_rm
num_bytes: 15734081
num_examples: 1000
- name: train_ift
num_bytes: 230620014.0
num_examples: 14408
- name: train_rl
num_bytes: 216221093.0
num_examples: 14408
- name: train_rm
num_bytes: 227583452.7535974
num_examples: 14408
download_size: 413040890
dataset_size: 729496422.7535974
---
# Dataset Card for "scale_helpful_no_code_math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_v2_1000_0.90_id | 2023-09-13T08:14:26.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 122370541.34995362
num_examples: 70448
- name: validation
num_bytes: 1920159
num_examples: 1000
download_size: 5249130
dataset_size: 124290700.34995362
---
# Dataset Card for "squad_v2_1000_0.90_id"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
konverner/fr-address | 2023-09-13T09:36:35.000Z | [
"region:us"
] | konverner | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1399540
num_examples: 5500
download_size: 208333
dataset_size: 1399540
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "address_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wetdog/TUT-urban-acoustic-scenes-2018-development-16bit | 2023-09-19T21:43:49.000Z | [
"region:us"
] | wetdog | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: label
dtype: string
- name: audio
dtype: audio
- name: city
dtype: string
- name: location_id
dtype: string
splits:
- name: train
num_bytes: 11755015136.34
num_examples: 6122
- name: test
num_bytes: 4834872627.026
num_examples: 2518
download_size: 15955243030
dataset_size: 16589887763.366001
---
# Dataset Card for "TUT-urban-acoustic-scenes-2018-development-16bit"
## Dataset Description
- **Homepage: https://zenodo.org/record/1228142**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: Toni Heittola (toni.heittola@tut.fi, http://www.cs.tut.fi/~heittolt/)**
### Dataset Summary
The TUT Urban Acoustic Scenes 2018 development dataset consists of 10-second audio segments from 10 acoustic scenes:
Airport - airport
Indoor shopping mall - shopping_mall
Metro station - metro_station
Pedestrian street - street_pedestrian
Public square - public_square
Street with medium level of traffic - street_traffic
Travelling by a tram - tram
Travelling by a bus - bus
Travelling by an underground metro - metro
Urban park - park
Each acoustic scene has 864 segments (144 minutes of audio). The dataset contains 24 hours of audio in total. This is the 16-bit version of the original dataset.
The dataset was collected in Finland by Tampere University of Technology between 02/2018 and 03/2018.
The data collection has received funding from the European Research Council under the ERC Grant Agreement 637422 EVERYSOUND.
### Supported Tasks and Leaderboards
- `audio-classification`: The dataset can be used to train a model for acoustic scene classification, which consists in assigning one of the 10 scene labels to a 10-second audio segment. Success on this task is typically measured by classification accuracy.
## Dataset Structure
### Data Instances
```
{'file_name': 'audio/airport-barcelona-0-0-a.wav',
'label': 'airport',
'audio': {'path': 'airport-barcelona-0-0-a.wav',
'array': array([-2.13623047e-04, -1.37329102e-04, -2.13623047e-04, ...,
3.05175781e-05, -6.10351562e-05, -6.10351562e-05]),
'sampling_rate': 48000},
'city': 'barcelona',
'location_id': '0'}
```
### Data Fields
- `file_name`: name of the audio file
- `label`: acoustic scene label from the 10 class set,
- `location_id`: identifier of the recording location within the city (e.g. '0'),
- `city`: name of the city where the audio was recorded
Filenames of the dataset have the following pattern:
[scene label]-[city]-[location id]-[segment id]-[device id].wav
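Because scene labels use underscores rather than hyphens, the pattern can be split unambiguously on `-`. A small helper to recover the metadata fields from a segment filename (a sketch, not part of any official tooling for this dataset):

```python
def parse_segment_filename(file_name: str) -> dict:
    """Split a segment filename of the form
    [scene label]-[city]-[location id]-[segment id]-[device id].wav
    into its metadata fields. Scene labels contain underscores,
    never hyphens, so splitting on '-' is unambiguous.
    """
    # Drop any directory prefix (e.g. 'audio/') and the extension.
    stem = file_name.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    scene, city, location_id, segment_id, device_id = stem.split("-")
    return {
        "scene": scene,
        "city": city,
        "location_id": location_id,
        "segment_id": segment_id,
        "device_id": device_id,
    }

# Example taken from the data instance shown above.
info = parse_segment_filename("audio/airport-barcelona-0-0-a.wav")
print(info["scene"], info["city"])  # → airport barcelona
```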
### Data Splits
A suggested training/test partitioning of the development set is provided in order to make results reported with this dataset uniform. The partitioning is done such that segments recorded at the same location are included in the same subset - either training or testing - aiming for a 70/30 ratio between the number of segments in the training and test subsets while taking recording locations into account and selecting the closest available option.
| Scene class | Train / Segments | Train / Locations | Test / Segments | Test / Locations |
| ------------------ | ---------------- | ----------------- | --------------- | ---------------- |
| Airport | 599 | 15 | 265 | 7 |
| Bus | 622 | 26 | 242 | 10 |
| Metro | 603 | 20 | 261 | 9 |
| Metro station | 605 | 28 | 259 | 12 |
| Park | 622 | 18 | 242 | 7 |
| Public square | 648 | 18 | 216 | 6 |
| Shopping mall | 585 | 16 | 279 | 6 |
| Street, pedestrian | 617 | 20 | 247 | 8 |
| Street, traffic | 618 | 18 | 246 | 7 |
| Tram | 603 | 24 | 261 | 11 |
| **Total** | **6122** | **203** | **2518** | **83** |
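As a quick arithmetic check of the 70/30 target against the totals in the table (the counts are copied from the "Total" row above):

```python
# Segment counts from the "Total" row of the table above.
splits = {"train": 6122, "test": 2518}
total = sum(splits.values())  # 8640 segments overall

train_ratio = splits["train"] / total
print(f"train share: {train_ratio:.1%}")  # → train share: 70.9%

# The split targets 70/30 while keeping every recording location in a
# single subset, so the achieved ratio only approximates the target.
assert abs(train_ratio - 0.70) < 0.02
```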
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The dataset was recorded in six large European cities: Barcelona, Helsinki, London, Paris, Stockholm, and Vienna. For all acoustic scenes, audio was captured in multiple locations: different streets, different parks, different shopping malls. In each location, multiple 2-3 minute long audio recordings were captured in a few slightly different positions (2-4) within the selected location. The collected audio material was cut into 10-second segments.
The equipment used for recording consists of a binaural [Soundman OKM II Klassik/studio A3](http://www.soundman.de/en/products/) electret in-ear microphone and a [Zoom F8](https://www.zoom.co.jp/products/handy-recorder/zoom-f8-multitrack-field-recorder) audio recorder using a 48 kHz sampling rate and 24-bit resolution. During recording, the microphones were worn in the recording person's ears, and head movement was kept to a minimum.
### Annotations
#### Annotation process
Post-processing of the recorded audio addresses the privacy of recorded individuals and possible errors in the recording process. Some interference from mobile phones is audible, but it is considered part of the real-world recording process.
#### Who are the annotators?
* Ronal Bejarano Rodriguez
* Eemi Fagerlund
* Aino Koskimies
* Toni Heittola
### Personal and Sensitive Information
The material was screened for content, and segments containing close microphone conversation were eliminated.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Toni Heittola (toni.heittola@tut.fi, http://www.cs.tut.fi/~heittolt/)
Annamaria Mesaros (annamaria.mesaros@tut.fi, http://www.cs.tut.fi/~mesaros/)
Tuomas Virtanen (tuomas.virtanen@tut.fi, http://www.cs.tut.fi/~tuomasv/)
### Licensing Information
Copyright (c) 2018 Tampere University of Technology and its licensors
All rights reserved.
Permission is hereby granted, without written agreement and without license or royalty
fees, to use and copy the TUT Urban Acoustic Scenes 2018 (“Work”) described in this document
and composed of audio and metadata. This grant is only for experimental and non-commercial
purposes, provided that the copyright notice in its entirety appear in all copies of this Work,
and the original source of this Work, (Audio Research Group from Laboratory of Signal
Processing at Tampere University of Technology),
is acknowledged in any publication that reports research using this Work.
Any commercial use of the Work or any part thereof is strictly prohibited.
Commercial use include, but is not limited to:
- selling or reproducing the Work
- selling or distributing the results or content achieved by use of the Work
- providing services by using the Work.
IN NO EVENT SHALL TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS LICENSORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE
OF THIS WORK AND ITS DOCUMENTATION, EVEN IF TAMPERE UNIVERSITY OF TECHNOLOGY OR ITS
LICENSORS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
TAMPERE UNIVERSITY OF TECHNOLOGY AND ALL ITS LICENSORS SPECIFICALLY DISCLAIMS ANY
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE. THE WORK PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND
THE TAMPERE UNIVERSITY OF TECHNOLOGY HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT,
UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
### Citation Information
[](https://doi.org/10.5281/zenodo.1228142)
### Contributions
Thanks to [@wetdog](https://github.com/wetdog) for adding this dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minh21/COVID-QA-for-sentence-transformer | 2023-09-18T07:25:51.000Z | [
"region:us"
] | minh21 | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: is_impossible
dtype: bool
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 1184569
num_examples: 1615
- name: test
num_bytes: 144867
num_examples: 202
- name: validation
num_bytes: 147532
num_examples: 202
download_size: 808259
dataset_size: 1476968
---
# Dataset Card for "COVID-QA-for-sentence-transformer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_context_train_10_eval_10 | 2023-09-19T09:29:46.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 351990
num_examples: 150
- name: validation
num_bytes: 101044
num_examples: 48
download_size: 101367
dataset_size: 453034
---
# Dataset Card for "squad_context_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
isashap/resume-dataset-w-context | 2023-09-21T21:21:11.000Z | [
"region:us"
] | isashap | null | null | null | 0 | 32 | |
kewu93/three_styles_prompted_500 | 2023-09-21T06:27:41.000Z | [
"region:us"
] | kewu93 | null | null | null | 0 | 32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 34576478.8
num_examples: 1200
- name: val
num_bytes: 8468533.6
num_examples: 300
download_size: 42069788
dataset_size: 43045012.4
---
# Dataset Card for "three_styles_prompted_500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Captluke/llama2-wiki-v3 | 2023-09-21T10:50:19.000Z | [
"language:en",
"region:us"
] | Captluke | null | null | null | 0 | 32 | ---
language:
- en
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
JAYASWAROOP/trail1 | 2023-09-22T05:03:51.000Z | [
"task_categories:question-answering",
"region:us"
] | JAYASWAROOP | null | null | null | 0 | 32 | ---
task_categories:
- question-answering
--- |
luqman8001/Hourly_London_Bexley_01-01-2010_TO_01-01-2015 | 2023-09-21T15:55:11.000Z | [
"region:us"
] | luqman8001 | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: Ozone
dtype: float32
- name: Nitric oxide
dtype: float32
- name: Nitrogen dioxide
dtype: float32
- name: Nitrogen oxides as nitrogen dioxide
dtype: float32
- name: Sulphur dioxide
dtype: float32
- name: Carbon monoxide
dtype: float32
- name: PM10 particulate matter (Hourly measured)
dtype: float32
- name: Non-volatile PM10 (Hourly measured)
dtype: float32
- name: Volatile PM10 (Hourly measured)
dtype: float32
- name: PM2.5 particulate matter (Hourly measured)
dtype: float32
- name: Non-volatile PM2.5 (Hourly measured)
dtype: float32
- name: Volatile PM2.5 (Hourly measured)
dtype: float32
- name: Modelled Wind Direction
dtype: float32
- name: Modelled Wind Speed
dtype: float32
- name: Modelled Temperature
dtype: float32
- name: Datetime
dtype: timestamp[ns]
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1152084
num_examples: 15159
download_size: 565446
dataset_size: 1152084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Hourly_London_Bexley_01-01-2010_TO_01-01-2015"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mtc/swisstext23-20min-annotation-data-with-train-set | 2023-09-26T17:57:13.000Z | [
"region:us"
] | mtc | null | null | null | 0 | 32 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: int64
- name: titleHeader
dtype: string
- name: title
dtype: string
- name: lead
dtype: string
- name: article
dtype: string
- name: summary
dtype: string
- name: article_sentence_count
dtype: int64
- name: summary_sentence_count
dtype: int64
- name: url
dtype: string
- name: paragraphs
sequence: string
splits:
- name: test
num_bytes: 997331
num_examples: 200
- name: train
num_bytes: 11963733
num_examples: 2331
download_size: 8342038
dataset_size: 12961064
---
# Dataset Card for "swisstext23-20min-annotation-data-with-train-set"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_rare_v4_train_30_eval_10 | 2023-09-27T16:18:06.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 546548
num_examples: 368
- name: validation
num_bytes: 49683
num_examples: 50
download_size: 104892
dataset_size: 596231
---
# Dataset Card for "squad_rare_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_no_rare_strict_v4_train_10_eval_10 | 2023-09-28T15:08:56.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 199078
num_examples: 138
- name: validation
num_bytes: 48145
num_examples: 50
download_size: 63640
dataset_size: 247223
---
# Dataset Card for "squad_no_rare_strict_v4_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/eval_tag_squad_v8 | 2023-10-05T16:55:19.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 13020105
num_examples: 10570
- name: validation
num_bytes: 13020105
num_examples: 10570
download_size: 5664930
dataset_size: 26040210
---
# Dataset Card for "eval_tag_squad_v8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/eval_tag_squad_v9 | 2023-10-05T16:55:32.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 32 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 13273785
num_examples: 10570
- name: validation
num_bytes: 13273785
num_examples: 10570
download_size: 5722530
dataset_size: 26547570
---
# Dataset Card for "eval_tag_squad_v9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vishnupriyavr/wiki-movie-plots-with-summaries | 2023-10-08T11:58:09.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | vishnupriyavr | null | null | null | 0 | 32 | ---
license:
- cc-by-sa-4.0
converted_from: kaggle
kaggle_id: gabrieltardochi/wikipedia-movie-plots-with-plot-summaries
---
# Dataset Card for Wikipedia Movie Plots with AI Plot Summaries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/gabrieltardochi/wikipedia-movie-plots-with-plot-summaries
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Context
Wikipedia Movies Plots dataset by JustinR ( https://www.kaggle.com/jrobischon/wikipedia-movie-plots )
### Content
Everything is the same as in https://www.kaggle.com/jrobischon/wikipedia-movie-plots
### Acknowledgements
Please, go upvote https://www.kaggle.com/jrobischon/wikipedia-movie-plots dataset, since this is 100% based on that.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@gabrieltardochi](https://kaggle.com/gabrieltardochi)
### Licensing Information
The license for this dataset is cc-by-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
facat/sci-llm-part-rev | 2023-10-07T13:55:46.000Z | [
"region:us"
] | facat | null | null | null | 0 | 32 | ---
configs:
- config_name: default
data_files:
- split: gpt1
path: data/gpt1-*
- split: gpt2
path: data/gpt2-*
- split: gpt3
path: data/gpt3-*
- split: gpt4
path: data/gpt4-*
- split: gpt5
path: data/gpt5-*
- split: gpt6
path: data/gpt6-*
- split: han_40k
path: data/han_40k-*
- split: test
path: data/test-*
- split: test2
path: data/test2-*
dataset_info:
features:
- name: prompt
dtype: string
- name: context
dtype: string
- name: chosen
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
splits:
- name: gpt1
num_bytes: 130420316
num_examples: 22113
- name: gpt2
num_bytes: 264545680
num_examples: 44859
- name: gpt3
num_bytes: 98018603
num_examples: 16648
- name: gpt4
num_bytes: 309111447
num_examples: 52813
- name: gpt5
num_bytes: 99277151
num_examples: 16795
- name: gpt6
num_bytes: 110054529
num_examples: 18325
- name: han_40k
num_bytes: 236235210
num_examples: 40807
- name: test
num_bytes: 2214599
num_examples: 500
- name: test2
num_bytes: 1111116
num_examples: 200
download_size: 608607150
dataset_size: 1250988651
---
# Dataset Card for "sci-llm-part-rev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
consumer-finance-complaints | 2023-01-25T14:28:37.000Z | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"region:us"
] | null | null | \ | null | 10 | 31 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: consumer-finance-complaints
dataset_info:
features:
- name: Date Received
dtype: timestamp[s]
- name: Product
dtype:
class_label:
names:
'0': Credit reporting, credit repair services, or other personal consumer
reports
'1': Debt collection
'2': Mortgage
'3': Credit card or prepaid card
'4': Checking or savings account
'5': Credit reporting
'6': Student loan
'7': Money transfer, virtual currency, or money service
'8': Credit card
'9': Vehicle loan or lease
'10': Bank account or service
'11': Payday loan, title loan, or personal loan
'12': Consumer Loan
'13': Payday loan
'14': Money transfers
'15': Prepaid card
'16': Other financial service
'17': Virtual currency
- name: Sub Product
dtype:
class_label:
names:
'0': Credit reporting
'1': General-purpose credit card or charge card
'2': Checking account
'3': Other debt
'4': Second mortgage
'5': Conventional home mortgage
'6': I do not know
'7': Credit card debt
'8': Medical debt
'9': Federal student loan servicing
'10': FHA mortgage
'11': Conventional fixed mortgage
'12': Loan
'13': Other (i.e. phone, health club, etc.)
'14': Store credit card
'15': Installment loan
'16': Credit card
'17': Medical
'18': Mobile or digital wallet
'19': Private student loan
'20': Non-federal student loan
'21': Domestic (US) money transfer
'22': VA mortgage
'23': Vehicle loan
'24': Auto debt
'25': Payday loan
'26': Conventional adjustable mortgage (ARM)
'27': Other personal consumer report
'28': Payday loan debt
'29': Savings account
'30': Virtual currency
'31': Other bank product/service
'32': Other type of mortgage
'33': Other banking product or service
'34': Other mortgage
'35': International money transfer
'36': Lease
'37': General-purpose prepaid card
'38': Home equity loan or line of credit (HELOC)
'39': Government benefit card
'40': Mortgage debt
'41': Personal line of credit
'42': Home equity loan or line of credit
'43': Federal student loan debt
'44': Private student loan debt
'45': Credit repair services
'46': Title loan
'47': Auto
'48': Vehicle lease
'49': Mortgage
'50': Reverse mortgage
'51': General purpose card
'52': CD (Certificate of Deposit)
'53': Federal student loan
'54': Payroll card
'55': Debt settlement
'56': Check cashing service
'57': Traveler's check or cashier's check
'58': Gift card
'59': (CD) Certificate of deposit
'60': Money order
'61': Foreign currency exchange
'62': Refund anticipation check
'63': Gift or merchant card
'64': Cashing a check without an account
'65': ID prepaid card
'66': Mobile wallet
'67': Government benefit payment card
'68': Pawn loan
'69': Other special purpose card
'70': Check cashing
'71': Credit repair
'72': Traveler’s/Cashier’s checks
'73': Transit card
'74': Student prepaid card
'75': Electronic Benefit Transfer / EBT card
'76': ''
- name: Issue
dtype: string
- name: Sub Issue
dtype: string
- name: Complaint Text
dtype: string
- name: Company Public Response
dtype: string
- name: Company
dtype: string
- name: State
dtype: string
- name: Zip Code
dtype: string
- name: Tags
dtype:
class_label:
names:
'0': Servicemember
'1': Older American
'2': Older American, Servicemember
'3': ''
- name: Consumer Consent Provided
dtype: string
- name: Submitted via
dtype: string
- name: Date Sent To Company
dtype: string
- name: Company Response To Consumer
dtype: string
- name: Timely Response
dtype: string
- name: Consumer Disputed
dtype: string
- name: Complaint ID
dtype: string
splits:
- name: train
num_bytes: 1605177353
num_examples: 2455765
download_size: 404187716
dataset_size: 1605177353
---
# Dataset Card for Consumer Finance Complaints
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.consumerfinance.gov/data-research/consumer-complaints/
- **Repository:**
https://github.com/cfpb/consumerfinance.gov
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This database is a collection of complaints about consumer financial products and services that the CFPB sent to companies for response. Complaints are published after the company responds, confirming a commercial relationship with the consumer, or after 15 days, whichever comes first. Complaints referred to other regulators, such as complaints about depository institutions with less than $10 billion in assets, are not published in the Consumer Complaint Database. The database generally updates daily.
Complaints can give us insights into problems people are experiencing in the marketplace and help us regulate consumer financial products and services under existing federal consumer financial laws, enforce those laws judiciously, and educate and empower consumers to make informed financial decisions. We also report on complaint trends annually in Consumer Response’s Annual Report to Congress.
### Supported Tasks and Leaderboards
Text Classification Tasks
| Task | Label Name | Description | SOTA |
| ----------- | ----------- |----------- | ----------- |
| Text Classification | Product | Predict the related product of a complaint | N/A |
| Text Classification | Sub-Product | Predict the related sub-product of a complaint | N/A |
| Text Classification | Tags | Predict whether a complaint has been made by someone elderly or a service person | N/A |
### Languages
English
## Dataset Structure
### Data Instances
This dataset is a point-in-time extract of the database; the database increases in size every day.
An example of 'train' looks as follows.
```
{
"Complaint ID": "4511031",
"Product": "Credit reporting, credit repair services, or other personal consumer reports",
"Sub Issue": "Credit inquiries on your report that you don't recognize",
"Consumer Disputed": "N/A",
"Sub Product": "Credit reporting",
"State": "TX",
"Tags": "Older American, Servicemember",
"Company Public Response": "",
"Zip Code": "75202",
"Issue": "Improper use of your report",
"Submitted via": "Web",
"Company Response To Consumer": "Closed with explanation",
"Complaint Text": "I am XXXX XXXX and I am submitting this complaint myself and there is no third party involved. Despite the multiple previous written requests, the unverified inquiries listed below still remain on my credit report in violation of Federal Law. The Equifax Credit Bureau failed to comply with Fair Credit Reporting Act, XXXX XXXX sections XXXX within the time set forth by law and continued reporting of erroneous information which now, given all my attempts to address it directly with the creditor, as willful negligence and non-compliance with federal statutes. PLEASE REMOVE THE FOLLOWING INQUIRIES COMPLETELY FROM MY CREDIT REPORT : XXXX CARD-Date of inquiry XX/XX/XXXX XXXX CARD-Date of inquiry XX/XX/XXXX",
"Date Received": "07-02-2021",
"Company": "EQUIFAX, INC.",
"Consumer Consent Provided": "Consent not provided",
"Timely Response": "Yes",
"Date Sent To Company": "2021-07-02"
}
```
### Data Fields
| Field | Description | Data type | Notes |
| ----------- | ----------- | ----------- | ----------- |
| Date received | The date the CFPB received the complaint | date & time | |
| Product | The type of product the consumer identified in the complaint | plain text | This field is a categorical variable. |
| Sub-product | The type of sub-product the consumer identified in the complaint | plain text | This field is a categorical variable. Not all Products have Sub-products. |
| Issue | The issue the consumer identified in the complaint | plain text | This field is a categorical variable. Possible values are dependent on Product. |
| Sub-issue | The sub-issue the consumer identified in the complaint | plain text | This field is a categorical variable. Possible values are dependent on product and issue. Not all Issues have corresponding Sub-issues. |
| Consumer complaint narrative | Consumer complaint narrative is the consumer-submitted description of "what happened" from the complaint. Consumers must opt-in to share their narrative. We will not publish the narrative unless the consumer consents, and consumers can opt-out at any time. The CFPB takes reasonable steps to scrub personal information from each complaint that could be used to identify the consumer. | plain text | Consumers' descriptions of what happened are included if consumers consent to publishing the description and after we take steps to remove personal information. |
| Company public response | The company's optional, public-facing response to a consumer's complaint. Companies can choose to select a response from a pre-set list of options that will be posted on the public database. For example, "Company believes complaint is the result of an isolated error." | plain text | Companies' public-facing responses to complaints are included if companies choose to publish one. Companies may select a public response from a set list of options as soon as they respond to the complaint, but no later than 180 days after the complaint was sent to the company for response. |
| Company | The complaint is about this company | plain text | This field is a categorical variable. |
| State | The state of the mailing address provided by the consumer | plain text | This field is a categorical variable. |
| ZIP code | The mailing ZIP code provided by the consumer | plain text | This field may: i) include the first five digits of a ZIP code; ii) include the first three digits of a ZIP code (if the consumer consented to publication of their complaint narrative); or iii) be blank (if ZIP codes have been submitted with non-numeric values, if there are less than 20,000 people in a given ZIP code, or if the complaint has an address outside of the United States). |
| Tags | Data that supports easier searching and sorting of complaints submitted by or on behalf of consumers. | plain text | For example, complaints where the submitter reports the age of the consumer as 62 years or older are tagged ‘Older American.’ Complaints submitted by or on behalf of a servicemember or the spouse or dependent of a servicemember are tagged ‘Servicemember.’ Servicemember includes anyone who is active duty, National Guard, or Reservist, as well as anyone who previously served and is a Veteran or retiree. |
| Consumer consent provided? | Identifies whether the consumer opted in to publish their complaint narrative. We do not publish the narrative unless the consumer consents and consumers can opt-out at any time. | plain text | This field shows whether a consumer provided consent to publish their complaint narrative |
| Submitted via | How the complaint was submitted to the CFPB | plain text | This field is a categorical variable. |
| Date sent to company | The date the CFPB sent the complaint to the company | date & time | |
| Company response to consumer | This is how the company responded. For example, "Closed with explanation." | plain text | This field is a categorical variable. |
| Timely response? | Whether the company gave a timely response | plain text | yes/no |
| Consumer disputed? | Whether the consumer disputed the company’s response | plain text | Yes / No / N/A: The Bureau discontinued the consumer dispute option on April 24, 2017. |
| Complaint ID | The unique identification number for a complaint | number | |
### Data Splits
This dataset contains only a TRAIN split; it can be further divided into train, test, and validation subsets with the `datasets` library.
## Dataset Creation
### Curation Rationale
Open sourcing customer complaints
### Source Data
https://cfpb.github.io/api/ccdb/
#### Initial Data Collection and Normalization
This database is maintained by the Consumer Financial Protection Bureau
#### Who are the source language producers?
English
### Annotations
#### Annotation process
User submitted to the CFPB
#### Who are the annotators?
N/A
### Personal and Sensitive Information
All PII data has been anonymised
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
This database is not a statistical sample of consumers’ experiences in the marketplace. Complaints are not necessarily representative of all consumers’ experiences, and complaints do not constitute “information” for purposes of the Information Quality Act.
Complaint volume should be considered in the context of company size and/or market share. For example, companies with more customers may have more complaints than companies with fewer customers. We encourage you to pair complaint data with public and private data sets for additional context.
The Bureau publishes the consumer’s narrative description of his or her experience if the consumer opts to share it publicly and after the Bureau takes steps to remove personal information. We don’t verify all the allegations in complaint narratives. Unproven allegations in consumer narratives should be regarded as opinion, not fact. We do not adopt the views expressed and make no representation that consumers’ allegations are accurate, clear, complete, or unbiased in substance or presentation. Users should consider what conclusions may be fairly drawn from complaints alone.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
https://cfpb.github.io/api/ccdb/
### Licensing Information
Creative Commons Zero v1.0 Universal
### Citation Information
N/A
### Contributions
Thanks to [@kayvane1](https://github.com/kayvane1) for adding this dataset and to the [Consumer Financial Protection Bureau](https://cfpb.github.io/) for publishing it. |
nchlt | 2023-01-25T14:41:21.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:af",
"language:nr",
"language:nso",
"langu... | null | The development of linguistic resources for use in natural language processing is of utmost importance for the continued growth of research and development in the field, especially for resource-scarce languages. In this paper we describe the process and challenges of simultaneously developing multiple linguistic resources for ten of the official languages of South Africa. The project focussed on establishing a set of foundational resources that can foster further development of both resources and technologies for the NLP industry in South Africa. The development efforts during the project included creating monolingual unannotated corpora, of which a subset of the corpora for each language was annotated on token, orthographic, morphological and morphosyntactic layers. The annotated subsets include both development and test sets and were used in the creation of five core technologies, viz. a tokeniser, sentenciser, lemmatiser, part-of-speech tagger and morphological decomposer for each language. We report on the quality of these tools for each language and provide some more context of the importance of the resources within the South African context. | @inproceedings{eiselen2014developing,
title={Developing Text Resources for Ten South African Languages.},
author={Eiselen, Roald and Puttkammer, Martin J},
booktitle={LREC},
pages={3698--3703},
year={2014}
} | null | 4 | 31 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- af
- nr
- nso
- ss
- tn
- ts
- ve
- xh
- zu
license:
- cc-by-2.5
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: NCHLT
dataset_info:
- config_name: af
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3955069
num_examples: 8961
download_size: 25748344
dataset_size: 3955069
- config_name: nr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3188781
num_examples: 9334
download_size: 20040327
dataset_size: 3188781
- config_name: xh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 2365821
num_examples: 6283
download_size: 14513302
dataset_size: 2365821
- config_name: zu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3951366
num_examples: 10955
download_size: 25097584
dataset_size: 3951366
- config_name: nso-sepedi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3322296
num_examples: 7116
download_size: 22077376
dataset_size: 3322296
- config_name: nso-sesotho
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 4427898
num_examples: 9471
download_size: 30421109
dataset_size: 4427898
- config_name: tn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3812339
num_examples: 7943
download_size: 25905236
dataset_size: 3812339
- config_name: ss
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3431063
num_examples: 10797
download_size: 21882224
dataset_size: 3431063
- config_name: ve
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3941041
num_examples: 8477
download_size: 26382457
dataset_size: 3941041
- config_name: ts
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 3941041
num_examples: 8477
download_size: 26382457
dataset_size: 3941041
---
# Dataset Card for NCHLT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [link](https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype_0=database&filtertype_1=title&filter_relational_operator_1=contains&filter_relational_operator_0=equals&filter_1=&filter_0=Monolingual+Text+Corpora%3A+Annotated&filtertype=project&filter_relational_operator=equals&filter=NCHLT+Text+II)
- **Repository:** []()
- **Paper:** []()
- **Leaderboard:** []()
- **Point of Contact:** []()
### Dataset Summary
The development of linguistic resources for use in natural language processing is of utmost importance for the continued growth of research and development in the field, especially for resource-scarce languages. In this paper we describe the process and challenges of simultaneously developing multiple linguistic resources for ten of the official languages of South Africa. The project focussed on establishing a set of foundational resources that can foster further development of both resources and technologies for the NLP industry in South Africa. The development efforts during the project included creating monolingual unannotated corpora, of which a subset of the corpora for each language was annotated on token, orthographic, morphological and morphosyntactic layers. The annotated subsets include both development and test sets and were used in the creation of five core technologies, viz. a tokeniser, sentenciser, lemmatiser, part-of-speech tagger and morphological decomposer for each language. We report on the quality of these tools for each language and provide some more context of the importance of the resources within the South African context.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
[More Information Needed]
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Martin.Puttkammer@nwu.ac.za
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{eiselen2014developing,
title={Developing Text Resources for Ten South African Languages.},
author={Eiselen, Roald and Puttkammer, Martin J},
booktitle={LREC},
pages={3698--3703},
year={2014}
}
```
### Contributions
Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset. |
clarin-pl/cst-wikinews | 2021-07-12T18:51:43.000Z | [
"region:us"
] | clarin-pl | CST Wikinews dataset. | null | null | 2 | 31 | Entry not found |
DebateLabKIT/aaac | 2022-10-24T16:25:56.000Z | [
"task_categories:summarization",
"task_categories:text-retrieval",
"task_categories:text-generation",
"task_ids:parsing",
"task_ids:text-simplification",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolin... | DebateLabKIT | null | null | null | 3 | 31 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text-retrieval
- text-generation
task_ids:
- parsing
- text-simplification
paperswithcode_id: aaac
pretty_name: Artificial Argument Analysis Corpus
language_bcp47:
- en-US
tags:
- argument-mining
- conditional-text-generation
- structure-prediction
---
# Dataset Card for Artificial Argument Analysis Corpus (AAAC)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Construction of the Synthetic Data](#construction-of-the-synthetic-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://debatelab.github.io/journal/deepa2.html
- **Repository:** None
- **Paper:** G. Betz, K. Richardson. *DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models*. https://arxiv.org/abs/2110.01509
- **Leaderboard:** None
### Dataset Summary
DeepA2 is a modular framework for deep argument analysis. DeepA2 datasets contain comprehensive logical reconstructions of informally presented arguments in short argumentative texts. This document describes two synthetic DeepA2 datasets for artificial argument analysis: AAAC01 and AAAC02.
```sh
# clone
git lfs clone https://huggingface.co/datasets/debatelab/aaac
```
```python
import pandas as pd
from datasets import Dataset
# loading train split as pandas df
df = pd.read_json("aaac/aaac01_train.jsonl", lines=True, orient="records")
# creating dataset from pandas df
Dataset.from_pandas(df)
```
### Supported Tasks and Leaderboards
The multi-dimensional datasets can be used to define various text-2-text tasks (see also [Betz and Richardson 2021](https://arxiv.org/abs/2110.01509)), for example:
* Premise extraction,
* Conclusion extraction,
* Logical formalization,
* Logical reconstruction.
### Languages
English.
## Dataset Structure
### Data Instances
The following histograms (number of dataset records with given property) describe and compare the two datasets AAAC01 (train split, N=16000) and AAAC02 (dev split, N=4000).
*(Histogram figures comparing AAAC01 / train split and AAAC02 / dev split are images and are not reproduced here.)*
### Data Fields
The following multi-dimensional example record (2-step argument with one implicit premise) illustrates the structure of the AAAC datasets.
#### argument_source
```
If someone was discovered in 'Moonlight', then they won't play the lead in 'Booksmart',
because being a candidate for the lead in 'Booksmart' is sufficient for not being an
Oscar-Nominee for a role in 'Eighth Grade'. Yet every BAFTA-Nominee for a role in 'The
Shape of Water' is a fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'.
And if someone is a supporting actor in 'Black Panther', then they could never become the
main actor in 'Booksmart'. Consequently, if someone is a BAFTA-Nominee for a role in
'The Shape of Water', then they are not a candidate for the lead in 'Booksmart'.
```
#### reason_statements
```json
[
{"text":"being a candidate for the lead in 'Booksmart' is sufficient for
not being an Oscar-Nominee for a role in 'Eighth Grade'","starts_at":96,
"ref_reco":2},
{"text":"every BAFTA-Nominee for a role in 'The Shape of Water' is a
fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'",
"starts_at":221,"ref_reco":4},
{"text":"if someone is a supporting actor in 'Black Panther', then they
could never become the main actor in 'Booksmart'","starts_at":359,
"ref_reco":5}
]
```
#### conclusion_statements
```json
[
{"text":"If someone was discovered in 'Moonlight', then they won't play the
lead in 'Booksmart'","starts_at":0,"ref_reco":3},
{"text":"if someone is a BAFTA-Nominee for a role in 'The Shape of Water',
then they are not a candidate for the lead in 'Booksmart'","starts_at":486,
"ref_reco":6}
]
```
#### distractors
`[]`
#### argdown_reconstruction
```
(1) If someone is a fan-favourite since 'Moonlight', then they are an Oscar-Nominee for a role in 'Eighth Grade'.
(2) If someone is a candidate for the lead in 'Booksmart', then they are not an Oscar-Nominee for a role in 'Eighth Grade'.
--
with hypothetical syllogism {variant: ["negation variant", "transposition"], uses: [1,2]}
--
(3) If someone is beloved for their role in 'Moonlight', then they don't audition in
'Booksmart'.
(4) If someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are a fan-favourite since 'Moonlight' or a supporting actor in 'Black Panther'.
(5) If someone is a supporting actor in 'Black Panther', then they don't audition in
'Booksmart'.
--
with generalized dilemma {variant: ["negation variant"], uses: [3,4,5]}
--
(6) If someone is a BAFTA-Nominee for a role in 'The Shape of Water', then they are not a
candidate for the lead in 'Booksmart'.
```
#### premises
```json
[
{"ref_reco":1,"text":"If someone is a fan-favourite since 'Moonlight', then
they are an Oscar-Nominee for a role in 'Eighth Grade'.","explicit":false},
{"ref_reco":2,"text":"If someone is a candidate for the lead in
'Booksmart', then they are not an Oscar-Nominee for a role in 'Eighth
Grade'.","explicit":true},
{"ref_reco":4,"text":"If someone is a BAFTA-Nominee for a role in 'The
Shape of Water', then they are a fan-favourite since 'Moonlight' or a
supporting actor in 'Black Panther'.","explicit":true},
{"ref_reco":5,"text":"If someone is a supporting actor in 'Black Panther',
then they don't audition in 'Booksmart'.","explicit":true}
]
```
#### premises_formalized
```json
[
{"form":"(x): ${F2}x -> ${F5}x","ref_reco":1},
{"form":"(x): ${F4}x -> ¬${F5}x","ref_reco":2},
{"form":"(x): ${F1}x -> (${F2}x v ${F3}x)","ref_reco":4},
{"form":"(x): ${F3}x -> ¬${F4}x","ref_reco":5}
]
```
#### conclusion
```json
[{"ref_reco":6,"text":"If someone is a BAFTA-Nominee for a role in 'The Shape
of Water', then they are not a candidate for the lead in 'Booksmart'.",
"explicit":true}]
```
#### conclusion_formalized
```json
[{"form":"(x): ${F1}x -> ¬${F4}x","ref_reco":6}]
```
#### intermediary_conclusions
```json
[{"ref_reco":3,"text":"If someone is beloved for their role in 'Moonlight',
then they don't audition in 'Booksmart'.","explicit":true}]
```
#### intermediary_conclusions_formalized
```json
[{"form":"(x): ${F2}x -> ¬${F4}x","ref_reco":3}]
```
#### plcd_subs
```json
{
"F1":"BAFTA-Nominee for a role in 'The Shape of Water'",
"F2":"fan-favourite since 'Moonlight'",
"F3":"supporting actor in 'Black Panther'",
"F4":"candidate for the lead in 'Booksmart'",
"F5":"Oscar-Nominee for a role in 'Eighth Grade'"
}
```
### Data Splits
Number of instances in the various splits:
| Split | AAAC01 | AAAC02 |
| :--- | :---: | :---: |
| TRAIN | 16,000 | 16,000 |
| DEV | 4,000 | 4,000 |
| TEST | 4,000 | 4,000 |
To correctly load a specific split, define `data_files` as follows:
```python
>>> data_files = {"train": "aaac01_train.jsonl", "eval": "aaac01_dev.jsonl", "test": "aaac01_test.jsonl"}
>>> dataset = load_dataset("debatelab/aaac", data_files=data_files)
```
## Dataset Creation
### Curation Rationale
Argument analysis refers to the interpretation and logical reconstruction of argumentative texts. Its goal is to make an argument transparent, so as to understand, appreciate and (possibly) criticize it. Argument analysis is a key critical thinking skill.
Here's a first example of an informally presented argument, **Descartes' Cogito**:
> I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind. (AT 7:25, CSM 2:16f)
And here's a second example, taken from the *Debater's Handbook*, **Pro Censorship**:
> Freedom of speech is never an absolute right but an aspiration. It ceases to be a right when it causes harm to others -- we all recognise the value of, for example, legislating against incitement to racial hatred. Therefore it is not the case that censorship is wrong in principle.
Given such texts, argument analysis aims at answering the following questions:
1. Does the text present an argument?
2. If so, how many?
3. What is the argument supposed to show (conclusion)?
4. What exactly are the premises of the argument?
* Which statements, explicit in the text, are not relevant for the argument?
* Which premises are required, but not explicitly stated?
5. Is the argument deductively valid, inductively strong, or simply fallacious?
To answer these questions, argument analysts **interpret** the text by (re-)constructing its argument in a standardized way (typically as a premise-conclusion list) and by making use of logical streamlining and formalization.
A reconstruction of **Pro Censorship** which answers the above questions is:
```argdown
(1) Freedom of speech is never an absolute right but an aspiration.
(2) Censorship is wrong in principle only if freedom of speech is an
absolute right.
--with modus tollens--
(3) It is not the case that censorship is wrong in principle
```
There are typically multiple, more or less different interpretations and logical reconstructions of an argumentative text. For instance, there exists an [extensive debate](https://plato.stanford.edu/entries/descartes-epistemology/) about how to interpret **Descartes' Cogito**, and scholars have advanced rival interpretation of the argument. An alternative reconstruction of the much simpler **Pro Censorship** might read:
```argdown
(1) Legislating against incitement to racial hatred is valuable.
(2) Legislating against incitement to racial hatred is an instance of censorship.
(3) If some instance of censorship is valuable, censorship is not wrong in principle.
-----
(4) Censorship is not wrong in principle.
(5) Censorship is wrong in principle if and only if freedom of speech is an absolute right.
-----
(6) Freedom of speech is not an absolute right.
(7) Freedom of speech is an absolute right or an aspiration.
--with disjunctive syllogism--
(8) Freedom of speech is an aspiration.
```
What are the main reasons for this kind of underdetermination?
* **Incompleteness.** Many relevant parts of an argument (statements, their function in the argument, inference rules, argumentative goals) are not stated in its informal presentation. The argument analyst must infer the missing parts.
* **Additional material.** Over and above what is strictly part of the argument, informal presentations typically contain further material: relevant premises are repeated in slightly different ways, further examples are added to illustrate a point, statements are contrasted with views by opponents, etc. It is the argument analyst's choice which of the presented material is really part of the argument.
* **Errors.** Authors may err in the presentation of an argument, confounding, e.g., necessary and sufficient conditions in stating a premise. Following the principle of charity, benevolent argument analysts correct such errors and have to choose one of the different ways of doing so.
* **Linguistic indeterminacy.** One and the same statement can be interpreted -- regarding its logical form -- in different ways.
* **Equivalence.** There are different natural language expressions for one and the same proposition.
AAAC datasets provide logical reconstructions of informal argumentative texts: Each record contains a source text to-be-reconstructed and further fields which describe an internally consistent interpretation of the text, notwithstanding the fact that there might be alternative interpretations of this very text.
### Construction of the Synthetic Data
Argument analysis starts with a text and reconstructs its argument (cf. [Motivation and Background](#curation-rationale)). In constructing our synthetic data, we invert this direction: We start by sampling a complete argument, construct an informal presentation, and provide further information that describes both the logical reconstruction and the informal presentation. More specifically, the construction of the data involves the following steps:
1. [Generation of valid symbolic inference schemes](#step-1-generation-of-symbolic-inference-schemes)
2. [Assembling complex ("multi-hop") argument schemes from symbolic inference schemes](#step-2-assembling-complex-multi-hop-argument-schemes-from-symbolic-inference-schemes)
3. [Creation of (precise and informal) natural-language argument](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes)
4. [Substitution of placeholders with domain-specific predicates and names](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)
5. [Creation of the argdown-snippet](#step-5-creation-of-the-argdown-snippet)
6. [Paraphrasing](#step-6-paraphrasing)
7. [Construction of a storyline for the argument source text](#step-7-construction-of-a-storyline-for-the-argument-source-text)
8. [Assembling the argument source text](#step-8-assembling-the-argument-source-text)
9. [Linking the precise reconstruction and the informal argumentative text](#step-9-linking-informal-presentation-and-formal-reconstruction)
#### Step 1: Generation of symbolic inference schemes
We construct the set of available inference schemes by systematically transforming the following 12 base schemes (6 from propositional and another 6 from predicate logic):
* modus ponens: `['Fa -> Gb', 'Fa', 'Gb']`
* chain rule: `['Fa -> Gb', 'Gb -> Hc', 'Fa -> Hc']`
* adjunction: `['Fa', 'Gb', 'Fa & Gb']`
* case analysis: `['Fa v Gb', 'Fa -> Hc', 'Gb -> Hc', 'Hc']`
* disjunctive syllogism: `['Fa v Gb', '¬Fa', 'Gb']`
* biconditional elimination: `['Fa <-> Gb', 'Fa -> Gb']`
* instantiation: `['(x): Fx -> Gx', 'Fa -> Ga']`
* hypothetical syllogism: `['(x): Fx -> Gx', '(x): Gx -> Hx', '(x): Fx -> Hx']`
* generalized biconditional elimination: `['(x): Fx <-> Gx', '(x): Fx -> Gx']`
* generalized adjunction: `['(x): Fx -> Gx', '(x): Fx -> Hx', '(x): Fx -> (Gx & Hx)']`
* generalized dilemma: `['(x): Fx -> (Gx v Hx)', '(x): Gx -> Ix', '(x): Hx -> Ix', '(x): Fx -> Ix']`
* generalized disjunctive syllogism: `['(x): Fx -> (Gx v Hx)', '(x): Fx -> ¬Gx', '(x): Fx -> Hx']`
(Regarding the propositional schemes, we allow for `a`=`b`=`c`.)
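The validity of each propositional base scheme can be checked by brute force over all truth-value assignments; a minimal sketch, with the connectives rewritten in Python syntax (an assumption of this sketch, not the dataset's own notation):

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Return True iff every assignment satisfying all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(eval(p, {}, env) for p in premises) and not eval(conclusion, {}, env):
            return False  # found a countermodel
    return True

# modus ponens: ['Fa -> Gb', 'Fa', 'Gb'], with 'P -> Q' rewritten as '(not P) or Q'
print(valid(["(not Fa) or Gb", "Fa"], "Gb", ["Fa", "Gb"]))   # True
# disjunctive syllogism: ['Fa v Gb', '¬Fa', 'Gb']
print(valid(["Fa or Gb", "not Fa"], "Gb", ["Fa", "Gb"]))     # True
# affirming the consequent is, by contrast, invalid
print(valid(["(not Fa) or Gb", "Gb"], "Fa", ["Fa", "Gb"]))   # False
```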
Further symbolic inference schemes are generated by applying the following transformations to each of these base schemes:
* *negation*: replace all occurrences of an atomic formula by its negation (for any number of such atomic sentences)
* *transposition*: transpose exactly one (generalized) conditional
* *dna*: simplify by applying duplex negatio affirmat
* *complex predicates*: replace all occurrences of a given atomic formula by a complex formula consisting in the conjunction or disjunction of two atomic formulas
* *de morgan*: apply de Morgan's rule once
These transformations are applied to the base schemes in the following order:
> **{base_schemes}** > negation_variants > transposition_variants > dna > **{transposition_variants}** > complex_predicates > negation_variants > dna > **{complex_predicates}** > de_morgan > dna > **{de_morgan}**
All transformations, except *dna*, are monotonic, i.e. simply add further schemes to the ones generated in the previous step. Results of bold steps are added to the list of valid inference schemes. Each inference scheme is stored with information about which transformations were used to create it. All in all, this gives us 5542 schemes.
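As an illustration of one transformation, *transposition* of a simple (non-generalized) conditional could be sketched as follows; the actual generator works on structured scheme objects, so this string-based version is only a toy:

```python
def negate(formula: str) -> str:
    """Negate an atomic formula, simplifying via duplex negatio affirmat."""
    return formula[1:] if formula.startswith("¬") else "¬" + formula

def transpose(conditional: str) -> str:
    """Transpose 'P -> Q' into '¬Q -> ¬P' (single top-level conditional only)."""
    antecedent, consequent = (part.strip() for part in conditional.split("->", 1))
    return f"{negate(consequent)} -> {negate(antecedent)}"

print(transpose("Fa -> Gb"))   # ¬Gb -> ¬Fa
print(transpose("¬Fa -> Gb"))  # ¬Gb -> Fa
```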
#### Step 2: Assembling complex ("multi-hop") argument schemes from symbolic inference schemes
The complex argument *scheme*, which consists in multiple inferences, is assembled recursively by adding inferences that support premises of previously added inferences, as described by the following pseudocode:
```
argument = []
intermediary_conclusion = []
inference = randomly choose from list of all schemes
add inference to argument
for i in range(number_of_sub_arguments - 1):
target = randomly choose a premise which is not an intermediary_conclusion
inference = randomly choose a scheme whose conclusion is identical with target
add inference to argument
add target to intermediary_conclusion
return argument
```
The complex arguments we create are hence trees, with a root scheme.
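The recursion can be made concrete with a toy scheme pool (the three schemes below are invented for illustration; the real pool contains 5542 schemes and matches conclusions structurally rather than by string equality):

```python
import random

# Toy pool: each scheme lists premise forms and one conclusion form.
SCHEMES = [
    {"id": "mp",      "premises": ["Fa -> Ga", "Fa"], "conclusion": "Ga"},
    {"id": "bicelim", "premises": ["Fa <-> Ga"],      "conclusion": "Fa -> Ga"},
    {"id": "adj",     "premises": ["Fa", "Ga"],       "conclusion": "Fa & Ga"},
]

def assemble(n_sub_arguments: int, rng: random.Random) -> list:
    argument = [rng.choice(SCHEMES)]   # root inference
    intermediary = []
    for _ in range(n_sub_arguments - 1):
        # premises not yet supported by a sub-argument ...
        open_premises = [p for s in argument for p in s["premises"] if p not in intermediary]
        # ... restricted to those some scheme can conclude
        targets = [p for p in open_premises if any(s["conclusion"] == p for s in SCHEMES)]
        if not targets:
            break
        target = rng.choice(targets)
        argument.append(rng.choice([s for s in SCHEMES if s["conclusion"] == target]))
        intermediary.append(target)
    return argument

arg = assemble(2, random.Random(0))
print([s["id"] for s in arg])
```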
Let's walk through this algorithm by means of an illustrative example and construct a symbolic argument scheme with two sub-arguments. First, we randomly choose some inference scheme (random sampling is controlled by weights that compensate for the fact that the list of schemes mainly contains, for combinatorial reasons, complex inferences), say:
```json
{
"id": "mp",
"base_scheme_group": "modus ponens",
"scheme_variant": ["complex_variant"],
"scheme": [
["${A}${a} -> (${B}${a} & ${C}${a})",
{"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}],
["${A}${a}", {"A": "${F}", "a": "${a}"}],
["${A}${a} & ${B}${a}", {"A": "${G}", "B": "${H}", "a": "${a}"}]
],
"predicate-placeholders": ["F", "G", "H"],
"entity-placeholders": ["a"]
}
```
Now, the target premise (= intermediary conclusion) of the next subargument is chosen, say: premise 1 of the already added root scheme. We filter the list of schemes for schemes whose conclusion structurally matches the target, i.e. has the form `${A}${a} -> (${B}${a} & ${C}${a})`. From this filtered list of suitable schemes, we randomly choose, for example
```json
{
"id": "bicelim",
"base_scheme_group": "biconditional elimination",
"scheme_variant": ["complex_variant"],
"scheme": [
["${A}${a} <-> (${B}${a} & ${C}${a})",
{"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}],
["${A}${a} -> (${B}${a} & ${C}${a})",
{"A": "${F}", "B": "${G}", "C": "${H}", "a": "${a}"}]
],
"predicate-placeholders": ["F", "G", "H"],
"entity-placeholders": ["a"]
}
```
So, we have generated this 2-step symbolic argument scheme with two premises, one intermediary and one final conclusion:
```
(1) Fa <-> Ga & Ha
--
with biconditional elimination (complex variant) from 1
--
(2) Fa -> Ga & Ha
(3) Fa
--
with modus ponens (complex variant) from 2,3
--
(4) Ga & Ha
```
General properties of the argument are now determined and can be stored in the dataset (its `domain` is randomly chosen):
```json
"steps":2, // number of inference steps
"n_premises":2,
"base_scheme_groups":[
"biconditional elimination",
"modus ponens"
],
"scheme_variants":[
"complex variant"
],
"domain_id":"consumers_personalcare",
"domain_type":"persons"
```
#### Step 3: Creation of (precise and informal) natural-language argument schemes
In step 3, the *symbolic and formal* complex argument scheme is transformed into a *natural language* argument scheme by replacing symbolic formulas (e.g., `${A}${a} & ${B}${a}`) with suitable natural language sentence schemes (such as `${a} is a ${A}, and ${a} is a ${B}` or `${a} is a ${A} and a ${B}`). Natural language sentence schemes which translate symbolic formulas are classified according to whether they are precise, informal, or imprecise.
For each symbolic formula, there are many (partly automatically, partly manually generated) natural-language sentence schemes which render the formula in more or less precise ways. Each of these natural-language "translations" of a symbolic formula is labeled according to whether it presents the logical form in a "precise", "informal", or "imprecise" way, e.g.:
|type|form|
|-|-|
|symbolic|`(x): ${A}x -> ${B}x`|
|precise|`If someone is a ${A}, then they are a ${B}.`|
|informal|`Every ${A} is a ${B}.`|
|imprecise|`${A} might be a ${B}.`|
The labels "precise", "informal", "imprecise" are used to control the generation of two natural-language versions of the argument scheme, a **precise** one (for creating the argdown snippet) and an **informal** one (for creating the source text). Moreover, the natural-language "translations" are also chosen in view of the domain (see below) of the to-be-generated argument, specifically in view of whether it is quantified over persons ("everyone", "nobody") or objects ("something, nothing").
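Selecting a rendition by label can be sketched with a toy translation table (the entries below are illustrative, not the resource actually used):

```python
import random

# Illustrative translation table: symbolic form -> labeled sentence schemes.
TRANSLATIONS = {
    "(x): ${A}x -> ${B}x": {
        "precise":   ["If someone is a ${A}, then they are a ${B}."],
        "informal":  ["Every ${A} is a ${B}."],
        "imprecise": ["${A} might be a ${B}."],
    },
}

def render(form: str, label: str, rng: random.Random) -> str:
    """Pick one natural-language scheme with the requested precision label."""
    return rng.choice(TRANSLATIONS[form][label])

print(render("(x): ${A}x -> ${B}x", "informal", random.Random(0)))  # Every ${A} is a ${B}.
```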
So, as a **precise** rendition of our symbolic argument scheme, we may obtain:
```
(1) If, and only if, a is a F, then a is a G and a is a H.
--
with biconditional elimination (complex variant) from 1
--
(2) If a is a F, then a is a G and a is a H.
(3) a is a F.
--
with modus ponens (complex variant) from 3,2
--
(4) a is a G and a is a H.
```
Likewise, an **informal** rendition may be:
```
(1) a is a F if a is both a G and a H -- and vice versa.
--
with biconditional elimination (complex variant) from 1
--
(2) a is a G and a H, provided a is a F.
(3) a is a F.
--
with modus ponens (complex variant) from 3,2
--
(4) a is both a G and a H.
```
#### Step 4: Substitution of placeholders with domain-specific predicates and names
Every argument falls within a domain. A domain provides
* a list of `subject names` (e.g., Peter, Sarah)
* a list of `object names` (e.g., New York, Lille)
* a list of `binary predicates` (e.g., [subject is an] admirer of [object])
These domains are manually created.
Replacements for the placeholders are sampled from the corresponding domain. Substitutes for entity placeholders (`a`, `b` etc.) are simply chosen from the list of `subject names`. Substitutes for predicate placeholders (`F`, `G` etc.) are constructed by combining `binary predicates` with `object names`, which yields unary predicates of the form "___ stands in some relation to some object". This combinatorial construction of unary predicates drastically increases the number of replacements available and hence the variety of generated arguments.
Assuming that we sample our argument from the domain `consumers personal care`, we may choose and construct the following substitutes for placeholders in our argument scheme:
* `F`: regular consumer of Kiss My Face soap
* `G`: regular consumer of Nag Champa soap
* `H`: occasional purchaser of Shield soap
* `a`: Orlando
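The combinatorial construction described above can be sketched in a few lines of Python. This is an illustrative reconstruction only: the names, predicates, and sampling logic below stand in for the actual domain files, which are not reproduced in this card.

```python
import itertools
import random

# Illustrative domain lists (stand-ins for the manually created domain files)
subject_names = ["Orlando", "Sarah", "Peter"]
object_names = ["Kiss My Face soap", "Nag Champa soap", "Shield soap"]
binary_predicates = ["regular consumer of", "occasional purchaser of"]

# every (binary predicate, object name) pair yields one unary predicate
unary_predicates = [
    f"{pred} {obj}"
    for pred, obj in itertools.product(binary_predicates, object_names)
]

# sample substitutes for the entity and predicate placeholders
rng = random.Random(0)
substitutions = {"a": rng.choice(subject_names)}
substitutions.update(zip(["F", "G", "H"], rng.sample(unary_predicates, 3)))
```

With `n` binary predicates and `m` object names this yields `n * m` unary predicates, which is why the combinatorial construction increases the variety of generated arguments so drastically.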
#### Step 5: Creation of the argdown-snippet
From the **precise rendition** of the natural language argument scheme ([step 3](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes)) and the replacements for its placeholders ([step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)), we construct the `argdown-snippet` by simple substitution and formatting the complex argument in accordance with [argdown syntax](https://argdown.org).
This yields, for our example from above:
```argdown
(1) If, and only if, Orlando is a regular consumer of Kiss My Face soap,
then Orlando is a regular consumer of Nag Champa soap and Orlando is
a occasional purchaser of Shield soap.
--
with biconditional elimination (complex variant) from 1
--
(2) If Orlando is a regular consumer of Kiss My Face soap, then Orlando
is a regular consumer of Nag Champa soap and Orlando is a occasional
purchaser of Shield soap.
(3) Orlando is a regular consumer of Kiss My Face soap.
--
with modus ponens (complex variant) from 3,2
--
(4) Orlando is a regular consumer of Nag Champa soap and Orlando is a
occasional purchaser of Shield soap.
```
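Mechanically, this substitution step can be sketched with Python's `string.Template`, whose `${...}` syntax matches the placeholder notation used above (only the second premise scheme of the running example is shown):

```python
from string import Template

# Fill the precise natural-language scheme with the sampled substitutes
scheme = Template("If ${a} is a ${F}, then ${a} is a ${G} and ${a} is a ${H}.")
subs = {
    "a": "Orlando",
    "F": "regular consumer of Kiss My Face soap",
    "G": "regular consumer of Nag Champa soap",
    "H": "occasional purchaser of Shield soap",
}
premise = scheme.substitute(subs)
```

This reproduces premise (2) of the argdown snippet, including the generator's grammatical quirks ("a occasional purchaser"), which arise because the article in the scheme is fixed regardless of the substitute.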
That's the `argdown_snippet`. By construction of such a synthetic argument (from formal schemes, see [step 2](#step-2-assembling-complex-multi-hop-argument-schemes-from-symbolic-inference-schemes)), we already know its conclusions and their formalization (the value of the field `explicit` will be determined later).
```json
"conclusion":[
{
"ref_reco":4,
"text":"Orlando is a regular consumer of Nag Champa
soap and Orlando is a occasional purchaser of
Shield soap.",
"explicit": TBD
}
],
"conclusion_formalized":[
{
"ref_reco":4,
"form":"(${F2}${a1} & ${F3}${a1})"
}
],
"intermediary_conclusions":[
{
"ref_reco":2,
"text":"If Orlando is a regular consumer of Kiss My
Face soap, then Orlando is a regular consumer of
Nag Champa soap and Orlando is a occasional
purchaser of Shield soap.",
"explicit": TBD
}
],
"intermediary_conclusions_formalized":[
{
"ref_reco":2,
      "form":"${F1}${a1} -> (${F2}${a1} & ${F3}${a1})"
}
],
```
... and the corresponding keys (see [step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)):
```json
"plcd_subs":{
"a1":"Orlando",
"F1":"regular consumer of Kiss My Face soap",
"F2":"regular consumer of Nag Champa soap",
"F3":"occasional purchaser of Shield soap"
}
```
#### Step 6: Paraphrasing
From the **informal rendition** of the natural language argument scheme ([step 3](#step-3-creation-of-precise-and-informal-natural-language-argument-schemes)) and the replacements for its placeholders ([step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)), we construct an informal argument (argument tree) by substitution.
The statements (premises, conclusions) of the informal argument are individually paraphrased in two steps:
1. rule-based and in a domain-specific way,
2. automatically by means of a specifically fine-tuned T5 model.
Each domain (see [step 4](#step-4-substitution-of-placeholders-with-domain-specific-predicates-and-names)) provides rules for substituting noun constructs ("is a supporter of X", "is a product made of X") with verb constructs ("supports X", "contains X"). These rules are applied whenever possible.
Next, each sentence is -- with a probability specified by parameter `lm_paraphrasing` -- replaced with an automatically generated paraphrase, using a [T5 model fine-tuned on the Google PAWS dataset](https://huggingface.co/Vamsi/T5_Paraphrase_Paws) and filtering for paraphrases with acceptable _cola_ and sufficiently high _STSB_ value (both as predicted by T5).
| |AAAC01|AAAC02|
|-|-|-|
|`lm_paraphrasing`|0.2|0.0|
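The control flow of this probabilistic replacement can be sketched as follows. The paraphrase function below is a stand-in for the fine-tuned T5 model plus its cola/STSB filtering, which are not reproduced here:

```python
import random

def maybe_paraphrase(sentence, paraphrase_fn, p):
    """With probability p, replace the sentence by an LM paraphrase.

    paraphrase_fn stands in for the fine-tuned T5 model and the
    cola/STSB filtering described above.
    """
    return paraphrase_fn(sentence) if random.random() < p else sentence

random.seed(42)
sentences = ["Orlando is a regular consumer of Kiss My Face soap."] * 10
# str.upper is a dummy "paraphraser" so the effect is visible
out = [maybe_paraphrase(s, str.upper, p=0.2) for s in sentences]
```

With `lm_paraphrasing = 0.2` (AAAC01), roughly one in five statements is replaced by a paraphrase; AAAC02 skips this step entirely.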
#### Step 7: Construction of a storyline for the argument source text
The storyline determines in which order the premises, intermediary conclusions and final conclusions are to be presented in the text paragraph to-be-constructed (`argument-source`). The storyline is constructed from the paraphrased informal complex argument (see [step 6](#step-6-paraphrasing)).
Before determining the order of presentation (storyline), the informal argument tree is pre-processed to account for:
* implicit premises,
* implicit intermediary conclusions, and
* implicit final conclusion,
which is documented in the dataset record as
```json
"presentation_parameters":{
"resolve_steps":[1],
"implicit_conclusion":false,
"implicit_premise":true,
"...":"..."
}
```
In order to make an intermediary conclusion *C* implicit, the inference to *C* is "resolved" by re-assigning all premises *from* which *C* is directly inferred *to* the inference to the (final or intermediary) conclusion which *C* supports.
Original tree:
```
P1 ... Pn
—————————
C Q1 ... Qn
—————————————
C'
```
Tree with resolved inference and implicit intermediary conclusion:
```
P1 ... Pn Q1 ... Qn
———————————————————
C'
```
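As a minimal sketch of this resolution step, with argument trees represented as dicts mapping each conclusion to the statements it is directly inferred from (the representation is illustrative, not the actual implementation):

```python
def resolve(tree, c):
    """Make intermediary conclusion c implicit by lifting the premises
    supporting c up to the inference step that c itself supports."""
    resolved = {}
    for conclusion, premises in tree.items():
        if conclusion == c:
            continue  # c is no longer stated explicitly
        lifted = []
        for p in premises:
            # wherever c was used as a premise, substitute c's own premises
            lifted.extend(tree[c] if p == c else [p])
        resolved[conclusion] = lifted
    return resolved

tree = {"C": ["P1", "P2"], "C'": ["C", "Q1"]}
assert resolve(tree, "C") == {"C'": ["P1", "P2", "Q1"]}
```

Applied to the running example, resolving the first inference step turns `{"(2)": ["(1)"], "(4)": ["(2)", "(3)"]}` into `{"(4)": ["(1)", "(3)"]}`; additionally dropping premise (1) as implicit gives the pre-processed tree in which (4) is inferred from (3) alone.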
The original argument tree in our example reads:
```
(1)
———
(2) (3)
———————
(4)
```
This might be pre-processed (by resolving the first inference step and dropping the first premise) to:
```
(3)
———
(4)
```
Given such a pre-processed argument tree, a storyline, which determines the order of presentation, can be constructed by specifying the direction of presentation and a starting point. The **direction** is either
* forward (premise AND ... AND premise THEREFORE conclusion)
* backward (conclusion SINCE premise AND ... AND premise)
Any conclusion in the pre-processed argument tree may serve as starting point. The storyline is now constructed recursively, as illustrated in Figure 1. Integer labels of the nodes represent the order of presentation, i.e. the storyline. (Note that the starting point is not necessarily the statement which is presented first according to the storyline.)

So as to introduce redundancy, the storyline may be post-processed by repeating a premise that has been stated previously. The likelihood that a single premise is repeated is controlled by the presentation parameters:
```json
"presentation_parameters":{
"redundancy_frequency":0.1,
}
```
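A hedged sketch of this post-processing step, with the parameter name taken from the record above (the exact repetition logic is a plausible reconstruction, not the actual implementation):

```python
import random

def add_redundancy(storyline, premises, freq, rng):
    """After each statement, repeat an already-stated premise with
    probability `freq` (cf. `redundancy_frequency`)."""
    out = []
    for stmt in storyline:
        out.append(stmt)
        stated = [p for p in premises if p in out]  # premises stated so far
        if stated and rng.random() < freq:
            out.append(rng.choice(stated))
    return out

# freq=1.0 only to make the effect visible; the datasets use 0.1
rng = random.Random(0)
expanded = add_redundancy(["P1", "P2", "C"], ["P1", "P2"], 1.0, rng)
```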
Moreover, **distractors**, i.e. arbitrary statements sampled from the argument's very domain, may be inserted in the storyline.
#### Step 8: Assembling the argument source text
The `argument-source` is constructed by concatenating the statements of the informal argument ([step 6](#step-6-paraphrasing)) according to the order of the storyline ([step 7](#step-7-construction-of-a-storyline-for-the-argument-source-text)). In principle, each statement is preceded by a conjunction. There are four types of conjunction:
* THEREFORE: left-to-right inference
* SINCE: right-to-left inference
* AND: joins premises with similar inferential role
* MOREOVER: catch all conjunction
Each statement is assigned a specific conjunction type by the storyline.
For every conjunction type, we provide multiple natural-language terms which may figure as conjunctions when concatenating the statements, e.g. "So, necessarily,", "So", "Thus,", "It follows that", "Therefore,", "Consequently,", "Hence,", "In consequence,", "All this entails that", "From this follows that", "We may conclude that" for THEREFORE. The parameter
```json
"presentation_parameters":{
"drop_conj_frequency":0.1,
"...":"..."
}
```
determines the probability that a conjunction is omitted and a statement is concatenated without prepending a conjunction.
With the parameters given above we obtain the following `argument_source` for our example:
> Orlando is a regular consumer of Nag Champa soap and Orlando is a occasional purchaser of Shield soap, since Orlando is a regular consumer of Kiss My Face soap.
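The concatenation step can be sketched as follows (term lists abbreviated from those quoted above; the assignment of conjunction types to statements, and the conjecture's trailing comma, are taken as given from the example):

```python
import random

CONJUNCTIONS = {
    "THEREFORE": ["Therefore,", "Thus,", "Hence,"],
    "SINCE": ["since"],
    "AND": ["And"],
    "MOREOVER": ["Moreover,"],
}

def assemble(statements, drop_freq, rng):
    """statements: (text, conjunction_type) pairs in storyline order.
    A conjunction is dropped with probability `drop_freq`."""
    parts = []
    for i, (text, conj_type) in enumerate(statements):
        if i == 0 or rng.random() < drop_freq:
            parts.append(text)  # first statement gets no conjunction
        else:
            parts.append(f"{rng.choice(CONJUNCTIONS[conj_type])} {text}")
    return " ".join(parts)

rng = random.Random(0)
source = assemble(
    [("Orlando is a regular consumer of Nag Champa soap and Orlando is "
      "a occasional purchaser of Shield soap,", "MOREOVER"),
     ("Orlando is a regular consumer of Kiss My Face soap.", "SINCE")],
    drop_freq=0.0, rng=rng)
```

With `drop_freq=0.0` this reproduces the example `argument_source` above, a backward presentation (conclusion SINCE premise).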
#### Step 9: Linking informal presentation and formal reconstruction
We can identify all statements _in the informal presentation_ (`argument_source`), categorize them according to their argumentative function GIVEN the logical reconstruction, and link them to the corresponding statements in the `argdown_snippet`. We distinguish `reason_statements` (aka REASONS, which correspond to premises in the reconstruction) and `conclusion_statements` (aka CONJECTURES, which correspond to the conclusion and intermediary conclusions in the reconstruction):
```json
"reason_statements":[ // aka reasons
{
"text":"Orlando is a regular consumer of Kiss My Face soap",
"starts_at":109,
"ref_reco":3
}
],
"conclusion_statements":[ // aka conjectures
{
"text":"Orlando is a regular consumer of Nag Champa soap and
Orlando is a occasional purchaser of Shield soap",
"starts_at":0,
"ref_reco":4
}
]
```
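Since REASONS and CONJECTURES occur verbatim in the source text, the `starts_at` character offsets can be recovered with a simple substring search. A sketch on the running example (field names follow the record above; the actual implementation may differ):

```python
source = ("Orlando is a regular consumer of Nag Champa soap and Orlando is "
          "a occasional purchaser of Shield soap, since Orlando is a "
          "regular consumer of Kiss My Face soap.")

def locate(statement, text, ref_reco):
    """Return a reason/conjecture record with its character offset."""
    start = text.find(statement)
    assert start != -1, "statement must occur verbatim in the source"
    return {"text": statement, "starts_at": start, "ref_reco": ref_reco}

reason = locate("Orlando is a regular consumer of Kiss My Face soap",
                source, ref_reco=3)
conjecture = locate("Orlando is a regular consumer of Nag Champa soap and "
                    "Orlando is a occasional purchaser of Shield soap",
                    source, ref_reco=4)
```

This recovers exactly the offsets shown in the record: the conjecture starts at character 0 and the reason at character 109.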
Moreover, we are now able to classify all premises in the formal reconstruction (`argdown_snippet`) according to whether they are implicit or explicit given the informal presentation:
```json
"premises":[
{
"ref_reco":1,
"text":"If, and only if, Orlando is a regular consumer of Kiss
My Face soap, then Orlando is a regular consumer of Nag
Champa soap and Orlando is a occasional purchaser of
Shield soap.",
      "explicit":false
},
{
"ref_reco":3,
"text":"Orlando is a regular consumer of Kiss My Face soap. ",
      "explicit":true
}
],
"premises_formalized":[
{
"ref_reco":1,
"form":"${F1}${a1} <-> (${F2}${a1} & ${F3}${a1})"
},
{
"ref_reco":3,
"form":"${F1}${a1}"
}
]
```
#### Initial Data Collection and Normalization
N.A.
#### Who are the source language producers?
N.A.
### Annotations
#### Annotation process
N.A.
#### Who are the annotators?
N.A.
### Personal and Sensitive Information
N.A.
## Considerations for Using the Data
### Social Impact of Dataset
None
### Discussion of Biases
None
### Other Known Limitations
See [Betz and Richardson 2021](https://arxiv.org/abs/2110.01509).
## Additional Information
### Dataset Curators
Gregor Betz, Kyle Richardson
### Licensing Information
Creative Commons cc-by-sa-4.0
### Citation Information
```
@misc{betz2021deepa2,
title={DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models},
author={Gregor Betz and Kyle Richardson},
year={2021},
eprint={2110.01509},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
<!--Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.-->
|
DrishtiSharma/Anime-Face-Dataset | 2022-04-11T00:04:37.000Z | [
"region:us"
] | DrishtiSharma | null | null | null | 3 | 31 | Entry not found |
SpeedOfMagic/ontonotes_english | 2022-07-01T16:06:06.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:unknown",
"region:us"
] | SpeedOfMagic | null | null | null | 2 | 31 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: ontonotes_english
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for ontonotes_english
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
- **Repository:**
- **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
- **Leaderboard:** [Papers With Code](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- **Point of Contact:**
### Dataset Summary
This is a preprocessed version of what I assume is OntoNotes v5.0.
Instead of having sentences stored in files, files are unpacked and sentences are the rows now. Also, fields were renamed in order to match [conll2003](https://huggingface.co/datasets/conll2003).
The data comes from a private repository, which in turn got it from another public repository, the location of which is unknown :)
Since the data in all repositories carried no license (the creator of the private repository told me so), there should be no licensing issues. But bear in mind, I give no guarantee that this is the real OntoNotes, and it may differ from the original as a result.
### Supported Tasks and Leaderboards
- [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
- [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
### Languages
English
## Dataset Structure
### Data Instances
```
{
'tokens': ['Well', ',', 'the', 'Hundred', 'Regiments', 'Offensive', 'was', 'divided', 'into', 'three', 'phases', '.'],
'ner_tags': [0, 0, 29, 30, 30, 30, 0, 0, 0, 27, 0, 0]
}
```
### Data Fields
- **`tokens`** (*`List[str]`*) : **`words`** in original dataset
- **`ner_tags`** (*`List[ClassLabel]`*) : **`named_entities`** in original dataset. The BIO tags for named entities in the sentence.
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
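For instance, the integer tags of the example instance above can be decoded back into BIO label strings using the tag list of this card (copied verbatim here, so no `datasets` installation is needed for the sketch):

```python
# Tag set of this dataset, as listed in the card above
NAMES = ["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC",
         "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT",
         "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT",
         "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY",
         "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT",
         "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW",
         "B-LANGUAGE", "I-LANGUAGE"]

instance = {
    "tokens": ["Well", ",", "the", "Hundred", "Regiments", "Offensive",
               "was", "divided", "into", "three", "phases", "."],
    "ner_tags": [0, 0, 29, 30, 30, 30, 0, 0, 0, 27, 0, 0],
}
labels = [NAMES[t] for t in instance["ner_tags"]]
# "the Hundred Regiments Offensive" is tagged as an EVENT span,
# "three" as a CARDINAL
```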
### Data Splits
_train_, _validation_, and _test_
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
No license
### Citation Information
```
@inproceedings{pradhan-etal-2013-towards,
title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
author = {Pradhan, Sameer and
Moschitti, Alessandro and
Xue, Nianwen and
Ng, Hwee Tou and
Bj{\"o}rkelund, Anders and
Uryupina, Olga and
Zhang, Yuchen and
Zhong, Zhi},
booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-3516",
pages = "143--152",
}
```
### Contributions
Thanks to the author of the private repository who uploaded this dataset. |
MayaGalvez/multilingual_xglue_pos | 2022-08-04T15:03:45.000Z | [
"region:us"
] | MayaGalvez | null | null | null | 0 | 31 | Entry not found |
arbml/SANAD | 2022-10-30T23:09:16.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 31 | Entry not found |
jpwahle/machine-paraphrase-dataset | 2022-11-18T16:54:17.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"spinbot",
"spinn... | jpwahle | null | null | null | 1 | 31 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Machine Paraphrase Dataset (SpinnerChief/SpinBot)
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- spinbot
- spinnerchief
- plagiarism
- paraphrase
- academic integrity
- arxiv
- wikipedia
- theses
task_categories:
- text-classification
- text-generation
task_ids: []
paperswithcode_id: identifying-machine-paraphrased-plagiarism
dataset_info:
- split: train
download_size: 393224
dataset_size: 393224
- split: test
download_size: 655376
dataset_size: 655376
---
# Dataset Card for Machine Paraphrase Dataset (MPC)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/jpwahle/iconf22-paraphrase
- **Paper:** https://link.springer.com/chapter/10.1007/978-3-030-96957-8_34
- **Total size:** 533 MB
- **Train size:** 340 MB
- **Test size:** 193 MB
### Dataset Summary
The Machine Paraphrase Corpus (MPC) consists of ~200k examples of original texts and of paraphrases generated using two online paraphrasing tools.
It uses two paraphrasing tools (SpinnerChief, SpinBot) on three source texts (Wikipedia, arXiv, student theses).
The examples are **not** aligned, i.e., we sample different paragraphs for originals and paraphrased versions.
### How to use it
You can load the dataset using the `load_dataset` function:
```python
from datasets import load_dataset
ds = load_dataset("jpwahle/machine-paraphrase-dataset")
print(ds["train"][0])
#OUTPUT:
{
'text': 'The commemoration was revealed on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in participation At the divulging function Lord Fortescue gave a discourse in which he evaluated that 11600 people from Devon had been slaughtered while serving in the war He later expressed that somewhere in the range of 63700 8000 regulars 36700 volunteers and 19000 recruits had served in the military The names of the fallen were recorded on a move of respect of which three duplicates were made one for Exeter Cathedral one to be held by the district chamber and one which the Prince of Wales put in an empty in the base of the war dedication The rulers visit created impressive energy in the zone A large number of individuals lined the road to welcome his motorcade and shops on the High Street hung out pennants with inviting messages After the uncovering Edward went through ten days visiting the neighborhood ',
'label': 1,
'dataset': 'wikipedia',
'method': 'spinbot'
}
```
### Supported Tasks and Leaderboards
Paraphrase Identification
### Languages
English
## Dataset Structure
### Data Instances
```json
{
'text': 'The commemoration was revealed on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in participation At the divulging function Lord Fortescue gave a discourse in which he evaluated that 11600 people from Devon had been slaughtered while serving in the war He later expressed that somewhere in the range of 63700 8000 regulars 36700 volunteers and 19000 recruits had served in the military The names of the fallen were recorded on a move of respect of which three duplicates were made one for Exeter Cathedral one to be held by the district chamber and one which the Prince of Wales put in an empty in the base of the war dedication The rulers visit created impressive energy in the zone A large number of individuals lined the road to welcome his motorcade and shops on the High Street hung out pennants with inviting messages After the uncovering Edward went through ten days visiting the neighborhood ',
'label': 1,
'dataset': 'wikipedia',
'method': 'spinbot'
}
```
### Data Fields
| Feature | Description |
| --- | --- |
| `text` | The text of the paragraph. |
| `label` | Whether it is a paraphrase (1) or the original (0). |
| `dataset` | The source dataset (Wikipedia, arXiv, or theses). |
| `method` | The method used (SpinBot, SpinnerChief, original). |
### Data Splits
- train (Wikipedia x Spinbot)
- test ([Wikipedia, arXiv, theses] x [SpinBot, SpinnerChief])
## Dataset Creation
### Curation Rationale
Providing a resource for testing against machine-paraphrased plagiarism.
### Source Data
#### Initial Data Collection and Normalization
- Paragraphs from `featured articles` from the English Wikipedia dump
- Paragraphs from full-text pdfs of arXMLiv
- Paragraphs from full-text pdfs of Czech student theses (bachelor's, master's, PhD).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Philip Wahle](https://jpwahle.com/)
### Licensing Information
The Machine Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.
### Citation Information
```bib
@inproceedings{10.1007/978-3-030-96957-8_34,
title = {Identifying Machine-Paraphrased Plagiarism},
author = {Wahle, Jan Philip and Ruas, Terry and Folt{\'y}nek, Tom{\'a}{\v{s}} and Meuschke, Norman and Gipp, Bela},
year = 2022,
booktitle = {Information for a Better World: Shaping the Global Future},
publisher = {Springer International Publishing},
address = {Cham},
pages = {393--413},
isbn = {978-3-030-96957-8},
editor = {Smits, Malte},
abstract = {Employing paraphrasing tools to conceal plagiarized text is a severe threat to academic integrity. To enable the detection of machine-paraphrased text, we evaluate the effectiveness of five pre-trained word embedding models combined with machine learning classifiers and state-of-the-art neural language models. We analyze preprints of research papers, graduation theses, and Wikipedia articles, which we paraphrased using different configurations of the tools SpinBot and SpinnerChief. The best performing technique, Longformer, achieved an average F1 score of 80.99{\%} (F1 = 99.68{\%} for SpinBot and F1 = 71.64{\%} for SpinnerChief cases), while human evaluators achieved F1 = 78.4{\%} for SpinBot and F1 = 65.6{\%} for SpinnerChief cases. We show that the automated classification alleviates shortcomings of widely-used text-matching systems, such as Turnitin and PlagScan.}
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset. |
bigbio/bionlp_st_2013_gro | 2022-12-22T15:44:01.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | GRO Task: Populating the Gene Regulation Ontology with events and
relations. A data set from the bio NLP shared tasks competition from 2013 | @inproceedings{kim-etal-2013-gro,
title = "{GRO} Task: Populating the Gene Regulation Ontology with events and relations",
author = "Kim, Jung-jae and
Han, Xu and
Lee, Vivian and
Rebholz-Schuhmann, Dietrich",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2007",
pages = "50--57",
} | null | 0 | 31 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2013 GRO
homepage: https://github.com/openbiocorpora/bionlp-st-2013-gro
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for BioNLP 2013 GRO
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-gro
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE
GRO Task: Populating the Gene Regulation Ontology with events and
relations. A data set from the bio NLP shared tasks competition from 2013
## Citation Information
```
@inproceedings{kim-etal-2013-gro,
title = "{GRO} Task: Populating the Gene Regulation Ontology with events and relations",
author = "Kim, Jung-jae and
Han, Xu and
Lee, Vivian and
Rebholz-Schuhmann, Dietrich",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2007",
pages = "50--57",
}
```
|
bigbio/ctebmsp | 2022-12-22T15:44:30.000Z | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-nc-4.0",
"region:us"
] | bigbio | The "abstracts" subset of the Clinical Trials for Evidence-Based Medicine in Spanish
(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,
published in journals with a Creative Commons license. Most were downloaded from
the SciELO repository and free abstracts in PubMed.
Abstracts were retrieved with the query:
Clinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].
(Information collected from 10.1186/s12911-021-01395-z) | @article{CampillosLlanos2021,
author = {Leonardo Campillos-Llanos and
Ana Valverde-Mateos and
Adri{\'{a}}n Capllonch-Carri{\'{o}}n and
Antonio Moreno-Sandoval},
title = {A clinical trials corpus annotated with {UMLS}
entities to enhance the access to evidence-based medicine},
journal = {{BMC} Medical Informatics and Decision Making},
volume = {21},
year = {2021},
url = {https://doi.org/10.1186/s12911-021-01395-z},
doi = {10.1186/s12911-021-01395-z},
biburl = {},
bibsource = {}
} | null | 0 | 31 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-nc-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_4p0
pretty_name: CT-EBM-SP
homepage: http://www.lllf.uam.es/ESP/nlpmedterm_en.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for CT-EBM-SP
## Dataset Description
- **Homepage:** http://www.lllf.uam.es/ESP/nlpmedterm_en.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
### Ctebmsp Abstracts
The "abstracts" subset of the Clinical Trials for Evidence-Based Medicine in Spanish
(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,
published in journals with a Creative Commons license. Most were downloaded from
the SciELO repository and free abstracts in PubMed.
Abstracts were retrieved with the query:
Clinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].
(Information collected from 10.1186/s12911-021-01395-z)
### Ctebmsp Eudract
The "abstracts" subset of the Clinical Trials for Evidence-Based Medicine in Spanish
(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,
published in journals with a Creative Commons license. Most were downloaded from
the SciELO repository and free abstracts in PubMed.
Abstracts were retrieved with the query:
Clinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].
(Information collected from 10.1186/s12911-021-01395-z)
## Citation Information
```
@article{CampillosLlanos2021,
author = {Leonardo Campillos-Llanos and
Ana Valverde-Mateos and
               Adri{\'{a}}n Capllonch-Carri{\'{o}}n and
Antonio Moreno-Sandoval},
title = {A clinical trials corpus annotated with {UMLS}
entities to enhance the access to evidence-based medicine},
journal = {{BMC} Medical Informatics and Decision Making},
volume = {21},
year = {2021},
url = {https://doi.org/10.1186/s12911-021-01395-z},
doi = {10.1186/s12911-021-01395-z},
biburl = {},
bibsource = {}
}
```
|