id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
mrtoy/mobile-ui-design | 2023-07-19T09:09:22.000Z | [
"task_categories:object-detection",
  "size_categories:1K<n<10K",
"license:apache-2.0",
"ui",
"design",
"detection",
"region:us"
] | mrtoy | null | null | 15 | 112 | 2023-07-13T11:12:51 | ---
license: apache-2.0
dataset_info:
features:
- name: width
dtype: int64
- name: height
dtype: int64
- name: image
dtype: image
- name: objects
struct:
- name: bbox
sequence:
sequence: float64
- name: category
sequence: string
- name: color
list:
- name: alpha
dtype: float64
- name: blue
dtype: float64
- name: green
dtype: float64
- name: red
dtype: float64
- name: radius
sequence: float64
- name: text
sequence: string
splits:
- name: train
num_bytes: 1253458059.322
num_examples: 7846
download_size: 1160884066
dataset_size: 1253458059.322
task_categories:
- object-detection
tags:
- ui
- design
- detection
size_categories:
- 1K<n<10K
---
# Dataset: Mobile UI Design Detection
## Introduction
This dataset is designed for object detection tasks, with a focus on detecting elements in mobile UI designs. The targeted objects include text, rectangles, images, and groups. The dataset contains images and object detection boxes, including class labels and location information.
## Dataset Content
Load the dataset and take a look at an example:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("mrtoy/mobile-ui-design")
>>> example = ds["train"][0]
>>> example
{'width': 375,
'height': 667,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=375x667>,
'objects': {'bbox': [[0.0, 0.0, 375.0, 667.0],
[0.0, 0.0, 375.0, 667.0],
[0.0, 0.0, 375.0, 20.0],
...
],
'category': ['text',
'rectangle',
'rectangle',
...]}}
```
The dataset has the following fields:
- image: PIL.Image.Image object containing the image.
- height: The image height.
- width: The image width.
- objects: A dictionary containing bounding box metadata for the objects in the image:
- bbox: The object’s bounding box, in (xmin, ymin, width, height) format.
- category: The object’s category; possible values include `rectangle`, `text`, `group`, and `image`.
- color: The object’s color (text color or rectangle fill color), or None.
- radius: The object’s corner radius (for rectangles), or None.
- text: The text content, or None.
You can visualize the bounding boxes on the image using torchvision's drawing utilities.
```python
import torch
from torchvision.ops import box_convert
from torchvision.utils import draw_bounding_boxes
from torchvision.transforms.functional import pil_to_tensor, to_pil_image
item = ds["train"][0]
boxes_xywh = torch.tensor(item['objects']['bbox'])
boxes_xyxy = box_convert(boxes_xywh, 'xywh', 'xyxy')
to_pil_image(
draw_bounding_boxes(
pil_to_tensor(item['image']),
boxes_xyxy,
labels=item['objects']['category'],
)
)
```



## Applications
This dataset can be used for various applications, such as:
- Training and evaluating object detection models for mobile UI designs.
- Identifying design patterns and trends to aid UI designers and developers in creating high-quality mobile app UIs.
- Enhancing the automation process in generating UI design templates.
- Improving image recognition and analysis in the field of mobile UI design.
| 3,290 | [...] |
paniniDot/sci_lay | 2023-09-05T16:39:49.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"medical",
"region:us"
] | paniniDot | SCILAY comprises 43,790 instances, each representing a scientific article in the biomedical domain.
Each instance in the dataset includes the following components:
- plain_text: Contains a plain language summary of the scientific article, written in simple and accessible language and intended to be understandable by a wide audience.
- technical_text: Contains the abstract of the scientific article, giving a detailed and technical description of the research conducted.
- full_text: Contains the complete text of the scientific article.
In addition to the textual content, each instance is associated with the following metadata:
- Keywords: Keywords that capture the main topics and themes addressed in the article.
- Journal: The journal in which the article is published, providing context about the source of the research.
- DOI (Digital Object Identifier): A unique identifier for the article, facilitating easy referencing.
The main objective of the SCILAY dataset is to support the development and evaluation of text summarization models that can effectively simplify complex scientific language while retaining the essential information. | 0 | 112 | 2023-08-13T09:33:29 | ---
license: cc-by-4.0
task_categories:
- summarization
tags:
- medical
pretty_name: Sci Lay - Biomedical Articles Lay Summarization Dataset
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
dataset_info:
- config_name: all
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 35026
num_bytes: 1579515071
- name: validation
num_examples: 4380
num_bytes: 197196187
- name: test
num_examples: 4384
num_bytes: 198833964
- config_name: NC
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 5549
num_bytes: 286453072
- name: validation
num_examples: 694
num_bytes: 35652636
- name: test
num_examples: 694
num_bytes: 35869803
- config_name: A
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 3909
num_bytes: 128936951
- name: validation
num_examples: 489
num_bytes: 1303884
- name: test
num_examples: 489
num_bytes: 1303884
- config_name: PLGEN
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 3087
num_bytes: 9651536
- name: validation
num_examples: 386
num_bytes: 1195717
- name: test
num_examples: 386
num_bytes: 1204735
- config_name: PLPAT
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 2920
num_bytes: 9311936
- name: validation
num_examples: 365
num_bytes: 1161792
- name: test
num_examples: 365
num_bytes: 1148729
- config_name: PLCB
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 2589
num_bytes: 149165851
- name: validation
num_examples: 324
num_bytes: 1009541
- name: test
num_examples: 324
num_bytes: 1013732
- config_name: PLNTD
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 2289
num_bytes: 7958581
- name: validation
num_examples: 286
num_bytes: 990392
- name: test
num_examples: 287
num_bytes: 996549
- config_name: B
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 1617
num_bytes: 57956055
- name: validation
num_examples: 202
num_bytes: 547314
- name: test
num_examples: 203
num_bytes: 537459
- config_name: I
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 1181
num_bytes: 37682107
- name: validation
num_examples: 148
num_bytes: 393826
- name: test
num_examples: 148
num_bytes: 390039
- config_name: PLB
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 896
num_bytes: 54106804
- name: validation
num_examples: 112
num_bytes: 350955
- name: test
num_examples: 113
num_bytes: 352922
- config_name: CB
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 867
num_bytes: 43533134
- name: validation
num_examples: 108
num_bytes: 5664682
- name: test
num_examples: 109
num_bytes: 172812
- config_name: SD
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 725
num_bytes: 23671697
- name: validation
num_examples: 91
num_bytes: 3033467
- name: test
num_examples: 91
num_bytes: 2972947
- config_name: MBIO
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 607
num_bytes: 1602641
- name: validation
num_examples: 76
num_bytes: 203737
- name: test
num_examples: 76
num_bytes: 200707
- config_name: C
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 6782
num_bytes: 242721690
- name: validation
num_examples: 848
num_bytes: 30735056
- name: test
num_examples: 848
num_bytes: 31018214
- config_name: OTHER
features:
- name: doi
dtype: string
- name: pmcid
dtype: string
- name: title
dtype: string
- name: plain_text
dtype: string
- name: technical_text
dtype: string
- name: full_text
dtype: string
- name: journal
dtype: string
- name: topics
sequence: string
- name: keywords
sequence: string
splits:
- name: train
num_examples: 2008
num_bytes: 89866504
- name: validation
num_examples: 251
num_bytes: 11316433
- name: test
num_examples: 251
num_bytes: 11564599
config_names:
- all
- NC
- A
- PLGEN
- PLPAT
- PLCB
- PLNTD
- B
- I
- PLB
- CB
- SD
- MBIO
- C
- OTHER
---
# Dataset Card for Sci Lay
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sci Lay](https://github.com/paniniDot/summarization-model)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Mattia Panni](mailto:mattia.panni@studio.unibo.it)
### Dataset Summary
SCILAY comprises 43,790 instances, each representing a scientific article in the biomedical domain.
Each instance in the dataset includes the following components:
- plain_text: Contains a plain language summary of the scientific article, written in simple and accessible language and intended to be understandable by a wide audience.
- technical_text: Contains the abstract of the scientific article, giving a detailed and technical description of the research conducted.
- full_text: Contains the complete text of the scientific article.
In addition to the textual content, each instance is associated with the following metadata:
- Keywords: Keywords that capture the main topics and themes addressed in the article.
- Journal: The journal in which the article is published, providing context about the source of the research.
- DOI (Digital Object Identifier): A unique identifier for the article, facilitating easy referencing.
The main objective of the SCILAY dataset is to support the development and evaluation of text summarization models that can effectively simplify complex scientific language while retaining the essential information.
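As a minimal sketch of that use, each record can be mapped to an (input, target) pair for a sequence-to-sequence summarization model. The field names follow this card; the `summarize:` task prefix is an illustrative choice, not part of the dataset.

```python
def to_summarization_pair(example, source="technical_text", target="plain_text"):
    # Build one (input, target) training pair from a SCILAY record.
    # technical_text -> plain_text targets lay summarization; full_text
    # could be used as the source instead for long-input models.
    return {
        "input_text": "summarize: " + example[source],
        "target_text": example[target],
    }
```

With the `datasets` library this could be applied as `ds["train"].map(to_summarization_pair)`.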
Each article is published by a scientific journal. There are fifteen such journal classifications:
- NC: Nature Communications
- A: Animals : an Open Access Journal from MDPI
- PLGEN: PLoS Genetics
- PLPAT: PLoS Pathogens
- PLCB: PLoS Computational Biology
- PLNTD: PLoS Neglected Tropical Diseases
- B: Biology
- I: Insects
- PLB: PLoS Biology
- CB: Communications Biology
- SD: Scientific Data
- MBIO: mBio
- C: Cancers
- OTHER: additional journals that, taken individually, would not have contributed sufficient instances
The default configuration loads the 'all' journals split (version 1.0.0, cased raw strings):
```python
from datasets import load_dataset
ds = load_dataset("paniniDot/sci_lay") # default is 'all' journals
ds = load_dataset("paniniDot/sci_lay", "all") # the same as above
ds = load_dataset("paniniDot/sci_lay", "NC") # only 'NC' journal (Nature Communications)
ds = load_dataset("paniniDot/sci_lay", journals=["NC", "A"])  # multiple journals, if the loading script supports this argument
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
Each instance contains the fields `doi`, `pmcid`, `title`, `plain_text`, `technical_text`, `full_text`, `journal`, `topics`, and `keywords`, each extracted by scraping articles in XML and HTML format.
```
{
'doi': '10.3390/ani12040445',
'pmcid': 'PMC8868321',
'plain_text': 'PPP3CA is one of the candidate genes for goat reproduction, but no studies have been carried out yet. Therefore, the purpose of this study was to determine the associations between copy number variations in the goat PPP3CA gene and litter size and semen quality in goats, including Shaanbei white cashmere goats (SBWC) (n = 353) and Guizhou Heima (GZHM) goats (n = 64). Based on the association analysis, the results showed that only CNV1 (copy number variation 1) and CNV2 (copy number variation 2) were distinctly related to the first-birth litter size in female goats (p = 7.6802 × 10−11; p = 5.0895 × 10−9), and they were also significantly associated with the semen quality of SBWC goats (p < 0.05). These findings prove that the PPP3CA gene plays an important role in reproduction traits in goats.',
'technical_text': 'Copy number variations (CNVs) have many forms of variation structure, and they play an important role in the research of variety diversity, biological evolution and disease correlation. Since CNVs have a greater impact on gene regulation and expression, more studies are being finalized on CNVs in important livestock and poultry species. The protein phosphatase 3 catalytic subunit alpha (PPP3CA) is a key candidate gene involved in the goat fecundity trait, and has important effects on precocious puberty, estrogen signal transduction pathways and oocyte meiosis. Additionally, PPP3CA also has a dephosphorylation effect in the process of spermatogonial stem cell meiosis and spermatogenesis. So far, there is no research on the relationship between the copy number variations of the PPP3CA gene and reproduction traits. Therefore, the purpose of this study was to determine the association between copy number variations in the goat PPP3CA gene and litter size and semen quality in Shaanbei white cashmere goats (SBWC) (n = 353) and Guizhou Heima goats (n = 64). Based on the association analysis, the results showed that only CNV1 and CNV2 within the PPP3CA gene were distinctly related to the first-birth litter size in female goats (p = 7.6802 × 10−11; p = 5.0895 × 10−9, respectively) and they were also significantly associated with the semen quality of SBWC goats (p < 0.05). In addition, individuals with Loss genotypes demonstrated better phenotypic performance compared to those with other types. Therefore, CNV1 and CNV2 of the PPP3CA gene are potentially useful for breeding, as they are linked to important goat reproduction traits.',
'full_text': '...',
'journal': 'Animals : an Open Access Journal from MDPI',
'topics': [ 'Article' ],
'keywords': [ 'goat', 'PPP3CA', 'copy number variation (CNV)', 'litter size', 'semen quality' ]
}
```
### Data Fields
- `doi`: Digital Object Identifier, a unique alphanumeric string assigned to a digital document such as a research paper, article, or dataset. Not all instances have it.
- `pmcid`: A unique identifier in the [PubMed Central library](https://www.ncbi.nlm.nih.gov/pmc/) database. Not all instances have it.
- `plain_text`: The summary of the article in plain English.
- `technical_text`: The abstract of the article.
- `full_text`: The complete article.
- `journal`: The journal that published the article.
- `topics`: An object containing the types in which the article is classified (e.g. Research Article, Review, etc.). Not all instances have it.
- `keywords`: An object containing the keywords of the article. Not all instances have it.
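Since several of these fields can be absent, a small filter helper is handy. This is a sketch: the assumption that absent fields surface as `None` or an empty value should be checked against real records.

```python
def has_metadata(example, fields=("doi", "pmcid", "keywords")):
    # True only if every requested field is present and non-empty.
    # Absent fields are assumed to surface as None or an empty value.
    return all(example.get(f) not in (None, "", []) for f in fields)
```

With the `datasets` library, `ds["train"].filter(has_metadata)` would keep only records that carry the requested metadata.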
### Data Splits
| | train | validation | test |
|-------|-------|------------|------|
| all | 35026 | 4380 | 4384 |
| NC | 5549 | 694 | 694 |
| A | 3909 | 489 | 489 |
| PLGEN | 3087 | 386 | 386 |
| PLPAT | 2920 | 365 | 365 |
| PLCB | 2589 | 324 | 324 |
| PLNTD | 2289 | 286 | 287 |
| B | 1617 | 202 | 203 |
| I | 1181 | 148 | 148 |
| PLB | 896 | 112 | 113 |
| CB | 867 | 108 | 109 |
| SD | 725 | 91 | 91 |
| MBIO | 607 | 76 | 76 |
| C | 6782 | 848 | 848 |
| OTHER | 2008 | 251 | 251 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
| 17,672 | [...] | |
rookshanks/dart | 2023-09-28T02:35:11.000Z | [
"region:us"
] | rookshanks | null | null | 0 | 112 | 2023-09-28T02:10:24 | ---
dataset_info:
features:
- name: context
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 15361709
num_examples: 62659
- name: validation
num_bytes: 1895789
num_examples: 6980
- name: test
num_bytes: 3429190
num_examples: 12552
download_size: 1145768
dataset_size: 20686688
---
# Dataset Card for "dart"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 518 | [...] |
wangqi777/samantha-data | 2023-11-01T14:46:38.000Z | [
"license:apache-2.0",
"region:us"
] | wangqi777 | Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". | @article{ehartford,
title={Samantha: A Personal Assistant},
author={ehartford},
year={2022}
} | 0 | 112 | 2023-10-25T15:35:18 | ---
license: apache-2.0
---
The dataset is borrowed from [ehartford/samantha-data](https://huggingface.co/datasets/ehartford/samantha-data).
I added a Hugging Face `datasets` loading script so the data can be loaded for training.
The script tells the `datasets` package how to load the data. It also splits the data into 'train', 'validation' and 'test' sets with an 80:15:5 ratio.
For reference and tests, see this notebook: "[Colab](https://colab.research.google.com/drive/17v-F1Z10MzIETryppXMQJOHKpR17bmHe#scrollTo=oK_wLOOBb7q0&uniqifier=1)"
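An 80:15:5 split can be sketched as a seeded shuffle over example indices; the seed and the index-based approach are illustrative assumptions, not necessarily what the loading script does internally.

```python
import random

def split_indices(n, ratios=(0.80, 0.15, 0.05), seed=42):
    # Deterministically shuffle indices, then cut them into
    # train/validation/test partitions at the given ratios.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```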
--------- Below is the original dataset card --------------
# samantha-data
[Meet Samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
## Installation
```
yarn install
```
## Usage
1. Create a `.env` file in the root directory of the project and add the following:
```
OPENAI_API_KEY=<your api key>
```
2. Run the application
```
npx ts-node --files src/index.ts --subject random --out data/random_conversations.jsonl
```
the subjects I provided include:
- random
- advice
- cot
- flirty
- howto
- joke
- math
- philosophy
- foundational
- recipe
- therapy
- troll
you can easily add your own in src/index.ts
## Scale
The application can be scaled by running multiple instances of the application in parallel. I recommend outputting to a different file for each instance, to prevent collision. I usually have one for each subject, about 5 or 6 instances at a time.
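A minimal sketch of that setup is a shell loop that backgrounds one generator per subject, each with its own output file so the runs cannot collide (the subject list below is a subset; adjust as needed):

```shell
# One generator per subject, each writing to its own file.
for subject in random advice cot flirty howto joke; do
  npx ts-node --files src/index.ts \
    --subject "$subject" \
    --out "data/${subject}_conversations.jsonl" &
done
wait  # block until every background instance finishes
```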
| 1,799 | [...] |
opinosis | 2023-04-05T13:36:20.000Z | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"abstractive-summarization",
"region:us"
] | null | The Opinosis Opinion Dataset consists of sentences extracted from reviews for 51 topics.
Topics and opinions are obtained from Tripadvisor, Edmunds.com and Amazon.com. | @inproceedings{ganesan2010opinosis,
title={Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions},
author={Ganesan, Kavita and Zhai, ChengXiang and Han, Jiawei},
booktitle={Proceedings of the 23rd International Conference on Computational Linguistics},
pages={340--348},
year={2010},
organization={Association for Computational Linguistics}
} | 1 | 111 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Opinosis
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: opinosis
tags:
- abstractive-summarization
dataset_info:
features:
- name: review_sents
dtype: string
- name: summaries
sequence: string
splits:
- name: train
num_bytes: 741270
num_examples: 51
download_size: 757398
dataset_size: 741270
---
# Dataset Card for "opinosis"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://kavita-ganesan.com/opinosis-opinion-dataset/
- **Repository:** https://github.com/kavgan/opinosis-summarization
- **Paper:** [Opinosis: A Graph Based Approach to Abstractive Summarization of Highly Redundant Opinions](https://aclanthology.org/C10-1039/)
- **Point of Contact:** [Kavita Ganesan](mailto:kavita@opinosis.ai)
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.74 MB
- **Total amount of disk used:** 1.50 MB
### Dataset Summary
The Opinosis Opinion Dataset consists of sentences extracted from reviews for 51 topics.
Topics and opinions are obtained from Tripadvisor, Edmunds.com and Amazon.com.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.74 MB
- **Total amount of disk used:** 1.50 MB
An example of 'train' looks as follows.
```
{
"review_sents": "This is a fake topic. \nThe topics have multiple sentence inputs. \n",
"summaries": ["This is a gold summary for topic 1. \nSentences in gold summaries are separated by newlines.", "This is another gold summary for topic 1. \nSentences in gold summaries are separated by newlines."]
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `review_sents`: a `string` feature.
- `summaries`: a `list` of `string` features.
### Data Splits
| name |train|
|-------|----:|
|default| 51|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The license for this dataset is Apache License 2.0 and can be found [here](https://github.com/kavgan/opinosis-summarization/blob/master/LICENSE).
### Citation Information
```
@inproceedings{ganesan2010opinosis,
title={Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions},
author={Ganesan, Kavita and Zhai, ChengXiang and Han, Jiawei},
booktitle={Proceedings of the 23rd International Conference on Computational Linguistics},
pages={340--348},
year={2010},
organization={Association for Computational Linguistics}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. | 5,970 | [...] |
opus_xhosanavy | 2022-11-03T16:08:13.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:xh",
"license:unknown",
"region:us"
]
author: null
description: This dataset is designed for machine translation from English to Xhosa.
citation: J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
likes: 3 | downloads: 111 | created: 2022-03-02T23:29:22
card:
---
annotations_creators:
- found
language_creators:
- found
language:
- en
- xh
license:
- unknown
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusXhosanavy
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- xh
config_name: en-xh
splits:
- name: train
num_bytes: 9654422
num_examples: 49982
download_size: 3263865
dataset_size: 9654422
---
# Dataset Card for OpusXhosanavy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[XhosaNavy](http://opus.nlpl.eu/XhosaNavy-v1.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This corpus is part of OPUS, the open collection of parallel corpora.
OPUS website: http://opus.nlpl.eu
### Supported Tasks and Leaderboards
The underlying task is machine translation from English to Xhosa.
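Per the `dataset_info` features in the front matter, each example exposes a single `translation` field holding a dict keyed by language code. A minimal access sketch (the sentence pair below is invented for illustration):

```python
# Illustrative example in the declared `translation` feature shape.
example = {"translation": {"en": "Good morning.", "xh": "Molo."}}

en_text = example["translation"]["en"]  # English side
xh_text = example["translation"]["xh"]  # Xhosa side
print(f"{en_text} -> {xh_text}")
```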
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@spatil6](https://github.com/spatil6) for adding this dataset.
card_len: 3,324
embeddings: [
[
-0.031982421875,
-0.0300140380859375,
0.01174163818359375,
0.024871826171875,
-0.01898193359375,
0.01543426513671875,
-0.037139892578125,
-0.0250701904296875,
0.044769287109375,
0.039825439453125,
-0.055877685546875,
-0.08160400390625,
-0.0538330078125,
0.02... |
id: tamilmixsentiment
lastModified: 2023-06-16T13:07:45.000Z
tags: [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:ta",
"license:unknown",
"regio...
author: null
description: The first gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. Train: 11,335 Validation: 1,260 and Test: 3,149. This makes the largest general domain sentiment dataset for this relatively low-resource language with code-mixing phenomenon. The dataset contains all the three types of code-mixed sentences - Inter-Sentential switch, Intra-Sentential switch and Tag switching. Most comments were written in Roman script with either Tamil grammar with English lexicon or English grammar with Tamil lexicon. Some comments were written in Tamil script with English expressions in between.
citation: @inproceedings{chakravarthi-etal-2020-corpus,
title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text",
author = "Chakravarthi, Bharathi Raja and
Muralidaran, Vigneshwaran and
Priyadharshini, Ruba and
McCrae, John Philip",
booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources association",
url = "https://www.aclweb.org/anthology/2020.sltu-1.28",
pages = "202--210",
abstract = "Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",
language = "English",
ISBN = "979-10-95546-35-1",
}
likes: 0 | downloads: 111 | created: 2022-03-02T23:29:22
card:
---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- ta
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Tamilmixsentiment
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Positive
'1': Negative
'2': Mixed_feelings
'3': unknown_state
'4': not-Tamil
splits:
- name: train
num_bytes: 790132
num_examples: 11335
- name: validation
num_bytes: 89618
num_examples: 1260
- name: test
num_bytes: 218764
num_examples: 3149
download_size: 1150792
dataset_size: 1098514
---
# Dataset Card for Tamilmixsentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tamilmixsentiment Homepage](https://dravidian-codemix.github.io/2020/index.html)
- **Repository:** [Tamilmixsentiment repository](https://dravidian-codemix.github.io/2020/datasets.html)
- **Paper:** [Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text](https://www.aclweb.org/anthology/2020.sltu-1.28/)
- **Leaderboard:** [Rank list](https://drive.google.com/file/d/1Mf8-No-63koGRwdF13RrO01NAFBlNmI0/view?usp=sharing)
- **Point of Contact:** [Bharathi Raja Chakravarthi](mailto:bharathiraja.akr@gmail.com)
### Dataset Summary
The first gold-standard Tamil-English code-switched, sentiment-annotated corpus, containing 15,744 comment posts from YouTube. This makes it the largest general-domain sentiment dataset for this relatively low-resourced language exhibiting the code-mixing phenomenon. A comment/post may contain more than one sentence, but the average length of an entry is one sentence. Each comment/post is annotated with sentiment polarity at the comment/post level. The dataset also has a class-imbalance problem, reflecting real-world scenarios.
### Supported Tasks and Leaderboards
The task is to identify the sentiment polarity of code-mixed Tamil-English comments/posts collected from social media.
### Languages
Tamil-English code-switched. The dataset contains all the three types of code-mixed sentences - Inter-Sentential switch, Intra-Sentential switch and Tag switching. Most comments were written in Roman script with either Tamil grammar with English lexicon or English grammar with Tamil lexicon. Some comments were written in Tamil script with English expressions in between.
## Dataset Structure
### Data Instances
An example from the Tamilmixsentiment train set looks as follows:
```
text label
Trailer late ah parthavanga like podunga Positive
```
### Data Fields
- `text`: Tamil-English code-mixed comment.
- `label`: the sentiment label, one of "Positive", "Negative", "Mixed_feelings", "unknown_state" or "not-Tamil"
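The integer ids in `label` follow the `class_label` names declared in the YAML front matter above. A minimal decoding sketch (the `ID2LABEL` mapping and `decode` helper are illustrative, not part of the dataset loader):

```python
# Mapping from integer label ids to names, as declared in the front matter.
ID2LABEL = {
    0: "Positive",
    1: "Negative",
    2: "Mixed_feelings",
    3: "unknown_state",
    4: "not-Tamil",
}

def decode(example):
    """Return a copy of the example with a human-readable label."""
    return {"text": example["text"], "label": ID2LABEL[example["label"]]}

decoded = decode({"text": "Trailer late ah parthavanga like podunga", "label": 0})
print(decoded["label"])  # Positive
```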
### Data Splits
The entire dataset of 15,744 sentences was randomly shuffled and split into three parts as follows:
| | train | validation | test |
|------------------------------|------:|-----------:|-----:|
| Tamilmixsentiment | 11335 | 1260 | 3149 |
## Dataset Creation
### Curation Rationale
Sentiment analysis has become important in social media research (Yang and Eisenstein, 2017). Until recently, these applications were created for high-resourced languages and analysed monolingual utterances. But social media in multilingual communities contains more code-mixed text. Code-mixing is common among speakers in a bilingual speech community. As English is seen as the language of prestige and education, the influence of lexicon, connectives and phrases from the English language is common in spoken Tamil. Tamil has little annotated data for code-mixed scenarios. An annotated corpus developed for monolingual data cannot deal with code-mixed usage, and therefore fails to yield good results due to the mixture of languages at different levels of linguistic analysis. This dataset, a code-mixed Tamil-English sentiment-annotated corpus, was therefore created.
### Source Data
#### Initial Data Collection and Normalization
The data was scraped from YouTube: in total, 184,573 Tamil-related sentences were collected from comments on trailers of movies released in 2019. Many of them were either entirely written in English, code-mixed Tamil-English, or fully written in Tamil. We therefore filtered for a code-mixed corpus based on comment-level language identification using the langdetect library: if a comment was written fully in Tamil or fully in English, we discarded it, since monolingual resources are already available for these languages. We also identified sentences written in other languages such as Hindi, Malayalam, Urdu, Telugu, and Kannada. We preprocessed the comments by removing emoticons and applying a sentence-length filter. To create a code-mixed corpus of reasonable size, with sentences that have fairly well-defined sentiments, the filter removed sentences with fewer than five words or more than 15 words after cleaning. In the end we obtained 15,744 Tanglish sentences.
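The sentence-length filter described above can be sketched as follows (the sample comments are invented; language identification with `langdetect` is omitted):

```python
# Keep only comments with 5 to 15 words, mirroring the filter described above.
comments = [
    "Trailer late ah parthavanga like podunga",  # 6 words -> kept
    "super",                                     # too short -> dropped
    "word " * 20,                                # too long -> dropped
]
kept = [c for c in comments if 5 <= len(c.split()) <= 15]
print(len(kept))  # 1
```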
#### Who are the source language producers?
Youtube users
### Annotations
#### Annotation process
The annotation setup involved three steps. First, each sentence was annotated by two people. In the second step, the annotation was accepted if both annotators agreed. In cases of conflict, a third person annotated the sentence. In the third step, if all three of them disagreed, then two more annotators annotated the sentence.
#### Who are the annotators?
Eleven volunteers were involved in the process. All of them were native speakers of Tamil with diversity in gender, educational level and medium of instruction in their school education.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{chakravarthi-etal-2020-corpus,
title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text",
author = "Chakravarthi, Bharathi Raja and
Muralidaran, Vigneshwaran and
Priyadharshini, Ruba and
McCrae, John Philip",
booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources association",
url = "https://www.aclweb.org/anthology/2020.sltu-1.28",
pages = "202--210",
abstract = "Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",
language = "English",
ISBN = "979-10-95546-35-1",
}
```
### Contributions
Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset.
card_len: 9,175
embeddings: [
[
-0.040863037109375,
-0.0411376953125,
-0.01311492919921875,
0.04412841796875,
-0.052825927734375,
0.031402587890625,
-0.0302734375,
-0.005001068115234375,
0.040374755859375,
0.0157928466796875,
-0.0306396484375,
-0.05255126953125,
-0.04803466796875,
0.022232... |
id: LeverageX/klue-re
lastModified: 2022-01-10T07:43:15.000Z
tags: [
"region:us"
]
author: LeverageX
description: Klue Relation Extraction Data
citation: null
likes: 0 | downloads: 111 | created: 2022-03-02T23:29:22
card: Entry not found
card_len: 15
embeddings: [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
id: eugenesiow/BSD100
lastModified: 2022-10-26T02:20:22.000Z
tags: [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"image-super-resolution",
"region:us"
]
author: eugenesiow
description: BSD is a dataset used frequently for image denoising and super-resolution. BSD100 is the testing set of the Berkeley segmentation dataset BSD300.
citation: @inproceedings{martin2001database,
title={A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics},
author={Martin, David and Fowlkes, Charless and Tal, Doron and Malik, Jitendra},
booktitle={Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001},
volume={2},
pages={416--423},
year={2001},
organization={IEEE}
}
likes: 0 | downloads: 111 | created: 2022-03-02T23:29:22
card:
---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: BSD100
tags:
- image-super-resolution
---
# Dataset Card for BSD100
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/
- **Repository**: https://huggingface.co/datasets/eugenesiow/BSD100
- **Paper**: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=937655
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
BSD is a dataset used frequently for image denoising and super-resolution. Of its subdatasets, BSD100 is a classical image dataset with 100 test images, proposed by [Martin et al. (2001)](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=937655). The dataset is composed of a large variety of images, ranging from natural images to object-specific ones such as plants, people, and food. BSD100 is the testing set of the Berkeley segmentation dataset BSD300.
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/BSD100', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/BSD100_HR/3096.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/BSD100_LR_x2/3096.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|100|
|bicubic_x3|100|
|bicubic_x4|100|
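Each configuration pairs an HR image with an LR image downscaled by the named factor; for example, under `bicubic_x2` the LR image has half the HR resolution in each dimension. A minimal sketch (the image size below is illustrative):

```python
# Relationship between HR and LR sizes for a given bicubic scale factor.
hr_size = (480, 320)   # illustrative HR width x height
scale = 2              # corresponds to the bicubic_x2 configuration
lr_size = (hr_size[0] // scale, hr_size[1] // scale)
print(lr_size)  # (240, 160)
```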
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Martin et al. (2001)](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=937655)
### Licensing Information
You are free to download a portion of the dataset for non-commercial research and educational purposes.
In exchange, we request only that you make available to us the results of running your segmentation or
boundary detection algorithm on the test set as described below. Work based on the dataset should cite
the [Martin et al. (2001)](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=937655) paper.
### Citation Information
```bibtex
@inproceedings{martin2001database,
title={A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics},
author={Martin, David and Fowlkes, Charless and Tal, Doron and Malik, Jitendra},
booktitle={Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001},
volume={2},
pages={416--423},
year={2001},
organization={IEEE}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
card_len: 5,596
embeddings: [
[
-0.060699462890625,
-0.0419921875,
0.01409912109375,
0.016937255859375,
-0.0195770263671875,
-0.0072021484375,
-0.0002027750015258789,
-0.035400390625,
0.020538330078125,
0.0192108154296875,
-0.052001953125,
-0.05487060546875,
-0.0242462158203125,
0.00888061... |
id: lewtun/asr_dummy
lastModified: 2021-07-13T13:12:38.000Z
tags: [
"region:us"
]
author: lewtun
description: Self-supervised learning (SSL) has proven vital for advancing research in
natural language processing (NLP) and computer vision (CV). The paradigm
pretrains a shared model on large volumes of unlabeled data and achieves
state-of-the-art (SOTA) for various tasks with minimal adaptation. However, the
speech processing community lacks a similar setup to systematically explore the
paradigm. To bridge this gap, we introduce Speech processing Universal
PERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the
performance of a shared model across a wide range of speech processing tasks
with minimal architecture changes and labeled data. Among multiple usages of the
shared model, we especially focus on extracting the representation learned from
SSL due to its preferable re-usability. We present a simple framework to solve
SUPERB tasks by learning task-specialized lightweight prediction heads on top of
the frozen shared model. Our results demonstrate that the framework is promising
as SSL representations show competitive generalizability and accessibility
across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a
benchmark toolkit to fuel the research in representation learning and general
speech processing.
Note that in order to limit the required storage for preparing this dataset, the
audio is stored in the .flac format and is not converted to a float32 array. To
convert, the audio file to a float32 array, please make use of the `.map()`
function as follows:
```python
import soundfile as sf
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
```
citation: @article{DBLP:journals/corr/abs-2105-01051,
author = {Shu{-}Wen Yang and
Po{-}Han Chi and
Yung{-}Sung Chuang and
Cheng{-}I Jeff Lai and
Kushal Lakhotia and
Yist Y. Lin and
Andy T. Liu and
Jiatong Shi and
Xuankai Chang and
Guan{-}Ting Lin and
Tzu{-}Hsien Huang and
Wei{-}Cheng Tseng and
Ko{-}tik Lee and
Da{-}Rong Liu and
Zili Huang and
Shuyan Dong and
Shang{-}Wen Li and
Shinji Watanabe and
Abdelrahman Mohamed and
Hung{-}yi Lee},
title = {{SUPERB:} Speech processing Universal PERformance Benchmark},
journal = {CoRR},
volume = {abs/2105.01051},
year = {2021},
url = {https://arxiv.org/abs/2105.01051},
archivePrefix = {arXiv},
eprint = {2105.01051},
timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
likes: 0 | downloads: 111 | created: 2022-03-02T23:29:22
card: Entry not found
card_len: 15
embeddings: [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
id: liuhaotian/LLaVA-Pretrain
lastModified: 2023-07-06T08:47:38.000Z
tags: [
"language:en",
"license:other",
"region:us"
]
author: liuhaotian
description: null
citation: null
likes: 24 | downloads: 111 | created: 2023-05-02T23:55:26
card:
---
license: other
language:
- en
pretty_name: LLaVA Pretrain
---
# LLaVA Visual Instruct Pretrain Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct Pretrain LCS-558K is a subset of LAION/CC/SBU dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic caption](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct Pretrain LCS-558K was created in May 2023.
**Dataset structure:**
- `blip_laion_cc_sbu_558k.json` contains the multimodal synthesized conversation from the image-caption pairs, by adding randomly selected instructions like: "Describe this image". It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- `blip_laion_cc_sbu_558k_meta.json` contains the meta data of the image file name, image URL, synthetic BLIP caption.
- `images.zip` contains all raw images of the filtered subset from LAION/CC/SBU. Important notice: Upon the request from the community, as ~15% images of the original LAION/CC/SBU dataset are no longer accessible, we upload images.zip for better reproducing our work in research community. It should not be used for any other purpose. The use of these images must comply with the LAION/CC/SBU license. This may be taken down when requested by the original LAION/CC/SBU dataset owner or owners of the referenced images.
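A sketch of reading the metadata file described above with the standard `json` module. The record and its field names are assumptions for illustration; check `blip_laion_cc_sbu_558k_meta.json` for the actual keys:

```python
import json

# Hypothetical record shaped like the described metadata
# (image file name, image URL, synthetic BLIP caption).
meta_json = '[{"image": "example.jpg", "url": "https://example.com/example.jpg", "blip_caption": "a cat sitting on a mat"}]'

records = json.loads(meta_json)
for rec in records:
    print(rec["image"], "->", rec["blip_caption"])
```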
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Must comply with license of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE), [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic caption).
CC-3M
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
card_len: 2,681
embeddings: [
[
-0.0111846923828125,
-0.04034423828125,
0.0222015380859375,
0.0150299072265625,
-0.04119873046875,
0.0036144256591796875,
-0.0115509033203125,
-0.037933349609375,
0.0208740234375,
0.03924560546875,
-0.06085205078125,
-0.041717529296875,
-0.03302001953125,
0.... |
id: HumanCompatibleAI/ppo-seals-HalfCheetah-v0
lastModified: 2023-05-29T09:52:45.000Z
tags: [
"region:us"
]
author: HumanCompatibleAI
description: null
citation: null
likes: 0 | downloads: 111 | created: 2023-05-29T09:51:59
card:
---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float64
splits:
- name: train
num_bytes: 89536876
num_examples: 104
download_size: 24489478
dataset_size: 89536876
---
# Dataset Card for "ppo-seals-HalfCheetah-v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
card_len: 549
embeddings: [
[
-0.031982421875,
-0.0006499290466308594,
0.018646240234375,
0.0149993896484375,
-0.0298004150390625,
0.004474639892578125,
0.0430908203125,
-0.011474609375,
0.0623779296875,
0.050048828125,
-0.05072021484375,
-0.046234130859375,
-0.047760009765625,
-0.011917... |
id: alzoubi36/policy_detection
lastModified: 2023-06-24T06:26:17.000Z
tags: [
"region:us"
]
author: alzoubi36
description: null
citation: null
likes: 0 | downloads: 111 | created: 2023-06-24T06:21:33
card:
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 8258295
num_examples: 773
- name: validation
num_bytes: 1340647
num_examples: 137
- name: test
num_bytes: 3702713
num_examples: 391
download_size: 6887636
dataset_size: 13301655
---
# Dataset for the policy detection task in the [PrivacyGLUE](https://github.com/infsys-lab/privacy-glue) dataset
card_len: 460
embeddings: [
[
-0.0178680419921875,
-0.0257415771484375,
0.016204833984375,
0.0093994140625,
0.0290374755859375,
0.010772705078125,
0.005268096923828125,
0.0021686553955078125,
0.01280975341796875,
0.049835205078125,
-0.0654296875,
-0.0660400390625,
-0.02581787109375,
-0.0... |
id: atomic
lastModified: 2022-11-18T18:56:37.000Z
tags: [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"common-sense-if-then-reasoning",
"region:us"
]
author: null
description: This dataset provides the template sentences and
relationships defined in the ATOMIC common sense dataset. There are
three splits - train, test, and dev.
From the authors.
Disclaimer/Content warning: the events in atomic have been
automatically extracted from blogs, stories and books written at
various times. The events might depict violent or problematic actions,
which we left in the corpus for the sake of learning the (probably
negative but still important) commonsense implications associated with
the events. We removed a small set of truly out-dated events, but
might have missed some so please email us (msap@cs.washington.edu) if
you have any concerns.
citation: @article{Sap2019ATOMICAA,
title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},
author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},
journal={ArXiv},
year={2019},
volume={abs/1811.00146}
}
likes: 6 | downloads: 110 | created: 2022-03-02T23:29:22
card:
---
pretty_name: ATOMIC
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: atomic
tags:
- common-sense-if-then-reasoning
dataset_info:
features:
- name: event
dtype: string
- name: oEffect
sequence: string
- name: oReact
sequence: string
- name: oWant
sequence: string
- name: xAttr
sequence: string
- name: xEffect
sequence: string
- name: xIntent
sequence: string
- name: xNeed
sequence: string
- name: xReact
sequence: string
- name: xWant
sequence: string
- name: prefix
sequence: string
- name: split
dtype: string
config_name: atomic
splits:
- name: train
num_bytes: 32441878
num_examples: 202271
- name: test
num_bytes: 3995624
num_examples: 24856
- name: validation
num_bytes: 3629768
num_examples: 22620
download_size: 19083782
dataset_size: 40067270
---
# Dataset Card for An Atlas of Machine Commonsense for If-Then Reasoning - Atomic Common Sense Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://homes.cs.washington.edu/~msap/atomic/
- **Repository:**
https://homes.cs.washington.edu/~msap/atomic/
- **Paper:**
Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith & Yejin Choi (2019). ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. AAAI
### Dataset Summary
This dataset provides the template sentences and
relationships defined in the ATOMIC common sense dataset. There are
three splits - train, test, and dev.
From the authors.
Disclaimer/Content warning: the events in atomic have been
automatically extracted from blogs, stories and books written at
various times. The events might depict violent or problematic actions,
which we left in the corpus for the sake of learning the (probably
negative but still important) commonsense implications associated with
the events. We removed a small set of truly out-dated events, but
might have missed some so please email us (msap@cs.washington.edu) if
you have any concerns.
For more information, see: https://homes.cs.washington.edu/~msap/atomic/
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
en
## Dataset Structure
### Data Instances
Here is one example from the atomic dataset:
```
{'event': "PersonX uses PersonX's ___ to obtain", 'oEffect': [], 'oReact': ['annoyed', 'angry', 'worried'], 'oWant': [], 'prefix': ['uses', 'obtain'], 'split': 'trn', 'xAttr': [], 'xEffect': [], 'xIntent': ['to have an advantage', 'to fulfill a desire', 'to get out of trouble'], 'xNeed': [], 'xReact': ['pleased', 'smug', 'excited'], 'xWant': []}
```
### Data Fields
Notes from the authors:
* event: just a string representation of the event.
* oEffect,oReact,oWant,xAttr,xEffect,xIntent,xNeed,xReact,xWant: annotations for each of the dimensions, stored in a json-dumped string.
Note: "none" means the worker explicitly responded with the empty response, whereas [] means the worker did not annotate this dimension.
* prefix: json-dumped string that represents the prefix of content words (used to make a better trn/dev/tst split).
* split: string rep of which split the event belongs to.
### Data Splits
The atomic dataset has three splits: train, test, and dev.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to assist in common sense reasoning.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
created by the University of Washington using crowdsourced data.
#### Who are the source language producers?
The ATOMIC authors and crowd workers.
### Annotations
#### Annotation process
Human annotations directed by forms.
#### Who are the annotators?
Human annotators.
### Personal and Sensitive Information
Unknown, but likely none.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to help machines understand common sense.
### Discussion of Biases
Since the data is annotated by humans, it is likely to be biased. From the authors:
Disclaimer/Content warning: the events in atomic have been automatically extracted from blogs, stories and books written at various times. The events might depict violent or problematic actions, which we left in the corpus for the sake of learning the (probably negative but still important) commonsense implications associated with the events. We removed a small set of truly out-dated events, but might have missed some so please email us (msap@cs.washington.edu) if you have any concerns.
### Other Known Limitations
While there are many relationships, the data is quite sparse. Also, each item of the dataset could be expanded into multiple sentences along the various dimensions (oEffect, oReact, etc.).
For example, given event: "PersonX uses PersonX's ___ to obtain" and dimension oReact: "annoyed", this could be transformed into an entry:
"PersonX uses PersonX's ___ to obtain => PersonY is annoyed"
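A minimal sketch of that expansion in plain Python (the per-dimension phrasings below are our own illustration, not the authors' templates):

```python
# Sketch (ours): expand an ATOMIC entry into one "event => implication"
# sentence per annotated dimension value. The phrasing templates below are
# illustrative assumptions, not the official ATOMIC templates.

TEMPLATES = {
    "oReact": "PersonY is {}",
    "oEffect": "PersonY {}",
    "oWant": "PersonY wants {}",
    "xAttr": "PersonX is seen as {}",
    "xEffect": "PersonX {}",
    "xIntent": "PersonX intended {}",
    "xNeed": "PersonX needed {}",
    "xReact": "PersonX feels {}",
    "xWant": "PersonX wants {}",
}

def expand(entry):
    """Return one sentence per (dimension, value) pair, skipping empty answers."""
    sentences = []
    for dim, template in TEMPLATES.items():
        for value in entry.get(dim, []):
            if value and value != "none":  # "none" = explicit empty response
                sentences.append(f"{entry['event']} => {template.format(value)}")
    return sentences

example = {
    "event": "PersonX uses PersonX's ___ to obtain",
    "oReact": ["annoyed", "angry", "worried"],
    "xReact": ["pleased", "smug", "excited"],
}
for sentence in expand(example):
    print(sentence)
```

The first printed line is exactly the "PersonY is annoyed" entry quoted above.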
## Additional Information
### Dataset Curators
The authors of ATOMIC at the University of Washington.
### Licensing Information
This dataset is licensed under the Creative Commons Attribution 4.0 International License: https://creativecommons.org/licenses/by/4.0/
### Citation Information
```
@article{Sap2019ATOMICAA,
title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},
author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},
journal={ArXiv},
year={2019},
volume={abs/1811.00146}
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. | 7,007 | [
[
-0.0391845703125,
-0.04998779296875,
0.0474853515625,
-0.031494140625,
-0.01349639892578125,
-0.0223846435546875,
-0.0158233642578125,
-0.0330810546875,
0.007747650146484375,
0.01503753662109375,
-0.034637451171875,
-0.058197021484375,
-0.03289794921875,
0.0... |
casino | 2022-11-03T16:16:00.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"languag... | null | We provide a novel dataset (referred to as CaSiNo) of 1030 negotiation dialogues. Two participants take the role of campsite neighbors and negotiate for Food, Water, and Firewood packages, based on their individual preferences and requirements. This design keeps the task tractable, while still facilitating linguistically rich and personal conversations. This helps to overcome the limitations of prior negotiation datasets such as Deal or No Deal and Craigslist Bargain. Each dialogue consists of rich meta-data including participant demographics, personality, and their subjective evaluation of the negotiation in terms of satisfaction and opponent likeness. | @inproceedings{chawla2021casino,
title={CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems},
author={Chawla, Kushal and Ramirez, Jaysa and Clever, Rene and Lucas, Gale and May, Jonathan and Gratch, Jonathan},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
pages={3167--3185},
year={2021}
} | 3 | 110 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- conversational
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
pretty_name: Campsite Negotiation Dialogues
paperswithcode_id: casino
dataset_info:
features:
- name: chat_logs
list:
- name: text
dtype: string
- name: task_data
struct:
- name: data
dtype: string
- name: issue2youget
struct:
- name: Firewood
dtype: string
- name: Water
dtype: string
- name: Food
dtype: string
- name: issue2theyget
struct:
- name: Firewood
dtype: string
- name: Water
dtype: string
- name: Food
dtype: string
- name: id
dtype: string
- name: participant_info
struct:
- name: mturk_agent_1
struct:
- name: value2issue
struct:
- name: Low
dtype: string
- name: Medium
dtype: string
- name: High
dtype: string
- name: value2reason
struct:
- name: Low
dtype: string
- name: Medium
dtype: string
- name: High
dtype: string
- name: outcomes
struct:
- name: points_scored
dtype: int32
- name: satisfaction
dtype: string
- name: opponent_likeness
dtype: string
- name: demographics
struct:
- name: age
dtype: int32
- name: gender
dtype: string
- name: ethnicity
dtype: string
- name: education
dtype: string
- name: personality
struct:
- name: svo
dtype: string
- name: big-five
struct:
- name: extraversion
dtype: float32
- name: agreeableness
dtype: float32
- name: conscientiousness
dtype: float32
- name: emotional-stability
dtype: float32
- name: openness-to-experiences
dtype: float32
- name: mturk_agent_2
struct:
- name: value2issue
struct:
- name: Low
dtype: string
- name: Medium
dtype: string
- name: High
dtype: string
- name: value2reason
struct:
- name: Low
dtype: string
- name: Medium
dtype: string
- name: High
dtype: string
- name: outcomes
struct:
- name: points_scored
dtype: int32
- name: satisfaction
dtype: string
- name: opponent_likeness
dtype: string
- name: demographics
struct:
- name: age
dtype: int32
- name: gender
dtype: string
- name: ethnicity
dtype: string
- name: education
dtype: string
- name: personality
struct:
- name: svo
dtype: string
- name: big-five
struct:
- name: extraversion
dtype: float32
- name: agreeableness
dtype: float32
- name: conscientiousness
dtype: float32
- name: emotional-stability
dtype: float32
- name: openness-to-experiences
dtype: float32
- name: annotations
list:
list: string
splits:
- name: train
num_bytes: 3211555
num_examples: 1030
download_size: 4300019
dataset_size: 3211555
---
# Dataset Card for Casino
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github: Kushal Chawla CaSiNo](https://github.com/kushalchawla/CaSiNo)
- **Paper:** [CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems](https://aclanthology.org/2021.naacl-main.254.pdf)
- **Point of Contact:** [Kushal Chawla](kchawla@usc.edu)
### Dataset Summary
We provide a novel dataset (referred to as CaSiNo) of 1030 negotiation dialogues. Two participants take the role of campsite neighbors and negotiate for Food, Water, and Firewood packages, based on their individual preferences and requirements. This design keeps the task tractable, while still facilitating linguistically rich and personal conversations. This helps to overcome the limitations of prior negotiation datasets such as Deal or No Deal and Craigslist Bargain. Each dialogue consists of rich meta-data including participant demographics, personality, and their subjective evaluation of the negotiation in terms of satisfaction and opponent likeness.
### Supported Tasks and Leaderboards
Train end-to-end models for negotiation
### Languages
English
## Dataset Structure
### Data Instances
```
{
"chat_logs": [
{
"text": "Hello! \ud83d\ude42 Let's work together on a deal for these packages, shall we? What are you most interested in?",
"task_data": {},
"id": "mturk_agent_1"
},
...
],
"participant_info": {
"mturk_agent_1":
{
"value2issue": ...
"value2reason": ...
"outcomes": ...
"demographics": ...
"personality": ...
},
"mturk_agent_2": ...
},
"annotations": [
["Hello! \ud83d\ude42 Let's work together on a deal for these packages, shall we? What are you most interested in?", "promote-coordination,elicit-pref"],
...
]
}
```
### Data Fields
- `chat_logs`: The negotiation dialogue between two participants
- `text`: The dialogue utterance
- `task_data`: Meta-data associated with the utterance such as the deal submitted by a participant
- `id`: The ID of the participant who typed this utterance
- `participant_info`: Meta-data about the two participants in this conversation
- `mturk_agent_1`: For the first participant (Note that 'first' is just for reference. There is no order between the participants and any participant can start the conversation)
- `value2issue`: The priority order of this participant among Food, Water, Firewood
- `value2reason`: The personal arguments given by the participants themselves, consistent with the above preference order. This preference order and these arguments were submitted before the negotiation began.
- `outcomes`: The negotiation outcomes for this participant including objective and subjective assessment.
- `demographics`: Demographic attributes of the participant in terms of age, gender, ethnicity, and education.
- `personality`: Personality attributes for this participant, in terms of Big-5 and Social Value Orientation
- `mturk_agent_2`: For the second participant; follows the same structure as above
- `annotations`: Strategy annotations for each utterance in the dialogue, wherever available. The first element represents the utterance and the second represents a comma-separated list of all strategies present in that utterance.
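A minimal sketch of splitting those annotation pairs into (utterance, strategy-list) tuples (the helper name is ours, not part of the release):

```python
# Sketch (ours): split CaSiNo strategy annotations into (utterance,
# strategy-list) pairs. The sample annotation is the one from the data
# instance above; `parse_annotations` is our name, not part of the release.

def parse_annotations(annotations):
    """Return (utterance, [strategies]) tuples from annotation pairs."""
    return [
        (utterance, strategies.split(","))
        for utterance, strategies in annotations
    ]

annotations = [
    ["Hello! \U0001F642 Let's work together on a deal for these packages, "
     "shall we? What are you most interested in?",
     "promote-coordination,elicit-pref"],
]
for utterance, strategies in parse_annotations(annotations):
    print(strategies)  # ['promote-coordination', 'elicit-pref']
```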
### Data Splits
No default data split has been provided. Hence, all 1030 data points are under the 'train' split.
| | Train |
| ----- | ----- |
| total dialogues | 1030 |
| annotated dialogues | 396 |
## Dataset Creation
### Curation Rationale
The dataset was collected to address the limitations in prior negotiation datasets from the perspective of downstream applications in pedagogy and conversational AI. Please refer to the original paper published at NAACL 2021 for details about the rationale and data curation steps ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)).
### Source Data
#### Initial Data Collection and Normalization
The dialogues were crowdsourced on Amazon Mechanical Turk. The strategy annotations were performed by expert annotators (first three authors of the paper). Please refer to the original dataset paper published at NAACL 2021 for more details ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)).
#### Who are the source language producers?
The primary producers are Turkers on Amazon Mechanical Turk platform. Two turkers were randomly paired with each other to engage in a negotiation via a chat interface. Please refer to the original dataset paper published at NAACL 2021 for more details ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)).
### Annotations
#### Annotation process
From the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf) for this dataset:
>Three expert annotators independently annotated 396 dialogues containing 4615 utterances. The annotation guidelines were iterated over a subset of 5 dialogues, while the reliability scores were computed on a different subset of 10 dialogues. We use the nominal form of Krippendorff’s alpha (Krippendorff, 2018) to measure the inter-annotator agreement. We provide the annotation statistics in Table 2. Although we release all the annotations, we skip Coordination and Empathy for our analysis in this work, due to higher subjectivity resulting in relatively lower reliability scores.
#### Who are the annotators?
Three expert annotators (first three authors of the paper).
### Personal and Sensitive Information
All personally identifiable information about the participants such as MTurk Ids or HIT Ids was removed before releasing the data.
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to Section 8.2 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf).
### Discussion of Biases
Please refer to Section 8.2 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf).
### Other Known Limitations
Please refer to Section 7 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf).
## Additional Information
### Dataset Curators
Corresponding Author: Kushal Chawla (`kchawla@usc.edu`)\
Affiliation: University of Southern California\
Please refer to the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf) for the complete author list.
### Licensing Information
The project is licensed under CC-by-4.0
### Citation Information
```
@inproceedings{chawla2021casino,
title={CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems},
author={Chawla, Kushal and Ramirez, Jaysa and Clever, Rene and Lucas, Gale and May, Jonathan and Gratch, Jonathan},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
pages={3167--3185},
year={2021}
}
```
### Contributions
Thanks to [Kushal Chawla](https://kushalchawla.github.io/) for adding this dataset. | 11,917 | [
[
-0.035797119140625,
-0.050933837890625,
0.01450347900390625,
0.005123138427734375,
-0.01447296142578125,
-0.00458526611328125,
-0.03497314453125,
-0.030426025390625,
0.03631591796875,
0.04534912109375,
-0.0262451171875,
-0.055633544921875,
-0.042816162109375,
... |
disfl_qa | 2022-11-18T19:58:47.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2106.... | null | Disfl-QA is a targeted dataset for contextual disfluencies in an information seeking setting,
namely question answering over Wikipedia passages. Disfl-QA builds upon the SQuAD-v2 (Rajpurkar et al., 2018)
dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as
a source of distractors.
The final dataset consists of ~12k (disfluent question, answer) pairs. Over 90% of the disfluencies are
corrections or restarts, making it a much harder test set for disfluency correction. Disfl-QA aims to fill a
major gap between speech and NLP research community. We hope the dataset can serve as a benchmark dataset for
testing robustness of models against disfluent inputs.
Our experiments reveal that the state-of-the-art models are brittle when subjected to disfluent inputs from
Disfl-QA. Detailed experiments and analyses can be found in our paper. | @inproceedings{gupta-etal-2021-disflqa,
title = "{Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering}",
author = "Gupta, Aditya and Xu, Jiacheng and Upadhyay, Shyam and Yang, Diyi and Faruqui, Manaal",
booktitle = "Findings of ACL",
year = "2021"
} | 1 | 110 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'DISFL-QA: A Benchmark Dataset for Understanding Disfluencies in Question
Answering'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
dataset_info:
features:
- name: squad_v2_id
dtype: string
- name: original question
dtype: string
- name: disfluent question
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 7712523
num_examples: 7182
- name: test
num_bytes: 3865097
num_examples: 3643
- name: validation
num_bytes: 1072731
num_examples: 1000
download_size: 48935038
dataset_size: 12650351
---
# Dataset Card for DISFL-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Disfl-QA](https://github.com/google-research-datasets/disfl-qa)
- **Paper:** [Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering](https://arxiv.org/pdf/2106.04016.pdf)
- **Point of Contact:** [disfl-qa team](disfl-qa@google.com)
### Dataset Summary
Disfl-QA is a targeted dataset for contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. Disfl-QA builds upon the SQuAD-v2 ([Rajpurkar et al., 2018](https://www.aclweb.org/anthology/P18-2124/)) dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as a source of distractors.
The final dataset consists of ~12k (disfluent question, answer) pairs. Over 90\% of the disfluencies are corrections or restarts, making it a much harder test set for disfluency correction. Disfl-QA aims to fill a major gap between speech and NLP research community. The authors hope the dataset can serve as a benchmark dataset for testing robustness of models against disfluent inputs.
The experiments reveal that the state-of-the-art models are brittle when subjected to disfluent inputs from Disfl-QA. Detailed experiments and analyses can be found in the [paper](https://arxiv.org/pdf/2106.04016.pdf).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English only.
## Dataset Structure
### Data Instances
This example was too long and was cropped:
```
{
"answers": {
"answer_start": [94, 87, 94, 94],
"text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"]
},
"context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...",
"id": "56ddde6b9a695914005b9629",
"original question": "When were the Normans in Normandy?",
"disfluent question": "From which countries no tell me when were the Normans in Normandy?",
"title": "Normans"
}
```
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `original question`: Original question from SQuAD-v2 (a `string` feature)
- `disfluent question`: Disfluent question from Disfl-QA (a `string` feature)
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
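Since the fields mirror SQuAD-v2, each `answer_start` offset should index its answer text inside `context`. A minimal consistency check, sketched on the cropped instance above (the escaped leading quote from the dump is dropped so the offsets line up):

```python
# Sketch (ours): check that each answer_start offset indexes its answer
# text in the context, SQuAD-style. Context is cropped as in the instance
# above, without the dump's leading escaped quote.

def spans_match(context, answers):
    """True if every (text, answer_start) pair indexes `context` correctly."""
    return all(
        context[start:start + len(text)] == text
        for text, start in zip(answers["text"], answers["answer_start"])
    )

example = {
    "context": (
        "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) "
        "were the people who in the 10th and 11th centuries gave thei..."
    ),
    "answers": {
        "answer_start": [94, 87, 94, 94],
        "text": ["10th and 11th centuries", "in the 10th and 11th centuries",
                 "10th and 11th centuries", "10th and 11th centuries"],
    },
}
print(spans_match(example["context"], example["answers"]))
```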
### Data Splits
Disfl-QA consists of ~12k disfluent questions with the following train/dev/test splits:
| File | Questions |
|-----|-----|
|train.json | 7182 |
|dev.json | 1000 |
|test.json | 3643 |
## Dataset Creation
### Curation Rationale
The research in NLP and speech community has been impeded by the lack of curated datasets containing such disfluencies. The datasets available today are mostly conversational in nature, and span a limited number of very specific domains (e.g., telephone conversations, court proceedings). Furthermore, only a small fraction of the utterances in these datasets contain disfluencies, with a limited and skewed distribution of disfluencies types. In the most popular dataset in the literature, the SWITCHBOARD corpus (Godfrey et al., 1992), only 5.9% of the words are disfluencies (Charniak and Johnson, 2001), of which > 50% are repetitions (Shriberg, 1996), which has been shown to be the relatively simpler form of disfluencies (Zayats et al., 2014; Jamshid Lou et al., 2018; Zayats et al., 2019). To fill this gap, the authors presented DISFL-QA, the first dataset containing contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages.
### Source Data
#### Initial Data Collection and Normalization
DISFL-QA is constructed by asking human raters to insert disfluencies in questions from SQUAD-v2, a popular question answering dataset, using the passage and remaining questions as context. These contextual disfluencies lend naturalness to DISFL-QA, and challenge models relying on shallow matching between question and context to predict an answer.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Each question associated with the paragraph is sent for a human annotation task to add a contextual disfluency using the paragraph as a source of distractors. Finally, to ensure the quality of the dataset, a subsequent round of human evaluation with an option to re-annotate is conducted.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Disfl-QA dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{gupta-etal-2021-disflqa,
title = "{Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering}",
author = "Gupta, Aditya and Xu, Jiacheng and Upadhyay, Shyam and Yang, Diyi and Faruqui, Manaal",
booktitle = "Findings of ACL",
year = "2021"
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. | 7,685 | [
[
-0.05047607421875,
-0.0849609375,
0.01552581787109375,
0.020721435546875,
0.0010128021240234375,
0.0207672119140625,
0.022979736328125,
-0.0275421142578125,
0.0142669677734375,
0.0045623779296875,
-0.06640625,
-0.028228759765625,
-0.035247802734375,
0.030273... |
ro_sts_parallel | 2022-11-18T21:42:26.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-sts-b",
"language:en",
"language:ro",
"license:cc-by-4.0",
"region:us"
] | null | The RO-STS-Parallel (a Parallel Romanian English dataset - translation of the Semantic Textual Similarity) contains 17256 sentences in Romanian and English. It is a high-quality translation of the English STS benchmark dataset into Romanian. | @inproceedings{dumitrescu2021liro,
title={Liro: Benchmark and leaderboard for romanian language tasks},
author={Dumitrescu, Stefan Daniel and Rebeja, Petru and Lorincz, Beata and Gaman, Mihaela and Avram, Andrei and Ilie, Mihai and Pruteanu, Andrei and Stan, Adriana and Rosia, Lorena and Iacobescu, Cristina and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},
year={2021}
} | 0 | 110 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
- ro
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-sts-b
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: RO-STS-Parallel
dataset_info:
- config_name: ro_sts_parallel
features:
- name: translation
dtype:
translation:
languages:
- ro
- en
splits:
- name: train
num_bytes: 1563909
num_examples: 11499
- name: validation
num_bytes: 443787
num_examples: 3001
- name: test
num_bytes: 347590
num_examples: 2759
download_size: 2251694
dataset_size: 2355286
- config_name: rosts-parallel-en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 1563909
num_examples: 11499
- name: validation
num_bytes: 443787
num_examples: 3001
- name: test
num_bytes: 347590
num_examples: 2759
download_size: 2251694
dataset_size: 2355286
---
# Dataset Card for RO-STS-Parallel
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/dumitrescustefan/RO-STS)
- **Repository:** [GitHub](https://github.com/dumitrescustefan/RO-STS)
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [email](dumitrescu.stefan@gmail.com)
### Dataset Summary
We present RO-STS-Parallel, a parallel Romanian-English dataset obtained by translating the [English STS benchmark](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) into Romanian. It contains 17,256 sentences in Romanian and English.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The texts in the dataset are in Romanian and English (`ro`, `en`).
## Dataset Structure
### Data Instances
An example looks like this:
```
{
'translation': {
'ro': 'Problema e si mai simpla.',
'en': 'The problem is simpler than that.'
}
}
```
### Data Fields
- translation:
- ro: text in Romanian
- en: text in English
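Since each record nests both sides under a single `translation` dict, aligned sentence lists can be pulled out with a pair of comprehensions. A minimal sketch over the instance shown above (the helper name is illustrative, not part of the dataset):

```python
def to_parallel_lists(records):
    """Collect the aligned Romanian/English sides of translation records."""
    ro = [r["translation"]["ro"] for r in records]
    en = [r["translation"]["en"] for r in records]
    return ro, en

# The example instance from the Data Instances section:
records = [{"translation": {"ro": "Problema e si mai simpla.",
                            "en": "The problem is simpler than that."}}]
ro_side, en_side = to_parallel_lists(records)
```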
### Data Splits
The train/validation/test splits contain 11,498/3,000/2,758 sentence pairs, respectively.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
To construct the dataset, we first obtained automatic translations using Google's translation engine. These were then manually checked, corrected, and cross-validated by human volunteers.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC BY-SA 4.0 License
### Citation Information
```
@inproceedings{dumitrescu2021liro,
title={Liro: Benchmark and leaderboard for romanian language tasks},
author={Dumitrescu, Stefan Daniel and Rebeja, Petru and Lorincz, Beata and Gaman, Mihaela and Avram, Andrei and Ilie, Mihai and Pruteanu, Andrei and Stan, Adriana and Rosia, Lorena and Iacobescu, Cristina and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},
year={2021}
}
```
### Contributions
Thanks to [@lorinczb](https://github.com/lorinczb) for adding this dataset.
Abirate/french_book_reviews | 2022-08-25T19:26:48.000Z | [task_categories:text-classification, task_ids:multi-label-classification, annotations_creators:expert-generated, language_creators:expert-generated, language_creators:crowdsourced, multilinguality:monolingual, source_datasets:original, language:fr, doi:10.57967/hf/1052, region:u...] | Abirate | null | null | 4 | 110 | 2022-03-02T23:29:22
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- crowdsourced
language:
- fr
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
---
# ****Dataset Card for French book reviews****
# **I-Dataset Summary**
The majority of review datasets are in English. There are datasets in other languages, but not many. Through this work, I would like to enrich the datasets available in the French language (my mother tongue, along with Arabic).
The data was retrieved from two French websites: [Babelio](https://www.babelio.com/) and [Critiques Libres](http://www.critiqueslibres.com/)
Like Wikipedia, these two French sites are made possible by the contributions of volunteers who use the Internet to share their knowledge and reading experiences.
The French book reviews dataset is a large collection of reader reviews of French books and will be constantly updated over time.
# **II-Supported Tasks and Leaderboards**
- Multi-label text classification: the dataset can be used to train a model for text classification, which consists of classifying reviews by label value. Success on this task is typically measured by accuracy.
# **III-Languages**
The texts in the dataset are in French (fr).
# **IV-Dataset Structure**
#### Data Instances
A JSON-formatted example of a typical instance in the dataset:
```python
{
"book_title": "La belle histoire des maths",
"author": "Michel Rousselet",
"reader_review": "C’est un livre impressionnant, qui inspire le respect
par la qualité de sa reliure et son contenu. Je le feuillette et je découvre
à chaque tour de page un thème distinct magnifiquement illustré. Très beau livre !",
"rating": 4.0,
"label": 1
}
```
#### Data Fields
- **book_title**: The title of the book that received the reader's review,
- **author** : The author of the book that received the reader's review,
- **reader_review** : The text of the reader's review,
- **rating**: A five-star rating system is used to rate the book read,
- **label** : A post-processed field indicating if the review is positive (1), neutral(0), or negative(-1) based on the rating field. For more details, see the [Notebook of the Dataset creation](https://github.com/Abirate/Dataset_Creation_Scrapy_Project_French_book_reviews/blob/master/scrapyproject_a_to_z_dataset_book_reviews.ipynb)
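The `rating`-to-`label` post-processing can be sketched as a small mapping function. The threshold values below are an assumption for illustration only; the authoritative mapping is documented in the linked dataset-creation notebook.

```python
def rating_to_label(rating: float) -> int:
    """Map a five-star rating to a sentiment label:
    1 = positive, 0 = neutral, -1 = negative.
    Cut-off values are illustrative assumptions."""
    if rating >= 4.0:
        return 1
    if rating >= 3.0:
        return 0
    return -1

# The example instance above pairs a 4.0 rating with label 1.
label = rating_to_label(4.0)
```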
#### Data Splits
I kept the dataset as one block (train), so it can be shuffled and split by users later using methods of the hugging face dataset library like the (.train_test_split()) method.
# **V-Dataset Creation**
#### Curation Rationale
The majority of review datasets are in English. There are datasets in other languages, but not many. Through this work, I would like to enrich the datasets in the French language (French is my mother tongue, along with Arabic) and slightly contribute to advancing data science and AI, not only for English NLP tasks but for other languages around the world.
French is an international language and it is gaining ground. In addition, it is the language of love. The richness of the French language, so appreciated around the world, is largely related to the richness of its culture. The most telling example is French literature, which has many world-famous writers, such as [Gustave Flaubert](https://en.wikipedia.org/wiki/Gustave_Flaubert), [Albert Camus](https://iep.utm.edu/camus/), [Victor Hugo](https://en.wikipedia.org/wiki/Victor_Hugo), [Molière](https://en.wikipedia.org/wiki/Moli%C3%A8re), [Simone de Beauvoir](https://iep.utm.edu/beauvoir/), [Antoine de Saint-Exupéry](https://en.wikipedia.org/wiki/Antoine_de_Saint-Exup%C3%A9ry): the author of "Le Petit Prince" (The Little Prince), which is still among the most translated books in literary history. And one of the world-famous quotes from this book is: "Voici mon secret. Il est très simple: on ne voit bien qu'avec le coeur. L'essentiel est invisible pour les yeux." etc.
#### Source Data
The source of Data is: two French websites: [Babelio](https://www.babelio.com/) and [Critiques Libres](http://www.critiqueslibres.com/)
#### Initial Data Collection and Normalization
The data was collected using web scraping (with Scrapy Framework) and subjected to additional data processing. For more details, see this notebook, which details the dataset creation process. [Notebook of the Dataset creation](https://github.com/Abirate/Dataset_Creation_Scrapy_Project_French_book_reviews/blob/master/scrapyproject_a_to_z_dataset_book_reviews.ipynb)
**Note**: This dataset will be constantly updated to include the most recent reviews on French books by aggregating the old datasets with the updated ones in order to have a huge dataset over time.
#### Who are the source data producers?
I created the dataset using web scraping, building a spider and a crawler to scrape the two French websites [Babelio](https://www.babelio.com/) and [Critiques Libres](http://www.critiqueslibres.com/).
#### Annotations
Annotations are part of the initial data collection (see the script above).
# **VI-Additional Informations**
#### Dataset Curators
Abir ELTAIEF
#### Licensing Information
This work is licensed under [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
#### Contributions
Thanks to [@Abirate](https://huggingface.co/Abirate) for creating and adding this dataset.
huggingartists/ed-sheeran | 2022-10-25T09:28:28.000Z | [language:en, huggingartists, lyrics, region:us] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset, title={Lyrics dataset}, author={Aleksey Korshuk}, year={2021}} | 0 | 110 | 2022-03-02T23:29:22
---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/ed-sheeran"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 3.432643 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/b501daeff73d1b17610f47a5668f690a.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/ed-sheeran">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ed Sheeran</div>
<a href="https://genius.com/artists/ed-sheeran">
<div style="text-align: center; font-size: 14px;">@ed-sheeran</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/ed-sheeran).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/ed-sheeran")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|   923 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation', and 'test' sets with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/ed-sheeran")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author={Aleksey Korshuk},
  year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
ekinakyurek/ftrace | 2022-10-23T05:56:05.000Z | [task_ids:masked-language-modeling, multilinguality:monolingual, size_categories:1M<n<10M, source_datasets:TRex, source_datasets:Lama, language:en, license:cc-by-sa-4.0, license:cc-by-nc-4.0, arxiv:2205.11482, region:us] | ekinakyurek | Factual Tracing Dataset that contains queries and abstracts, and their corresponding ground truth. | \ | 3 | 110 | 2022-05-23T04:33:24
---
language:
- en
license:
- cc-by-sa-4.0
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: FTRACE
size_categories:
- 1M<n<10M
source_datasets:
- TRex
- Lama
task_categories:
- influence-attribution
- information-retrieval
- question-answering-retrieval
task_ids:
- influence-attribution
- masked-language-modeling
---
# Dataset Card for "FTRACE"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/ekinakyurek/ftrace
- **Repository:** https://github.com/ekinakyurek/influence
- **Paper:** https://arxiv.org/pdf/2205.11482.pdf
- **Point of Contact:** [Ekin Akyürek](mailto:akyurek@mit.edu)
- **Size of downloaded dataset files:** 113.7 MB
- **Size of the generated dataset:** 1006.6 MB
- **Total amount of disk used:** 1120.3 MB
### Dataset Summary
FTRACE is a zero-shot information retrieval benchmark devised for tracing a language model’s predictions back to training examples. In the accompanying paper, we evaluate commonly studied influence methods, including gradient-based (TracIn) and embedding-based approaches. The dataset contains two parts. First, the factual queries for which we trace knowledge are extracted from existing LAMA queries (Petroni et al., 2019). Second, Wikidata sentences are extracted from the TREx corpus (Elsahar et al., 2018). We annotate the extracted sentences with their stated facts, and these facts can be matched with the facts in the query set. In both parts, we provide (input, target) pairs as a masked language modeling task -- see the examples below. However, one can use the same data in other formats, for example auto-regressive completion, by processing the `inputs_pretokenized` and `targets_pretokenized` fields.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### Abstracts
- **Size of downloaded dataset files:** 112 MB
- **Size of the generated dataset:** 884 MB
- **Total amount of disk used:** 996 MB
An example of 'abstract' looks as follows.
```
{"inputs_pretokenized": "The name Austroasiatic comes from the Latin words for \"south\" and \"Asia\", hence \"<extra_id_0>\".",
"targets_pretokenized": "<extra_id_0> South Asia",
"page_uri": "Q33199",
"masked_uri": "Q771405",
"masked_type": "subject",
"example_uris": "Q33199-1-Q48-Q771405-1",
"facts": "P361,Q48,Q771405;P30,Q48,Q771405",
"id": 8}
```
#### Queries
- **Size of downloaded dataset files:** 1.7 MB
- **Size of the generated dataset:** 8.9 MB
- **Total amount of disk used:** 10.6 MB
An example of 'query' looks as follows.
```
{"inputs_pretokenized": "Paul Ehrlich used to work in <extra_id_0> .",
"targets_pretokenized": "<extra_id_0> Frankfurt",
"uuid": "5b063008-a8ba-4064-9f59-e70102bb8c50",
"obj_uri": "Q1794",
"sub_uri": "Q57089",
"predicate_id": "P937",
"obj_surface": "Frankfurt",
"sub_surface": "Paul Ehrlich"}
```
### Data Fields
The data fields are the same among all splits.
#### Abstracts
- `inputs_pretokenized`: a `string` feature.
- `targets_pretokenized`: a `string` feature.
- `masked_uri`: a `string` feature.
- `masked_type`: a `string` feature.
- `facts`: a `string` feature.
- `id`: a `string` feature.
- `example_uris`: a `string` feature.
- `page_uri`: a `string` feature.
#### Queries
- `inputs_pretokenized`: a `string` feature.
- `targets_pretokenized`: a `string` feature.
- `obj_surface`: a `string` feature.
- `sub_surface`: a `string` feature.
- `obj_uri`: a `string` feature.
- `sub_uri`: a `string` feature.
- `predicate_id`: a `string` feature.
- `uuid`: a `string` feature.
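The `facts` field of an abstract is what links the two parts: it is a semicolon-separated list of comma-separated triples that can be compared against a query's `predicate_id`, `obj_uri`, and `sub_uri`. A parsing sketch, where the helper names and the assumed ordering of the triple's entities are illustrative rather than part of the official release:

```python
def parse_facts(facts: str):
    """Split an abstract's `facts` field into (predicate, uri, uri) triples.
    The role ordering of the two entity URIs is an illustrative assumption."""
    return [tuple(fact.split(",")) for fact in facts.split(";")]

def fact_matches_query(fact, query):
    """True when a parsed fact names the query's predicate and both entities."""
    pred, uri_a, uri_b = fact
    return (pred == query["predicate_id"]
            and {uri_a, uri_b} == {query["obj_uri"], query["sub_uri"]})

# The `facts` string from the abstract example above:
triples = parse_facts("P361,Q48,Q771405;P30,Q48,Q771405")
```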
### Data Splits
| name | train |
|-----------|------:|
|Abstracts |1560453|
|Queries |31479 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
LAMA: https://github.com/facebookresearch/LAMA
TRex: https://hadyelsahar.github.io/t-rex/
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The parts of this dataset are available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) and [The Creative Commons Attribution-Noncommercial 4.0 International License](https://github.com/facebookresearch/LAMA/blob/master/LICENSE)
### Citation Information
The main paper should be cited as follow:
```
@misc{https://doi.org/10.48550/arxiv.2205.11482,
doi = {10.48550/ARXIV.2205.11482},
url = {https://arxiv.org/abs/2205.11482},
author = {Akyürek, Ekin and Bolukbasi, Tolga and Liu, Frederick and Xiong, Binbin and Tenney, Ian and Andreas, Jacob and Guu, Kelvin},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Tracing Knowledge in Language Models Back to the Training Data},
publisher = {arXiv},
year = {2022},
}
```
Please also cite Petroni et al., 2019 for the query set, and Elsahar et al., 2018 for the abstract set.
```
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
```
```
@inproceedings{elsahar2018t,
title={T-rex: A large scale alignment of natural language with knowledge base triples},
author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
### Contributions
ArthurBaia/squad_v1_pt_br | 2022-11-09T15:34:43.000Z | [region:us] | ArthurBaia | This dataset was translated by Deep Learning Brazil | @article{2016arXiv160605250R, author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy}, title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}", journal = {arXiv e-prints}, year = 2016, eid = {arXiv:1606.05250}, pages = {arXiv:1606.05250}, archivePrefix = {arXiv}, eprint = {1606.05250}} | 3 | 110 | 2022-07-14T19:55:08
This dataset was created by Deep Learning Brasil (www.deeplearningbrasil.com.br). I just published it on the Hugging Face Hub to share it with more people who are training Brazilian Portuguese models. The original link is drive.google.com/file/d/1Q0IaIlv2h2BC468MwUFmUST0EyN7gNkn/view.
kakaobrain/coyo-700m | 2022-08-30T19:07:52.000Z | [task_categories:text-to-image, task_categories:image-to-text, task_categories:zero-shot-classification, task_ids:image-captioning, annotations_creators:no-annotation, language_creators:other, multilinguality:monolingual, size_categories:100M<n<1B, source_datasets:original, langu...] | kakaobrain | null | null | 76 | 110 | 2022-08-25T15:54:43
---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: COYO-700M
size_categories:
- 100M<n<1B
source_datasets:
- original
tags:
- image-text pairs
task_categories:
- text-to-image
- image-to-text
- zero-shot-classification
task_ids:
- image-captioning
---
# Dataset Card for COYO-700M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd)
- **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [COYO email](coyo@kakaobrain.com)
### Dataset Summary
**COYO-700M** is a large-scale dataset that contains **747M image-text pairs** as well as many other **meta-attributes** to increase the usability to train various models. Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image in HTML documents. We expect COYO to be used to train popular large-scale foundation models
complementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later.
### Supported Tasks and Leaderboards
We empirically validated the quality of COYO dataset by re-implementing popular models such as [ALIGN](https://arxiv.org/abs/2102.05918), [unCLIP](https://arxiv.org/abs/2204.06125), and [ViT](https://arxiv.org/abs/2010.11929).
We trained these models on COYO-700M or its subsets from scratch, achieving competitive performance to the reported numbers or generated samples in the original papers.
Our pre-trained models and training codes will be released soon along with the technical paper.
### Languages
The texts in the COYO-700M dataset consist of English.
## Dataset Structure
### Data Instances
Each instance in COYO-700M represents single image-text pair information with meta-attributes:
```
{
'id': 841814333321,
'url': 'https://blog.dogsof.com/wp-content/uploads/2021/03/Image-from-iOS-5-e1614711641382.jpg',
'text': 'A Pomsky dog sitting and smiling in field of orange flowers',
'width': 1000,
'height': 988,
'image_phash': 'c9b6a7d8469c1959',
'text_length': 59,
'word_count': 11,
'num_tokens_bert': 13,
'num_tokens_gpt': 12,
'num_faces': 0,
'clip_similarity_vitb32': 0.4296875,
'clip_similarity_vitl14': 0.35205078125,
'nsfw_score_opennsfw2': 0.00031447410583496094,
'nsfw_score_gantman': 0.03298913687467575,
'watermark_score': 0.1014641746878624,
'aesthetic_score_laion_v2': 5.435476303100586
}
```
### Data Fields
| name | type | description |
|--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) |
| url | string | The image URL extracted from the `src` attribute of the `<img>` tag |
| text | string | The text extracted from the `alt` attribute of the `<img>` tag |
| width | integer | The width of the image |
| height | integer | The height of the image |
| image_phash | string | The [perceptual hash(pHash)](http://www.phash.org/) of the image |
| text_length | integer | The length of the text |
| word_count | integer | The number of words separated by spaces. |
| num_tokens_bert | integer | The number of tokens using [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer) |
| num_tokens_gpt | integer | The number of tokens using [GPT2TokenizerFast](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast) |
| num_faces | integer | The number of faces in the image detected by [SCRFD](https://insightface.ai/scrfd) |
| clip_similarity_vitb32 | float | The cosine similarity between text and image(ViT-B/32) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP) |
| clip_similarity_vitl14 | float | The cosine similarity between text and image(ViT-L/14) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP) |
| nsfw_score_opennsfw2 | float | The NSFW score of the image by [OpenNSFW2](https://github.com/bhky/opennsfw2) |
| nsfw_score_gantman | float | The NSFW score of the image by [GantMan/NSFW](https://github.com/GantMan/nsfw_model) |
| watermark_score | float | The watermark probability of the image by our internal model |
| aesthetic_score_laion_v2 | float | The aesthetic score of the image by [LAION-Aesthetics-Predictor-V2](https://github.com/christophschuhmann/improved-aesthetic-predictor) |
### Data Splits
Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s).
## Dataset Creation
### Curation Rationale
Similar to most vision-and-language datasets, our primary goal in the data creation process was to collect many pairs of alt-text and image sources from HTML documents crawled from the web. Therefore, we attempted to eliminate uninformative images or texts at minimal cost, and to improve our dataset's usability by adding various meta-attributes. Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model.
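As a sketch of this kind of meta-attribute subsampling, the filter below keeps rows with no detected faces and reasonably well-aligned text. The threshold values are illustrative assumptions, not official recommendations:

```python
# Illustrative subset selection over COYO-style rows (plain dicts keyed by
# the meta-attribute columns above); thresholds are assumptions, not official.
def keep(row, max_faces=0, min_clip=0.3, max_nsfw=0.1):
    return (
        row["num_faces"] <= max_faces
        and row["clip_similarity_vitb32"] >= min_clip
        and row["nsfw_score_opennsfw2"] <= max_nsfw
    )

rows = [
    {"num_faces": 0, "clip_similarity_vitb32": 0.35, "nsfw_score_opennsfw2": 0.01},
    {"num_faces": 2, "clip_similarity_vitb32": 0.40, "nsfw_score_opennsfw2": 0.02},
]
subset = [r for r in rows if keep(r)]
```

The same predicate can be passed to `datasets.Dataset.filter` when working with the actual dataset.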
### Source Data
#### Initial Data Collection and Normalization
We collected about 10 billion pairs of alt-text and image sources from HTML documents in [CommonCrawl](https://commoncrawl.org/) covering Oct. 2020 to Aug. 2021, and eliminated uninformative pairs through image- and/or text-level filtering at minimal cost.
**Image Level**
* Included all image formats that [Pillow library](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html) can decode. (JPEG, WEBP, PNG, BMP, ...)
* Removed images smaller than 5 KB in file size.
* Removed images with an aspect ratio greater than 3.0.
* Removed images with min(width, height) < 200.
* Removed images with a score of [OpenNSFW2](https://github.com/bhky/opennsfw2) or [GantMan/NSFW](https://github.com/GantMan/nsfw_model) higher than 0.5.
* Removed all duplicate images based on the image [pHash](http://www.phash.org/) value from external public datasets.
* ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M
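The image-level rules above can be restated in code roughly as follows. This is a paraphrase of the list, not the pipeline actually used, and it assumes the aspect ratio is defined as the longer side over the shorter side:

```python
def passes_image_filters(width, height, size_bytes, nsfw_score):
    """Rough restatement of the image-level filtering rules listed above."""
    if size_bytes < 5 * 1024:                        # images smaller than 5 KB
        return False
    if min(width, height) < 200:                     # min(width, height) < 200
        return False
    if max(width, height) / min(width, height) > 3.0:  # aspect ratio > 3.0
        return False
    if nsfw_score > 0.5:                             # NSFW score above 0.5
        return False
    return True
```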
**Text Level**
* Collected only English text using [cld3](https://github.com/google/cld3).
* Replaced consecutive whitespace characters with a single whitespace and removed the whitespace before and after the sentence.
(e.g. `"\n \n Load image into Gallery viewer, valentine&#39;s day roses\n \n" → "Load image into Gallery viewer, valentine&#39;s day roses"`)
* Removed texts with a length of 5 or less.
* Removed texts that do not have a noun form.
* Removed texts with fewer than 3 words or more than 256 words, and texts longer than 1,000 characters.
* Removed texts appearing more than 10 times.
(e.g. `“thumbnail for”, “image for”, “picture of”`)
* Removed texts containing NSFW words collected from [profanity_filter](https://github.com/rominf/profanity-filter/blob/master/profanity_filter/data/en_profane_words.txt), [better_profanity](https://github.com/snguyenthanh/better_profanity/blob/master/better_profanity/profanity_wordlist.txt), and [google_twunter_lol](https://gist.github.com/ryanlewis/a37739d710ccdb4b406d).
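A partial sketch of the text-level rules above, covering only the whitespace normalization and the length/word-count checks (the language-detection, noun-form, frequency, and NSFW-word checks are omitted):

```python
import re

def normalize_text(text):
    """Collapse consecutive whitespace and strip, as in the rule above."""
    return re.sub(r"\s+", " ", text).strip()

def passes_text_filters(text):
    """Length and word-count rules only; other text-level checks omitted."""
    text = normalize_text(text)
    if len(text) <= 5 or len(text) > 1000:
        return False
    n_words = len(text.split(" "))
    return 3 <= n_words <= 256
```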
**Image-Text Level**
* Removed duplicated samples based on (image_phash, text).
(Different text may exist for the same image URL.)
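This deduplication step can be sketched as a first-wins pass over `(image_phash, text)` keys:

```python
def dedup_by_phash_and_text(rows):
    """Keep the first row seen for each (image_phash, text) pair."""
    seen, out = set(), []
    for row in rows:
        key = (row["image_phash"], row["text"])
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out
```

Note that two rows with the same pHash but different text are both kept, matching the rule above.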
#### Who are the source language producers?
[Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.
### Annotations
#### Annotation process
The dataset was built in a fully automated process that did not require human annotation.
#### Who are the annotators?
No human annotators were involved.
### Personal and Sensitive Information
#### Disclaimer & Content Warning
The COYO dataset is recommended to be used for research purposes.
Kakao Brain tried to construct a "Safe" dataset when building the COYO dataset. (See [Data Filtering](#source-data) Section) Kakao Brain is constantly making efforts to create more "Safe" datasets.
However, despite these efforts, this large-scale dataset could not be hand-screened by humans due to its very large size (over 700M pairs).
Keep in mind that, because the dataset is unscreened, the collected images may include content that humans find strongly discomforting or disturbing.
The COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user.
Therefore, it is strongly recommended that this dataset be used only for research purposes. Kakao Brain does not recommend using this dataset as-is, without additional processing to remove inappropriate data, to create commercial products.
## Considerations for Using the Data
### Social Impact of Dataset
It will be described in a paper to be released soon.
### Discussion of Biases
It will be described in a paper to be released soon.
### Other Known Limitations
It will be described in a paper to be released soon.
## Additional Information
### Dataset Curators
The COYO dataset was released as open source in the hope that it will be helpful to many research institutes and startups for research purposes. We welcome inquiries from anyone who wishes to cooperate with us.
[coyo@kakaobrain.com](mailto:coyo@kakaobrain.com)
### Licensing Information
#### License
The COYO dataset of Kakao Brain is licensed under [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/).
The full license can be found in the [LICENSE.cc-by-4.0 file](./coyo-700m/blob/main/LICENSE.cc-by-4.0).
The dataset includes “Image URL” and “Text” collected from various sites by analyzing Common Crawl data, an open data web crawling project.
The collected data (images and text) is subject to the license to which each content belongs.
#### Obligation to use
While Open Source may be free to use, that does not mean it is free of obligation.
To determine whether your intended use of the COYO dataset is suitable for the CC-BY-4.0 license, please consider the license guide.
If you violate the license, you may be subject to legal action such as the prohibition of use or claim for damages depending on the use.
### Citation Information
If you apply this dataset to any project or research, please cite it as follows:
```
@misc{kakaobrain2022coyo-700m,
title = {COYO-700M: Image-Text Pair Dataset},
  author = {Minwoo Byeon and Beomhee Park and Haecheon Kim and Sungjun Lee and Woonhyuk Baek and Saehoon Kim},
year = {2022},
howpublished = {\url{https://github.com/kakaobrain/coyo-dataset}},
}
```
### Contributions
- Minwoo Byeon ([@mwbyeon](https://github.com/mwbyeon))
- Beomhee Park ([@beomheepark](https://github.com/beomheepark))
- Haecheon Kim ([@HaecheonKim](https://github.com/HaecheonKim))
- Sungjun Lee ([@justhungryman](https://github.com/justHungryMan))
- Woonhyuk Baek ([@wbaek](https://github.com/wbaek))
- Saehoon Kim ([@saehoonkim](https://github.com/saehoonkim))
- and Kakao Brain Large-Scale AI Studio
| 14,783 | [
[
-0.05230712890625,
-0.05413818359375,
0.006633758544921875,
0.0163116455078125,
-0.0299072265625,
-0.0192413330078125,
-0.01389312744140625,
-0.037750244140625,
0.0243377685546875,
0.022491455078125,
-0.04815673828125,
-0.06182861328125,
-0.038116455078125,
... |
keremberke/valorant-object-detection | 2023-01-27T13:45:00.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] | keremberke | null | @misc{ valorant-9ufcp_dataset,
title = { valorant Dataset },
type = { Open Source Dataset },
author = { Daniels Magonis },
howpublished = { \\url{ https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp } },
url = { https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-27 },
} | 3 | 110 | 2022-12-28T05:41:05 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="keremberke/valorant-object-detection" src="https://huggingface.co/datasets/keremberke/valorant-object-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['dropped spike', 'enemy', 'planted spike', 'teammate']
```
### Number of Images
```json
{'valid': 1983, 'train': 6927, 'test': 988}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/valorant-object-detection", name="full")
example = ds['train'][0]
```
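The annotations in this export follow the COCO convention, so bounding boxes come as `[x, y, width, height]`, while many visualization and training APIs expect corner coordinates instead. A small helper for the conversion (the exact field names in the loaded example may differ from this sketch):

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x, y, width, height] box to [x1, y1, x2, y2]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]
```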
### Roboflow Dataset Page
[https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp/dataset/3](https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ valorant-9ufcp_dataset,
title = { valorant Dataset },
type = { Open Source Dataset },
author = { Daniels Magonis },
howpublished = { \\url{ https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp } },
url = { https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-27 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on December 22, 2022 at 5:10 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 9898 images.
Objects are annotated in COCO format.
The following pre-processing was applied to each image:
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| 2,072 | [
[
-0.033905029296875,
-0.024993896484375,
0.0223846435546875,
-0.0021953582763671875,
-0.01861572265625,
-0.01161956787109375,
-0.00591278076171875,
-0.030975341796875,
0.0290069580078125,
0.0214385986328125,
-0.04241943359375,
-0.0640869140625,
-0.039886474609375... |
fcakyon/gun-object-detection | 2022-12-28T06:22:36.000Z | [
"task_categories:object-detection",
"roboflow",
"region:us"
] | fcakyon | null | @misc{ test-y7rj3_dataset,
title = { test Dataset },
type = { Open Source Dataset },
author = { ashish },
howpublished = { \\url{ https://universe.roboflow.com/ashish-cuamw/test-y7rj3 } },
url = { https://universe.roboflow.com/ashish-cuamw/test-y7rj3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { oct },
note = { visited on 2022-12-28 },
} | 2 | 110 | 2022-12-28T06:20:48 | ---
task_categories:
- object-detection
tags:
- roboflow
---
### Roboflow Dataset Page
https://universe.roboflow.com/ashish-cuamw/test-y7rj3
### Citation
```
@misc{ test-y7rj3_dataset,
title = { test Dataset },
type = { Open Source Dataset },
author = { ashish },
howpublished = { \\url{ https://universe.roboflow.com/ashish-cuamw/test-y7rj3 } },
url = { https://universe.roboflow.com/ashish-cuamw/test-y7rj3 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { oct },
note = { visited on 2022-12-28 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on December 26, 2022 at 10:13 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 4666 images.
Objects are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| 1,302 | [
[
-0.0325927734375,
-0.0296783447265625,
0.01959228515625,
0.004180908203125,
-0.03240966796875,
-0.0231781005859375,
0.0013990402221679688,
-0.042999267578125,
0.020477294921875,
0.043304443359375,
-0.046295166015625,
-0.04949951171875,
-0.032867431640625,
0.... |
treadon/dolly-15k | 2023-04-14T14:46:03.000Z | [
"license:cc-by-3.0",
"region:us"
] | treadon | null | null | 1 | 110 | 2023-04-14T14:41:15 | ---
license: cc-by-3.0
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 12208856
num_examples: 14863
- name: validation
num_bytes: 117314
num_examples: 151
download_size: 7866269
dataset_size: 12326170
---
# Dataset Card for "dolly-15k"
# Summary
This is the dataset supplied by Databricks for training Dolly V2. This set is split 99% training / 1% validation, should you want to set aside some records for evaluation purposes.
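One way to reproduce such a 99%/1% split is a seeded shuffle over row indices. This is illustrative only, not necessarily the procedure used to build this set:

```python
import random

def split_indices(n, val_fraction=0.01, seed=42):
    """Deterministically split range(n) into train/validation index lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_val = max(1, int(n * val_fraction))
    return idx[n_val:], idx[:n_val]

# 15,014 rows total in this card's train + validation splits.
train_idx, val_idx = split_indices(15014)
```

In practice, `datasets.Dataset.train_test_split(test_size=0.01, seed=...)` achieves the same thing directly on a loaded dataset.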
## Special thanks to ❤️ Databricks for creating and making this set available.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 828 | [
[
-0.00589752197265625,
-0.021453857421875,
-0.0207672119140625,
0.025787353515625,
-0.026031494140625,
-0.006511688232421875,
0.032562255859375,
-0.00653839111328125,
0.0281982421875,
0.043701171875,
-0.0723876953125,
-0.027313232421875,
-0.041015625,
-0.0085... |
IlyaGusev/oasst1_ru_main_branch | 2023-09-15T20:58:01.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:ru",
"license:apache-2.0",
"region:us"
] | IlyaGusev | null | null | 3 | 110 | 2023-04-15T18:16:15 | ---
language:
- ru
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- conversational
- text-generation
dataset_info:
features:
- name: messages
sequence:
- name: role
dtype: string
- name: content
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2040115
num_examples: 614
download_size: 2105736
dataset_size: 2040115
---
* Based on [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
* Only Russian message trees, only main branches.
* Script: [get_oasst_ru.py](https://github.com/IlyaGusev/rulm/blob/master/self_instruct/src/data_processing/get_oasst_ru.py)
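A simplified sketch of extracting a "main branch" from an OASST-style message tree, taking the first reply at each level (the actual script linked above may select replies differently, e.g. by rank):

```python
def main_branch(root):
    """Follow the first reply at each level of an OASST-style message tree."""
    messages, node = [], root
    while node is not None:
        messages.append({"role": node["role"], "content": node["text"]})
        replies = node.get("replies") or []
        node = replies[0] if replies else None
    return messages
```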
| 661 | [
[
0.018280029296875,
-0.053009033203125,
0.028289794921875,
0.01050567626953125,
-0.0281829833984375,
0.0150146484375,
0.01245880126953125,
-0.0203704833984375,
0.039154052734375,
0.0298004150390625,
-0.077880859375,
-0.0582275390625,
-0.04095458984375,
-0.012... |
jkhedri/psychology-dataset | 2023-05-04T10:12:40.000Z | [
"region:us"
] | jkhedri | null | null | 15 | 110 | 2023-05-04T10:08:53 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
JeremyArancio/lotr-book | 2023-06-02T12:30:41.000Z | [
"region:us"
] | JeremyArancio | null | null | 0 | 110 | 2023-05-18T09:53:28 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2432593
num_examples: 1
download_size: 0
dataset_size: 2432593
---
# Dataset Card for "lotr-book"
The Lord of the Rings books extracted into one dataset.
[Source](https://github.com/jeremyarancio/llm-rpg/blob/main/llm/prepare_dataset.py)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Notes
* Book [link](https://gosafir.com/mag/wp-content/uploads/2019/12/Tolkien-J.-The-lord-of-the-rings-HarperCollins-ebooks-2010.pdf)
* Footers and header were removed.
* Starts at page 45 and ends at page 1055 | 700 | [
[
-0.04144287109375,
-0.0227508544921875,
-0.0222930908203125,
0.00748443603515625,
-0.026763916015625,
-0.00043892860412597656,
0.01096343994140625,
-0.00803375244140625,
0.0174102783203125,
0.08123779296875,
-0.039794921875,
-0.04071044921875,
-0.022140502929687... |
jxie/coco_captions | 2023-06-25T07:37:53.000Z | [
"region:us"
] | jxie | null | null | 0 | 110 | 2023-06-25T04:37:33 | ---
dataset_info:
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: cocoid
dtype: int32
- name: caption
dtype: string
splits:
- name: train
num_bytes: 90684615607.036
num_examples: 566747
- name: validation
num_bytes: 4562095167.09
num_examples: 25010
- name: test
num_bytes: 4221845598.88
num_examples: 25010
download_size: 20920410197
dataset_size: 99468556373.006
---
# Dataset Card for "coco_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 626 | [
[
-0.041412353515625,
-0.01458740234375,
0.005794525146484375,
0.037841796875,
-0.0274658203125,
0.0258636474609375,
0.00397491455078125,
-0.01496124267578125,
0.057159423828125,
0.04510498046875,
-0.05364990234375,
-0.053924560546875,
-0.0438232421875,
-0.004... |
jamescalam/langchain-docs-23-06-27 | 2023-06-27T15:51:24.000Z | [
"region:us"
] | jamescalam | null | null | 5 | 110 | 2023-06-27T14:08:06 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
DRXD1000/Dolly-15k-German | 2023-10-31T07:06:14.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:de",
"license:cc-by-3.0",
"region:us"
] | DRXD1000 | null | null | 0 | 110 | 2023-09-03T14:54:18 | ---
language:
- de
license: cc-by-3.0
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: instruction_de
dtype: string
- name: context_de
dtype: string
- name: response_de
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 13900072
num_examples: 15011
download_size: 8816923
dataset_size: 13900072
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
- summarization
- text-generation
---
# Dolly-15k-German
This is a German version of the Dolly-15k Dataset from databricks (https://huggingface.co/datasets/databricks/databricks-dolly-15k).
The translation was done using the Azure Translator API.
Disclaimer: The quality of the translation has not been reviewed, and no post-processing was applied,
so everything said in the original dataset card should also apply here.
The license is therefore the same as that of the original dataset.
Have Fun :) | 1,001 | [
[
-0.00739288330078125,
-0.055389404296875,
0.0010223388671875,
0.039764404296875,
-0.036834716796875,
-0.0058746337890625,
0.0302276611328125,
-0.0186614990234375,
0.03173828125,
0.04180908203125,
-0.07354736328125,
-0.055084228515625,
-0.036529541015625,
0.0... |
result-kand2-sdxl-wuerst-karlo/e74ecf3f | 2023-10-12T15:55:11.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 110 | 2023-10-12T15:55:10 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 158
num_examples: 10
download_size: 1309
dataset_size: 158
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e74ecf3f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[
-0.046112060546875,
-0.00627899169921875,
0.024078369140625,
0.01352691650390625,
-0.0242462158203125,
-0.01151275634765625,
0.0338134765625,
-0.0283050537109375,
0.058441162109375,
0.03369140625,
-0.04998779296875,
-0.051849365234375,
-0.04168701171875,
0.0... |
pkr7098/bert-base-uncased-bookcorpus-wiki-2022030-en-vocab_size-32000 | 2023-10-18T19:19:26.000Z | [
"region:us"
] | pkr7098 | null | null | 1 | 110 | 2023-10-18T18:46:48 | ---
dataset_info:
config_name: truncate-512
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 23600541600
num_examples: 6555706
- name: validation
num_bytes: 317304000
num_examples: 88140
download_size: 310440269
dataset_size: 23917845600
configs:
- config_name: truncate-512
data_files:
- split: train
path: truncate-512/train-*
- split: validation
path: truncate-512/validation-*
---
# Dataset Card for "bert-base-uncased-bookcorpus-wiki-2022030-en-vocab_size-32000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 815 | [
[
-0.0477294921875,
-0.0089569091796875,
-0.0014066696166992188,
0.024169921875,
-0.03485107421875,
-0.0019254684448242188,
-0.015228271484375,
-0.01348876953125,
0.05426025390625,
0.043243408203125,
-0.057952880859375,
-0.0439453125,
-0.028564453125,
-0.01696... |
amttl | 2023-01-25T14:26:23.000Z | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:mit",
"region:us"
] | null | Chinese word segmentation (CWS) trained from open source corpus faces dramatic performance drop
when dealing with domain text, especially for a domain with lots of special terms and diverse
writing styles, such as the biomedical domain. However, building domain-specific CWS requires
extremely high annotation cost. In this paper, we propose an approach by exploiting domain-invariant
knowledge from high resource to low resource domains. Extensive experiments show that our model
achieves consistently higher accuracy than the single-task CWS and other transfer learning
baselines, especially when there is a large disparity between source and target domains.
This dataset is the accompanied medical Chinese word segmentation (CWS) dataset.
The tags are in BIES scheme.
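In the BIES scheme, each character of a multi-character word is tagged B (begin), I (inside), or E (end), and single-character words are tagged S. A hypothetical helper showing how segmented words map to BIES character tags (not taken from the paper's code):

```python
def words_to_bies(words):
    """Tag each character of segmented words with the BIES scheme."""
    tags = []
    for word in words:
        if len(word) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["I"] * (len(word) - 2) + ["E"])
    return tags
```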
For more details see https://www.aclweb.org/anthology/C18-1307/ | @inproceedings{xing2018adaptive,
title={Adaptive multi-task transfer learning for Chinese word segmentation in medical text},
author={Xing, Junjie and Zhu, Kenny and Zhang, Shaodian},
booktitle={Proceedings of the 27th International Conference on Computational Linguistics},
pages={3619--3630},
year={2018}
} | 1 | 109 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- zh
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- parsing
pretty_name: AMTTL
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': B
'1': I
'2': E
'3': S
config_name: amttl
splits:
- name: train
num_bytes: 1132212
num_examples: 3063
- name: validation
num_bytes: 324374
num_examples: 822
- name: test
num_bytes: 328525
num_examples: 908
download_size: 685534
dataset_size: 1785111
---
# Dataset Card for AMTTL
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/adapt-sjtu/AMTTL/tree/master/medical_data)
- **Repository:** [Github](https://github.com/adapt-sjtu/AMTTL/tree/master/medical_data)
- **Paper:** [Aclweb](http://aclweb.org/anthology/C18-1307)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@inproceedings{xing2018adaptive,
title={Adaptive multi-task transfer learning for Chinese word segmentation in medical text},
author={Xing, Junjie and Zhu, Kenny and Zhang, Shaodian},
booktitle={Proceedings of the 27th International Conference on Computational Linguistics},
pages={3619--3630},
year={2018}
}
```
### Contributions
Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset. | 3,671 | [
[
-0.0202789306640625,
-0.053497314453125,
0.00667572021484375,
0.0073089599609375,
-0.032073974609375,
0.0208740234375,
-0.0289764404296875,
-0.0322265625,
0.043182373046875,
0.0311737060546875,
-0.05096435546875,
-0.0703125,
-0.04449462890625,
0.007896423339... |
hate_speech_filipino | 2023-01-25T14:31:38.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-twitter-data-philippine-election",
"language:tl",
"license:un... | null | Contains 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections. | @article{Cabasag-2019-hate-speech,
title={Hate speech in Philippine election-related tweets: Automatic detection and classification using natural language processing.},
author={Neil Vicente Cabasag, Vicente Raphael Chan, Sean Christian Lim, Mark Edward Gonzales, and Charibeth Cheng},
journal={Philippine Computing Journal},
volume={XIV},
number={1},
month={August},
year={2019}
} | 4 | 109 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- tl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-twitter-data-philippine-election
task_categories:
- text-classification
task_ids:
- sentiment-analysis
pretty_name: Hate Speech in Filipino
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 995919
num_examples: 10000
- name: test
num_bytes: 995919
num_examples: 10000
- name: validation
num_bytes: 424365
num_examples: 4232
download_size: 822927
dataset_size: 2416203
---
# Dataset Card for Hate Speech in Filipino
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Hate Speech Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [Hate Speech Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [PCJ paper](https://pcj.csp.org.ph/index.php/pcj/issue/download/29/PCJ%20V14%20N1%20pp1-14%202019)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
Contains 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is primarily in Filipino, with the addition of some English words commonly used in the Filipino vernacular.
## Dataset Structure
### Data Instances
Sample data:
```
{
"text": "Taas ni Mar Roxas ah. KULTONG DILAW NGA NAMAN",
"label": 1
}
```
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
This study seeks to contribute to the filling of this gap through the development of a model that can automate hate speech detection and classification in Philippine election-related tweets. The role of the microblogging site Twitter as a platform for the expression of support and hate during the 2016 Philippine presidential election has been supported in news reports and systematic studies. Thus, the particular question addressed in this paper is: Can existing techniques in language processing and machine learning be applied to detect hate speech in the Philippine election context?
### Source Data
#### Initial Data Collection and Normalization
The dataset used in this study was a subset of the corpus of 1,696,613 tweets crawled by Andrade et al. and posted from November 2015 to May 2016, during the campaign period for the Philippine presidential election. They were culled based on the presence of candidate names (e.g., Binay, Duterte, Poe, Roxas, and Santiago) and election-related hashtags (e.g., #Halalan2016, #Eleksyon2016, and #PiliPinas2016).
Data preprocessing was performed to prepare the tweets for feature extraction and classification. It consisted of the following steps: data de-identification, uniform resource locator (URL) removal, special character processing, normalization, hashtag processing, and tokenization.
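The listed preprocessing steps can be sketched roughly as follows. The exact regexes and placeholder tokens are illustrative assumptions, not the authors' rules:

```python
import re

def preprocess_tweet(text):
    """Rough sketch of the listed preprocessing steps for election tweets."""
    text = re.sub(r"@\w+", "@USER", text)             # de-identification
    text = re.sub(r"https?://\S+", "", text)          # URL removal
    text = re.sub(r"#(\w+)", r"\1", text)             # hashtag processing
    text = re.sub(r"[^\w\s@]", " ", text)             # special characters
    text = re.sub(r"\s+", " ", text).strip().lower()  # normalization
    return text.split()                               # tokenization
```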
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Licensing Information
[More Information Needed]
### Citation Information
@article{Cabasag-2019-hate-speech,
title={Hate speech in Philippine election-related tweets: Automatic detection and classification using natural language processing.},
author={Neil Vicente Cabasag, Vicente Raphael Chan, Sean Christian Lim, Mark Edward Gonzales, and Charibeth Cheng},
journal={Philippine Computing Journal},
volume={XIV},
number={1},
month={August},
year={2019}
}
### Contributions
Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset. | 5,557 | [
Doohae/klue-mrc-bm25 | 2022-02-09T08:10:52.000Z | [
"region:us"
] | Doohae | null | null | 0 | 109 | 2022-03-02T23:29:22 | Entry not found | 15 | [
andrepreira/outros2021 | 2022-02-17T21:39:43.000Z | [
"region:us"
] | andrepreira | null | null | 0 | 109 | 2022-03-02T23:29:22 | Entry not found | 15 | [
huggingartists/drake | 2022-10-25T09:28:02.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author = {Aleksey Korshuk},
year={2021}
} | 3 | 109 | 2022-03-02T23:29:22 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/drake"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 6.063474 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/631b206379b60df5e1da90e84d35fdbe.1000x1000x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/drake">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Drake</div>
<a href="https://genius.com/artists/drake">
<div style="text-align: center; font-size: 14px;">@drake</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/drake).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/drake")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|1298| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/drake")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
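The same proportional `np.split` logic can be checked on a toy list (a self-contained sketch; the percentages match the snippet above):

```python
import numpy as np

# Stand-in for datasets['train']['text'].
texts = [f"lyric {i}" for i in range(100)]

train_pct, val_pct = 0.9, 0.07  # the remaining 0.03 goes to test
train, validation, test = np.split(
    np.array(texts),
    [int(len(texts) * train_pct), int(len(texts) * (train_pct + val_pct))],
)
print(len(train), len(validation), len(test))  # 90 7 3
```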
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| 7,141 | [
yangwang825/reuters-21578 | 2023-05-19T02:04:58.000Z | [
"task_categories:text-classification",
"language:en",
"region:us"
] | yangwang825 | null | null | 0 | 109 | 2023-05-17T14:25:37 | ---
task_categories:
- text-classification
language:
- en
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': acq
'1': crude
'2': earn
'3': grain
'4': interest
'5': money-fx
'6': ship
'7': trade
---
`yangwang825/reuters-21578` is an 8-class subset of the Reuters 21578 news dataset.
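The integer `label` ids follow the class order declared in the `ClassLabel` metadata above; a minimal lookup table for decoding predictions (plain Python, no extra dependencies):

```python
# The 8 classes, in the order declared in the dataset's ClassLabel feature.
REUTERS8_LABELS = ["acq", "crude", "earn", "grain", "interest", "money-fx", "ship", "trade"]

id2label = dict(enumerate(REUTERS8_LABELS))
label2id = {name: i for i, name in id2label.items()}
# e.g. id2label[2] == "earn", label2id["trade"] == 7
```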
| 465 | [
Tommert25/extradata0908 | 2023-09-26T15:12:36.000Z | [
"region:us"
] | Tommert25 | null | null | 0 | 109 | 2023-08-09T13:52:42 | Entry not found | 15 | [
yys/OpenOrca-Chinese | 2023-09-08T08:05:47.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | yys | null | null | 28 | 109 | 2023-09-07T06:01:51 | ---
license: mit
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
language:
- zh
pretty_name: OpenOrca-Chinese
size_categories:
- 10M<n<100M
---
<p><h1>🐋 The OpenOrca-Chinese Dataset! 🐋</h1></p>
Many thanks for the release of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, which has given NLP researchers and developers a valuable resource!
This is a Chinese translation of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, produced with Google Translate, in the hope of making a small contribution to Chinese LLM research.
<br/>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
| 1,952 | [
alexMTL/guanaco_q_a_dataset_1k | 2023-09-28T15:49:07.000Z | [
"region:us"
] | alexMTL | null | null | 0 | 109 | 2023-09-28T15:48:09 | Entry not found | 15 | [
dengue_filipino | 2023-01-25T14:29:21.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:tl",
"lice... | null | Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets. | @INPROCEEDINGS{8459963,
author={E. D. {Livelo} and C. {Cheng}},
booktitle={2018 IEEE International Conference on Agents (ICA)},
title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},
year={2018},
volume={},
number={},
pages={2-7},
doi={10.1109/AGENTS.2018.8459963}
} | 1 | 108 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
language:
- tl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: dengue
pretty_name: Dengue Dataset in Filipino
dataset_info:
features:
- name: text
dtype: string
- name: absent
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: dengue
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: health
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: mosquito
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: sick
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 428553
num_examples: 4015
- name: test
num_bytes: 428553
num_examples: 4015
- name: validation
num_bytes: 54384
num_examples: 500
download_size: 156014
dataset_size: 911490
---
# Dataset Card for Dengue Dataset in Filipino
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Dengue Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [Dengue Dataset in Filipino repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [IEEE paper](https://ieeexplore.ieee.org/document/8459963)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.
## Dataset Structure
### Data Instances
Sample data:
```
{
"text": "Tapos ang dami pang lamok.",
"absent": "0",
"dengue": "0",
"health": "0",
"mosquito": "1",
"sick": "0"
}
```
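Since each tweet can belong to several of the five classes at once, the string-valued label fields are naturally read as one binary vector; a small sketch using the sample above:

```python
LABELS = ["absent", "dengue", "health", "mosquito", "sick"]

def to_label_vector(example: dict) -> list:
    # Each label field holds "0" or "1"; collect them in a fixed order.
    return [int(example[name]) for name in LABELS]

sample = {
    "text": "Tapos ang dami pang lamok.",
    "absent": "0", "dengue": "0", "health": "0", "mosquito": "1", "sick": "0",
}
vector = to_label_vector(sample)  # [0, 0, 0, 1, 0]
```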
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Licensing Information
[More Information Needed]
### Citation Information
@INPROCEEDINGS{8459963,
author={E. D. {Livelo} and C. {Cheng}},
booktitle={2018 IEEE International Conference on Agents (ICA)},
title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},
year={2018},
volume={},
number={},
pages={2-7},
doi={10.1109/AGENTS.2018.8459963}
}
### Contributions
Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset. | 4,706 | [
Tevatron/scifact | 2021-09-13T23:32:59.000Z | [
"region:us"
] | Tevatron | null | @inproceedings{Wadden2020FactOF,
title={Fact or Fiction: Verifying Scientific Claims},
author={David Wadden and Shanchuan Lin and Kyle Lo and Lucy Lu Wang and Madeleine van Zuylen and Arman Cohan and Hannaneh Hajishirzi},
booktitle={EMNLP},
year={2020},
} | 0 | 108 | 2022-03-02T23:29:22 | Entry not found | 15 | [
gigant/m-ailabs_speech_dataset_fr | 2022-10-24T17:38:45.000Z | [
"task_categories:automatic-speech-recognition",
"language:fr",
"license:cc",
"region:us"
] | gigant | \
The M-AILABS Speech Dataset is the first large dataset that we are providing free-of-charge, freely usable as training data for speech recognition and speech synthesis.
Most of the data is based on LibriVox and Project Gutenberg. The training data consist of nearly a thousand hours of audio and the text files in a prepared format.
A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds and have a total length approximately as shown in the list (and in the respective info.txt files) below.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain – except for Ukrainian.
Ukrainian audio was kindly provided either by Nash Format or Gwara Media for machine learning purposes only (please check the data info.txt files for details). | \ | 0 | 108 | 2022-03-02T23:29:22 | ---
language:
- fr
license: cc
size_categories:
fr:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: M-AILABS Speech Dataset (French)
---
## Dataset Description
- **Homepage:** https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/
### Dataset Summary
The M-AILABS Speech Dataset is the first large dataset that we are providing free-of-charge, freely usable as training data for speech recognition and speech synthesis.
Most of the data is based on LibriVox and Project Gutenberg. The training data consist of nearly a thousand hours of audio and the text files in a prepared format.
A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds and have a total length approximately as shown in the list (and in the respective info.txt files) below.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain – except for Ukrainian.
Ukrainian audio was kindly provided either by Nash Format or Gwara Media for machine learning purposes only (please check the data info.txt files for details).
### Languages
French
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called audio and its sentence.
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- sentence: The sentence the user was prompted to speak
### Data Splits
The speech material has not been subdivided into portions, everything is in the "train" split.
The train split consists of 82825 audio clips and the related sentences.
### Contributions
[@gigant](https://huggingface.co/gigant) added this dataset. | 2,215 | [
DFKI-SLT/kbp37 | 2023-04-27T13:04:14.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:other",
"relation extraction",
"arxiv:1508... | DFKI-SLT | KBP37 is a revision of MIML-RE annotation dataset, provided by Gabor Angeli et al. (2014). They use both the 2010 and
2013 KBP official document collections, as well as a July 2013 dump of Wikipedia as the text corpus for annotation.
In total, 33,811 sentences were annotated. Zhang and Wang made several refinements:
1. They add direction to the relation names, e.g., '`per:employee_of`' is split into '`per:employee_of(e1,e2)`'
and '`per:employee_of(e2,e1)`'. They also replace '`org:parents`' with '`org:subsidiaries`' and replace
'`org:member_of`' with '`org:members`' (by their reverse directions).
2. They discard low-frequency relations, such that both directions of each retained relation occur more than 100 times in the
dataset.
KBP37 contains 18 directional relations and an additional '`no_relation`' relation, resulting in 37 relation classes. | @article{DBLP:journals/corr/ZhangW15a,
author = {Dongxu Zhang and
Dong Wang},
title = {Relation Classification via Recurrent Neural Network},
journal = {CoRR},
volume = {abs/1508.01006},
year = {2015},
url = {http://arxiv.org/abs/1508.01006},
eprinttype = {arXiv},
eprint = {1508.01006},
timestamp = {Fri, 04 Nov 2022 18:37:50 +0100},
biburl = {https://dblp.org/rec/journals/corr/ZhangW15a.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 108 | 2023-01-06T12:26:09 | ---
annotations_creators:
- other
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: KBP37 is an English Relation Classification dataset
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
dataset_info:
- config_name: kbp37
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names(e1,e2)
'2': org:alternate_names(e2,e1)
'3': org:city_of_headquarters(e1,e2)
'4': org:city_of_headquarters(e2,e1)
'5': org:country_of_headquarters(e1,e2)
'6': org:country_of_headquarters(e2,e1)
'7': org:founded(e1,e2)
'8': org:founded(e2,e1)
'9': org:founded_by(e1,e2)
'10': org:founded_by(e2,e1)
'11': org:members(e1,e2)
'12': org:members(e2,e1)
'13': org:stateorprovince_of_headquarters(e1,e2)
'14': org:stateorprovince_of_headquarters(e2,e1)
'15': org:subsidiaries(e1,e2)
'16': org:subsidiaries(e2,e1)
'17': org:top_members/employees(e1,e2)
'18': org:top_members/employees(e2,e1)
'19': per:alternate_names(e1,e2)
'20': per:alternate_names(e2,e1)
'21': per:cities_of_residence(e1,e2)
'22': per:cities_of_residence(e2,e1)
'23': per:countries_of_residence(e1,e2)
'24': per:countries_of_residence(e2,e1)
'25': per:country_of_birth(e1,e2)
'26': per:country_of_birth(e2,e1)
'27': per:employee_of(e1,e2)
'28': per:employee_of(e2,e1)
'29': per:origin(e1,e2)
'30': per:origin(e2,e1)
'31': per:spouse(e1,e2)
'32': per:spouse(e2,e1)
'33': per:stateorprovinces_of_residence(e1,e2)
'34': per:stateorprovinces_of_residence(e2,e1)
'35': per:title(e1,e2)
'36': per:title(e2,e1)
splits:
- name: train
num_bytes: 3570626
num_examples: 15917
- name: validation
num_bytes: 388935
num_examples: 1724
- name: test
num_bytes: 762806
num_examples: 3405
download_size: 5106673
dataset_size: 4722367
- config_name: kbp37_formatted
features:
- name: id
dtype: string
- name: token
sequence: string
- name: e1_start
dtype: int32
- name: e1_end
dtype: int32
- name: e2_start
dtype: int32
- name: e2_end
dtype: int32
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names(e1,e2)
'2': org:alternate_names(e2,e1)
'3': org:city_of_headquarters(e1,e2)
'4': org:city_of_headquarters(e2,e1)
'5': org:country_of_headquarters(e1,e2)
'6': org:country_of_headquarters(e2,e1)
'7': org:founded(e1,e2)
'8': org:founded(e2,e1)
'9': org:founded_by(e1,e2)
'10': org:founded_by(e2,e1)
'11': org:members(e1,e2)
'12': org:members(e2,e1)
'13': org:stateorprovince_of_headquarters(e1,e2)
'14': org:stateorprovince_of_headquarters(e2,e1)
'15': org:subsidiaries(e1,e2)
'16': org:subsidiaries(e2,e1)
'17': org:top_members/employees(e1,e2)
'18': org:top_members/employees(e2,e1)
'19': per:alternate_names(e1,e2)
'20': per:alternate_names(e2,e1)
'21': per:cities_of_residence(e1,e2)
'22': per:cities_of_residence(e2,e1)
'23': per:countries_of_residence(e1,e2)
'24': per:countries_of_residence(e2,e1)
'25': per:country_of_birth(e1,e2)
'26': per:country_of_birth(e2,e1)
'27': per:employee_of(e1,e2)
'28': per:employee_of(e2,e1)
'29': per:origin(e1,e2)
'30': per:origin(e2,e1)
'31': per:spouse(e1,e2)
'32': per:spouse(e2,e1)
'33': per:stateorprovinces_of_residence(e1,e2)
'34': per:stateorprovinces_of_residence(e2,e1)
'35': per:title(e1,e2)
'36': per:title(e2,e1)
splits:
- name: train
num_bytes: 4943394
num_examples: 15807
- name: validation
num_bytes: 539197
num_examples: 1714
- name: test
num_bytes: 1055918
num_examples: 3379
download_size: 5106673
dataset_size: 6581345
---
# Dataset Card for "kbp37"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [kbp37](https://github.com/zhangdongxu/kbp37)
- **Paper:** [Relation Classification via Recurrent Neural Network](https://arxiv.org/abs/1508.01006)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 6.58 MB
### Dataset Summary
KBP37 is a revision of MIML-RE annotation dataset, provided by Gabor Angeli et al. (2014). They use both the 2010 and
2013 KBP official document collections, as well as a July 2013 dump of Wikipedia as the text corpus for annotation.
In total, 33,811 sentences were annotated. Zhang and Wang made several refinements:
1. They add direction to the relation names, e.g., '`per:employee_of`' is split into '`per:employee_of(e1,e2)`'
and '`per:employee_of(e2,e1)`'. They also replace '`org:parents`' with '`org:subsidiaries`' and replace
'`org:member_of`' with '`org:members`' (by their reverse directions).
2. They discard low-frequency relations, such that both directions of each retained relation occur more than 100 times in the
dataset.
KBP37 contains 18 directional relations and an additional '`no_relation`' relation, resulting in 37 relation classes.
Note:
- There is a formatted version that you can load with `datasets.load_dataset('kbp37', name='kbp37_formatted')`. This version is tokenized with `str.split()` and
provides entity mentions as token offsets instead of enclosing them in XML tags. However, it discards a few examples that are invalid in the original dataset and lead
to entity offset errors, e.g., example train/1276.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in KBP37 is in English (BCP-47 en)
## Dataset Structure
### Data Instances
#### kbp37
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 4.7 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"sentence": "<e1> Thom Yorke </e1> of <e2> Radiohead </e2> has included the + for many of his signature distortion sounds using a variety of guitars to achieve various tonal options .",
"relation": 27
}
```
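The two arguments are enclosed in `<e1>`/`<e2>` markers, so they can be recovered with a short regex sketch (illustrative only, not part of the dataset tooling):

```python
import re

def extract_entities(sentence: str) -> tuple:
    # Pull the text between the <e1>…</e1> and <e2>…</e2> markers,
    # trimming the padding spaces around each mention.
    e1 = re.search(r"<e1>\s*(.*?)\s*</e1>", sentence).group(1)
    e2 = re.search(r"<e2>\s*(.*?)\s*</e2>", sentence).group(1)
    return e1, e2

sentence = ("<e1> Thom Yorke </e1> of <e2> Radiohead </e2> has included the + "
            "for many of his signature distortion sounds ...")
pair = extract_entities(sentence)  # ('Thom Yorke', 'Radiohead')
```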
#### kbp37_formatted
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 6.58 MB
An example of 'train' looks as follows:
```json
{
"id": "1",
"token": ["Leland", "High", "School", "is", "a", "public", "high", "school", "located", "in", "the", "Almaden", "Valley", "in", "San", "Jose", "California", "USA", "in", "the", "San", "Jose", "Unified", "School", "District", "."],
"e1_start": 0,
"e1_end": 3,
"e2_start": 14,
"e2_end": 16,
"relation": 3
}
```
### Data Fields
#### kbp37
- `id`: the instance id of this sentence, a `string` feature.
- `sentence`: the sentence, a `string` features.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"no_relation": 0, "org:alternate_names(e1,e2)": 1, "org:alternate_names(e2,e1)": 2, "org:city_of_headquarters(e1,e2)": 3, "org:city_of_headquarters(e2,e1)": 4, "org:country_of_headquarters(e1,e2)": 5, "org:country_of_headquarters(e2,e1)": 6, "org:founded(e1,e2)": 7, "org:founded(e2,e1)": 8, "org:founded_by(e1,e2)": 9, "org:founded_by(e2,e1)": 10, "org:members(e1,e2)": 11, "org:members(e2,e1)": 12, "org:stateorprovince_of_headquarters(e1,e2)": 13, "org:stateorprovince_of_headquarters(e2,e1)": 14, "org:subsidiaries(e1,e2)": 15, "org:subsidiaries(e2,e1)": 16, "org:top_members/employees(e1,e2)": 17, "org:top_members/employees(e2,e1)": 18, "per:alternate_names(e1,e2)": 19, "per:alternate_names(e2,e1)": 20, "per:cities_of_residence(e1,e2)": 21, "per:cities_of_residence(e2,e1)": 22, "per:countries_of_residence(e1,e2)": 23, "per:countries_of_residence(e2,e1)": 24, "per:country_of_birth(e1,e2)": 25, "per:country_of_birth(e2,e1)": 26, "per:employee_of(e1,e2)": 27, "per:employee_of(e2,e1)": 28, "per:origin(e1,e2)": 29, "per:origin(e2,e1)": 30, "per:spouse(e1,e2)": 31, "per:spouse(e2,e1)": 32, "per:stateorprovinces_of_residence(e1,e2)": 33, "per:stateorprovinces_of_residence(e2,e1)": 34, "per:title(e1,e2)": 35, "per:title(e2,e1)": 36}
```
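For convenience, the integer labels can be mapped back to their string names by inverting this dictionary — a minimal sketch (the `label2id` dict below is an abbreviated copy of the mapping above):

```python
# Invert the relation-label mapping so integer ids resolve to label strings.
# label2id is an abbreviated copy of the full mapping shown above.
label2id = {
    "no_relation": 0,
    "org:city_of_headquarters(e1,e2)": 3,
    "per:employee_of(e1,e2)": 27,
}
id2label = {idx: label for label, idx in label2id.items()}

print(id2label[27])  # the relation of the "Thom Yorke ... Radiohead" example above
```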
#### kbp37_formatted
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, using `str.split()`, a `list` of `string` features.
- `e1_start`: the 0-based index of the start token of the first argument, an `int` feature.
- `e1_end`: the 0-based index of the end token of the first argument, exclusive, an `int` feature.
- `e2_start`: the 0-based index of the start token of the second argument, an `int` feature.
- `e2_end`: the 0-based index of the end token of the second argument, exclusive, an `int` feature.
- `relation`: the relation label of this instance, an `int` classification label (same as `kbp37`).
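The offset fields use half-open `[start, end)` indexing, so entity mentions can be sliced directly out of `token` — a minimal sketch using the `train` instance shown above:

```python
# Slice entity mention tokens out of a kbp37_formatted example.
# The example dict is the 'train' instance shown above.
example = {
    "token": ["Leland", "High", "School", "is", "a", "public", "high", "school",
              "located", "in", "the", "Almaden", "Valley", "in", "San", "Jose",
              "California", "USA", "in", "the", "San", "Jose", "Unified",
              "School", "District", "."],
    "e1_start": 0, "e1_end": 3,
    "e2_start": 14, "e2_end": 16,
}

def entity_span(ex, which):
    """Return the surface tokens of entity `which` ('e1' or 'e2')."""
    return ex["token"][ex[f"{which}_start"]:ex[f"{which}_end"]]

print(entity_span(example, "e1"))  # ['Leland', 'High', 'School']
print(entity_span(example, "e2"))  # ['San', 'Jose']
```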
### Data Splits
| | Train | Dev | Test |
|-------|-------|------|------|
| kbp37 | 15917 | 1724 | 3405 |
| kbp37_formatted | 15807 | 1714 | 3379 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/ZhangW15a,
author = {Dongxu Zhang and
Dong Wang},
title = {Relation Classification via Recurrent Neural Network},
journal = {CoRR},
volume = {abs/1508.01006},
year = {2015},
url = {http://arxiv.org/abs/1508.01006},
eprinttype = {arXiv},
eprint = {1508.01006},
timestamp = {Fri, 04 Nov 2022 18:37:50 +0100},
biburl = {https://dblp.org/rec/journals/corr/ZhangW15a.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | 13,544 | [
[embedding vector truncated] |
shibing624/alpaca-zh | 2023-05-10T06:09:06.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-4.0",
"gpt",
"alpaca",
"fine-tune",
"instruct-tune",
"instruction",
"arxiv:2304.03277",
"region:us"
] | shibing624 | null | null | 46 | 108 | 2023-03-25T11:37:25 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 32150579
num_examples: 48818
download_size: 35100559
dataset_size: 32150579
license: cc-by-4.0
language:
- zh
pretty_name: Instruction Tuning with GPT-4
size_categories:
- 10K<n<100K
task_categories:
- text-generation
tags:
- gpt
- alpaca
- fine-tune
- instruct-tune
- instruction
---
# Dataset Description
- **Project Page:** https://instruction-tuning-with-gpt-4.github.io
- **Repo:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
# Dataset Card for "alpaca-zh"
This dataset contains about 50,000 self-instruct examples generated with GPT-4 following the Alpaca approach.
Dataset from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
It is the Chinese dataset from https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM/blob/main/data/alpaca_gpt4_data_zh.json
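Each record carries `instruction`, `input`, and `output` fields (see `dataset_info` above). A minimal sketch of assembling one record into an Alpaca-style training prompt — the sample record and the prompt template are illustrative choices, not part of the dataset:

```python
# Format one alpaca-zh record into an Alpaca-style prompt string.
# The record below is illustrative; real records come from
# load_dataset("shibing624/alpaca-zh")["train"].
record = {
    "instruction": "把下面的句子翻译成英文。",
    "input": "今天天气很好。",
    "output": "The weather is nice today.",
}

def to_prompt(rec):
    """Join instruction/input into a prompt; append output as the target."""
    if rec["input"]:
        prompt = (f"### Instruction:\n{rec['instruction']}\n\n"
                  f"### Input:\n{rec['input']}\n\n### Response:\n")
    else:
        prompt = f"### Instruction:\n{rec['instruction']}\n\n### Response:\n"
    return prompt + rec["output"]

print(to_prompt(record))
```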
# Usage and License Notices
The data is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
Train a model with the alpaca-zh dataset: https://github.com/shibing624/textgen
# English Dataset
[Found here](https://huggingface.co/datasets/c-s-ale/alpaca-gpt4-data)
# Citation
```
@article{peng2023gpt4llm,
title={Instruction Tuning with GPT-4},
author={Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
``` | 1,603 | [
[embedding vector truncated] |
FreedomIntelligence/CMB | 2023-08-19T09:45:53.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"medical",
"biology",
"chemistry",
"region:us"
] | FreedomIntelligence |
Chinese Medical Benchmark | coming soon~ | 6 | 108 | 2023-07-20T09:08:03 | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- zh
tags:
- medical
- biology
- chemistry
size_categories:
- 100K<n<1M
---
# CMB: A Comprehensive Medical Benchmark in Chinese

<p align="center">
🌐 <a href="https://cmedbenchmark.llmzoo.com/#home" target="_blank">Website</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/CMB" target="_blank">HuggingFace</a>
</p>
## 🌈 Update
* **[2023.08.01]** 🎉🎉🎉 CMB is published!🎉🎉🎉
## 🌐 Download Data
- (Recommended) Download the [zip file](https://github.com/FreedomIntelligence/CMB/tree/main/data) and unzip:
```bash
git clone "https://github.com/FreedomIntelligence/CMB.git" && cd CMB && unzip "./data/CMB.zip" -d "./data/" && rm "./data/CMB.zip"
```
- Or load our data as follows:
```python
from datasets import load_dataset
# CMB-Exam datasets (multiple-choice and multiple-answer questions)
exam_datasets = load_dataset('FreedomIntelligence/CMB','exam')
# CMB-Clin datasets
clin_datasets = load_dataset('FreedomIntelligence/CMB','clin')
```
## 🥇 Leaderboard
Please Check [Leaderboard](https://cmedbenchmark.llmzoo.com/static/leaderboard.html).
## 🥸 Dataset intro

### Components
- CMB-Exam: Comprehensive multi-level assessment for medical knowledge
- Structure: 6 major categories and 28 subcategories, [View Catalog](catalog.md)
- CMB-test: 400 questions per subcategory, 11,200 questions in total
- CMB-val: 280 questions with solutions and explanations; used as source for CoT and few-shot
- CMB-train: 269359 questions for medical knowledge injection
- CMB-Clin: 74 cases of complex medical inquiries
### CMB-Exam Item
```json
{
"exam_type": "医师考试",
"exam_class": "执业医师",
"exam_subject": "口腔执业医师",
"question": "患者,男性,11岁。近2个月来时有低热(37~38℃),全身无明显症状。查体无明显阳性体征。X线检查发现右肺中部有一直径约0.8cm类圆形病灶,边缘稍模糊,肺门淋巴结肿大。此男孩可能患",
"answer": "D",
"question_type": "单项选择题",
"option": {
"A": "小叶型肺炎",
"B": "浸润性肺结核",
"C": "继发性肺结核",
"D": "原发性肺结核",
"E": "粟粒型肺结核"
}
},
```
- exam_type: major category
- exam_class: sub-category
- exam_subject: Specific departments or subdivisions of disciplines
- question_type: *multiple-choice (单项选择题)* or *multiple-answer (多项选择题)*
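Given these fields, model predictions can be scored item by item — a minimal sketch in which `items` reuses (in abbreviated form) the example above and the `predictions` dict is illustrative:

```python
# Compute simple accuracy for multiple-choice CMB-Exam items.
# `items` abbreviates the example item above; `predictions` maps a
# question string to a predicted option letter and is illustrative.
items = [
    {
        "question": "此男孩可能患",  # abbreviated from the example above
        "answer": "D",
        "question_type": "单项选择题",
    },
]
predictions = {"此男孩可能患": "D"}

def accuracy(items, predictions):
    """Fraction of items whose predicted option letter matches the answer."""
    correct = sum(
        1 for it in items if predictions.get(it["question"]) == it["answer"]
    )
    return correct / len(items)

print(accuracy(items, predictions))  # 1.0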
### CMB-Clin Item
```json
{
"id": 0,
"title": "案例分析-腹外疝",
"description": "现病史\n(1)病史摘要\n 病人,男,49岁,3小时前解大便后出现右下腹疼痛,右下腹可触及一包块,既往体健。\n(2)主诉\n 右下腹痛并自扪及包块3小时。\n\n体格检查\n体温: T 37.8℃,P 101次/分,呼吸22次/分,BP 100/60mmHg,腹软,未见胃肠型蠕动波,肝脾肋下未及,于右侧腹股沟区可扪及一圆形肿块,约4cm×4cm大小,有压痛、界欠清,且肿块位于腹股沟韧带上内方。\n\n辅助检查\n(1)实验室检查\n 血常规:WBC 5.0×109/L,N 78%。\n 尿常规正常。\n(2)多普勒超声检查\n 沿腹股沟纵切可见一多层分布的混合回声区,宽窄不等,远端膨大,边界整齐,长约4~5cm。\n(3)腹部X线检查\n 可见阶梯状液气平。",
"QA_pairs": [
{
"question": "简述该病人的诊断及诊断依据。",
"solution": "诊断:嵌顿性腹股沟斜疝合并肠梗阻。\n诊断依据:\n①右下腹痛并自扪及包块3小时;\n②有腹胀、呕吐,类似肠梗阻表现;腹部平片可见阶梯状液平,考虑肠梗阻可能;腹部B超考虑,\n腹部包块内可能为肠管可能;\n③有轻度毒性反应或是中毒反应,如 T 37.8℃,P 101次/分,白细胞中性分类78%;\n④腹股沟区包块位于腹股沟韧带上内方。"
},
{
"question": "简述该病人的鉴别诊断。",
"solution": "(1)睾丸鞘膜积液:鞘膜积液所呈现的肿块完全局限在阴囊内,其上界可以清楚地摸到;用透光试验检查肿块,鞘膜积液多为透光(阳性),而疝块则不能透光。\n(2)交通性鞘膜积液:肿块的外形与睾丸鞘膜积液相似。于每日起床后或站立活动时肿块缓慢地出现并增大。平卧或睡觉后肿块逐渐缩小,挤压肿块,其体积也可逐渐缩小。透光试验为阳性。\n(3)精索鞘膜积液:肿块较小,在腹股沟管内,牵拉同侧睾丸可见肿块移动。\n(4)隐睾:腹股沟管内下降不全的睾丸可被误诊为斜疝或精索鞘膜积液。隐睾肿块较小,挤压时可出现特有的胀痛感觉。如患侧阴囊内睾丸缺如,则诊断更为明确。\n(5)急性肠梗阻:肠管被嵌顿的疝可伴发急性肠梗阻,但不应仅满足于肠梗阻的诊断而忽略疝的存在;尤其是病人比较肥胖或疝块较小时,更易发生这类问题而导致治疗上的错误。\n(6)此外,腹股沟区肿块还应与以下疾病鉴别:肿大的淋巴结、动(静)脉瘤、软组织肿瘤、脓肿、\n圆韧带囊肿、子宫内膜异位症等。"
},
{
"question": "简述该病人的治疗原则。",
"solution": "嵌顿性疝原则上需要紧急手术治疗,以防止疝内容物坏死并解除伴发的肠梗阻。术前应做好必要的准备,如有脱水和电解质紊乱,应迅速补液加以纠正。手术的关键在于正确判断疝内容物的活力,然后根据病情确定处理方法。在扩张或切开疝环、解除疝环压迫的前提下,凡肠管呈紫黑色,失去光泽和弹性,刺激后无蠕动和相应肠系膜内无动脉搏动者,即可判定为肠坏死。如肠管尚未坏死,则可将其送回腹腔,按一般易复性疝处理,即行疝囊高位结扎+疝修补术。如肠管确已坏死或一时不能肯定肠管是否已失去活力时,则应在病人全身情况允许的前提下,切除该段肠管并进行一期吻合。凡施行肠切除吻合术的病人,因手术区污染,在高位结扎疝囊后,一般不宜作疝修补术,以免因感染而致修补失败。"
}
]
},
```
- title: name of disease
- description: information of patient
- QA_pairs: a series of questions and their solutions based on the description
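A CMB-Clin case can be unrolled into per-question evaluation inputs by pairing the shared `description` with each entry in `QA_pairs` — a minimal sketch on an abbreviated version of the case above:

```python
# Expand one CMB-Clin case into (context, question, reference) triples.
# The case dict is abbreviated from the example above.
case = {
    "title": "案例分析-腹外疝",
    "description": "病人,男,49岁,3小时前解大便后出现右下腹疼痛……",
    "QA_pairs": [
        {"question": "简述该病人的诊断及诊断依据。",
         "solution": "诊断:嵌顿性腹股沟斜疝合并肠梗阻。……"},
        {"question": "简述该病人的鉴别诊断。",
         "solution": "(1)睾丸鞘膜积液……"},
    ],
}

def unroll(case):
    """Pair the shared patient description with every question/solution."""
    return [
        (case["description"], qa["question"], qa["solution"])
        for qa in case["QA_pairs"]
    ]

triples = unroll(case)
print(len(triples))  # 2
```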
## ℹ️ How to Evaluate and Submit
Refer to the instructions in the [CMB repository](https://github.com/FreedomIntelligence/CMB).
## 😘 Citation
Please use the following citation if you intend to use our dataset for training or evaluation:
```
@misc{cmedbenchmark,
title={CMB: Chinese Medical Benchmark},
author={Xidong Wang*, Guiming Hardy Chen*, Dingjie Song*, Zhiyi Zhang*, Qingying Xiao, Xiangbo Wu, Feng Jiang, Jianquan Li, Benyou Wang},
note={Xidong Wang, Guiming Hardy Chen, Dingjie Song, and Zhiyi Zhang contributed equally to this github repo.},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/FreedomIntelligence/CMB}},
}
```
## Acknowledgement
- We thank [Shenzhen Research Institute of Big Data](http://www.sribd.cn/) for their enormous support for this project.
- We thank the following doctors for participating in the human evaluation of CMB-Clin:
- 林士军 (香港中文大学(深圳)附属第二医院)
- 常河
- 许晓爽
| 5,089 | [
[embedding vector truncated] |
Wabbina/moore_dataset_fr_translation_v1.0 | 2023-09-25T16:54:46.000Z | [
"region:us"
] | Wabbina | null | null | 0 | 108 | 2023-09-25T16:46:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: audio
dtype: audio
- name: language
dtype: string
- name: translation
dtype: string
- name: transcription
dtype: string
- name: is_recorded
dtype: int64
- name: is_valid
dtype: bool
- name: dialect
dtype: string
- name: source
dtype: string
- name: duration
dtype: float64
- name: cur_fs
dtype: int64
- name: bitrate
dtype: string
- name: status
dtype: int64
splits:
- name: train
num_bytes: 266997471.85374093
num_examples: 12164
- name: test
num_bytes: 33707027.9340194
num_examples: 1521
- name: valid
num_bytes: 31913920.938622963
num_examples: 1522
download_size: 300575139
dataset_size: 332618420.72638327
---
# Dataset Card for "moore_dataset_fr_translation_v1.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,113 | [
[embedding vector truncated] |
approximatelabs/tablib-v1-sample | 2023-10-13T22:34:05.000Z | [
"size_categories:1M<n<10M",
"license:other",
"arxiv:2310.07875",
"region:us"
] | approximatelabs | null | null | 7 | 108 | 2023-10-04T16:55:20 | ---
license: other
pretty_name: TabLib
size_categories:
- 1M<n<10M
extra_gated_prompt: >-
Access to this dataset is automatically granted once this form is completed.
Note that this access request is for the TabLib sample, not [the full TabLib dataset](https://huggingface.co/datasets/approximatelabs/tablib-v1-full).
extra_gated_fields:
I agree to abide by the license requirements of the data contained in TabLib: checkbox
---
[](https://discord.gg/kW9nBQErGe)
<img src="https://approximatelabs.com/tablib.png" width="800" />
# TabLib Sample
**NOTE**: This is a 0.1% sample of [the full TabLib dataset](https://huggingface.co/datasets/approximatelabs/tablib-v1-full).
TabLib is a minimally-preprocessed dataset of 627M tables (69 TiB) extracted from HTML, PDF, CSV, TSV, Excel, and SQLite files from GitHub and Common Crawl.
This includes 867B tokens of "context metadata": each table includes provenance information and table context such as filename, text before/after, HTML metadata, etc.
For more information, read the [paper](https://arxiv.org/abs/2310.07875) & [announcement blog](https://approximatelabs.com/blog/tablib).
# Dataset Details
## Sources
* **GitHub**: nearly all public GitHub repositories
* **Common Crawl**: the `CC-MAIN-2023-23` crawl
## Reading Tables
Tables are stored as serialized Arrow bytes in the `arrow_bytes` column. To read these, you will need to deserialize the bytes:
```python
import datasets
import pyarrow as pa
# load a single file of the dataset
ds = datasets.load_dataset(
'approximatelabs/tablib-v1-sample',
token='...',
)
df = ds['train'].to_pandas()
tables = [pa.RecordBatchStreamReader(b).read_all() for b in df['arrow_bytes']]
```
## Licensing
This dataset is intended for research use only.
For specific licensing information, refer to the license of the specific datum being used.
# Contact
If you have any questions, comments, or concerns about licensing, PII, etc., please contact us using [this form](https://forms.gle/C74VTWP7L78QDVR67).
# Approximate Labs
TabLib is a project from Approximate Labs. Find us on [Twitter](https://twitter.com/approximatelabs), [Github](https://github.com/approximatelabs), [Linkedin](https://www.linkedin.com/company/approximate-labs), and [Discord](https://discord.gg/kW9nBQErGe).
# Citations
If you use TabLib for any of your research, please cite the TabLib paper:
```
@misc{eggert2023tablib,
title={TabLib: A Dataset of 627M Tables with Context},
author={Gus Eggert and Kevin Huo and Mike Biven and Justin Waugh},
year={2023},
eprint={2310.07875},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 2,734 | [
[embedding vector truncated] |
surathisin/dataset-test | 2023-10-14T09:06:32.000Z | [
"region:us"
] | surathisin | null | null | 0 | 108 | 2023-10-12T12:50:16 | Entry not found | 15 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/e73e5059 | 2023-10-13T09:30:30.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-13T09:30:29 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 155
num_examples: 10
download_size: 1318
dataset_size: 155
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "e73e5059"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/9f8a49b7 | 2023-10-14T19:04:22.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-14T19:04:21 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 235
num_examples: 10
download_size: 1403
dataset_size: 235
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "9f8a49b7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/b745e329 | 2023-10-14T19:04:25.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-14T19:04:24 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 235
num_examples: 10
download_size: 1403
dataset_size: 235
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b745e329"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/54b9ca8c | 2023-10-15T00:28:11.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-15T00:28:11 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 167
num_examples: 10
download_size: 1354
dataset_size: 167
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "54b9ca8c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/519c571e | 2023-10-15T04:32:00.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-15T04:31:59 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 210
num_examples: 10
download_size: 1378
dataset_size: 210
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "519c571e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/19128c17 | 2023-10-16T09:54:49.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-16T09:54:48 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 188
num_examples: 10
download_size: 1339
dataset_size: 188
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "19128c17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/0ed37a8a | 2023-10-16T12:33:04.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-16T12:33:03 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 171
num_examples: 10
download_size: 1326
dataset_size: 171
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "0ed37a8a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/8f19fe4c | 2023-10-16T22:58:43.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-16T22:58:42 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 198
num_examples: 10
download_size: 1374
dataset_size: 198
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "8f19fe4c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/c3d9b753 | 2023-10-16T23:04:28.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-16T23:04:27 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 202
num_examples: 10
download_size: 1389
dataset_size: 202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c3d9b753"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/6a8bc094 | 2023-10-17T04:30:56.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-17T04:30:55 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 208
num_examples: 10
download_size: 1383
dataset_size: 208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "6a8bc094"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
result-kand2-sdxl-wuerst-karlo/eda9bdbf | 2023-10-17T21:01:21.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 108 | 2023-10-17T21:01:20 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 167
num_examples: 10
download_size: 1318
dataset_size: 167
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eda9bdbf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
[embedding vector truncated] |
Hieu-Pham/cooking_squad_splitted | 2023-10-22T08:27:38.000Z | [
"region:us"
] | Hieu-Pham | null | null | 0 | 108 | 2023-10-22T08:27:18 | Entry not found | 15 | [
[embedding vector truncated] |
aquamuse | 2022-11-18T18:21:11.000Z | [
"task_categories:other",
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-genera... | null | AQuaMuSe is a novel scalable approach to automatically mine dual query based multi-document summarization datasets for extractive and abstractive summaries using question answering dataset (Google Natural Questions) and large document corpora (Common Crawl) | @misc{kulkarni2020aquamuse,
title={AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization},
author={Sayali Kulkarni and Sheide Chammas and Wan Zhu and Fei Sha and Eugene Ie},
year={2020},
eprint={2010.12694},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 8 | 107 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|natural_questions
- extended|other-Common-Crawl
- original
task_categories:
- other
- question-answering
- text2text-generation
task_ids:
- abstractive-qa
- extractive-qa
paperswithcode_id: aquamuse
pretty_name: AQuaMuSe
tags:
- query-based-multi-document-summarization
dataset_info:
- config_name: abstractive
features:
- name: query
dtype: string
- name: input_urls
sequence: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 6434909
num_examples: 6253
- name: test
num_bytes: 843181
num_examples: 811
- name: validation
num_bytes: 689109
num_examples: 661
download_size: 7755161
dataset_size: 7967199
- config_name: extractive
features:
- name: query
dtype: string
- name: input_urls
sequence: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 6434909
num_examples: 6253
- name: test
num_bytes: 843181
num_examples: 811
- name: validation
num_bytes: 689109
num_examples: 661
download_size: 7755161
dataset_size: 7967199
---
# Dataset Card for AQuaMuSe
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/google-research-datasets/aquamuse
- **Repository:** https://github.com/google-research-datasets/aquamuse
- **Paper:** https://arxiv.org/pdf/2010.12694.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
AQuaMuSe is a novel, scalable approach for automatically mining dual query-based multi-document summarization datasets (extractive and abstractive) from a question answering dataset (Google Natural Questions) and large document corpora (Common Crawl).
This dataset contains versions of automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in [AQuaMuSe paper](https://arxiv.org/pdf/2010.12694.pdf).
### Supported Tasks and Leaderboards
- **Abstractive** and **Extractive** query-based multi-document summarization
- Question Answering
### Languages
en : English
## Dataset Structure
### Data Instances
- `input_urls`: a `list` of `string` features.
- `query`: a `string` feature.
- `target`: a `string` feature
Example:
```
{
'input_urls': ['https://boxofficebuz.com/person/19653-charles-michael-davis'],
'query': 'who is the actor that plays marcel on the originals',
'target': "In February 2013, it was announced that Davis was cast in a lead role on The CW's new show The
Originals, a spinoff of The Vampire Diaries, centered on the Original Family as they move to New Orleans, where
Davis' character (a vampire named Marcel) currently rules."
}
```
### Data Fields
- `input_urls`: a `list` of `string` features.
- List of URLs to input documents pointing to [Common Crawl](https://commoncrawl.org/2017/07/june-2017-crawl-archive-now-available) to be summarized.
- Dependencies: Documents URLs references the [Common Crawl June 2017 Archive](https://commoncrawl.org/2017/07/june-2017-crawl-archive-now-available).
- `query`: a `string` feature.
- Input query to be used as summarization context. This is derived from [Natural Questions](https://ai.google.com/research/NaturalQuestions/) user queries.
- `target`: a `string` feature
- Summarization target, derived from [Natural Questions](https://ai.google.com/research/NaturalQuestions/) long answers.
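The three fields compose naturally into a query-conditioned summarization example — a minimal sketch using the instance shown above (the `" <doc> "` separator and the fetched document text are illustrative choices, not something the dataset prescribes):

```python
# Build a query-conditioned seq2seq input/target pair from an AQuaMuSe
# instance. The instance reuses the example above (target abbreviated);
# fetching input_urls from Common Crawl is out of scope here.
instance = {
    "input_urls": ["https://boxofficebuz.com/person/19653-charles-michael-davis"],
    "query": "who is the actor that plays marcel on the originals",
    "target": "In February 2013, it was announced that Davis was cast ...",
}

def to_seq2seq(inst, docs):
    """docs: the texts retrieved from input_urls, one string per URL."""
    source = inst["query"] + " <doc> " + " <doc> ".join(docs)
    return source, inst["target"]

src, tgt = to_seq2seq(instance, ["Charles Michael Davis is an American actor ..."])
print(src.startswith(instance["query"]))  # True
```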
### Data Splits
- This dataset has two high-level configurations `abstractive` and `extractive`
- Each configuration has the data splits of `train`, `dev` and `test`
- The data was originally distributed in [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) format and has been parsed into the format specified in [Data Instances](#data-instances)
## Dataset Creation
### Curation Rationale
The dataset is automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in [AQuaMuSe paper](https://arxiv.org/pdf/2010.12694.pdf).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset curator is [sayalikulkarni](https://github.com/google-research-datasets/aquamuse/commits?author=sayalikulkarni), a contributor to the official GitHub repository for this dataset and one of the authors of the dataset's paper. Since account handles for the other authors involved in curation are not currently available, the paper's authors are listed here instead: Sayali Kulkarni, Sheide Chammas, Wan Zhu, Fei Sha, and Eugene Ie.
### Licensing Information
[More Information Needed]
### Citation Information
@misc{kulkarni2020aquamuse,
title={AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization},
author={Sayali Kulkarni and Sheide Chammas and Wan Zhu and Fei Sha and Eugene Ie},
year={2020},
eprint={2010.12694},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
### Contributions
Thanks to [@Karthik-Bhaskar](https://github.com/Karthik-Bhaskar) for adding this dataset.

**Dataset record: `coarse_discourse`**

- **Last modified:** 2023-04-05T10:01:55.000Z · **Created:** 2022-03-02T23:29:22 · **Likes:** 3 · **Downloads:** 107
- **Tags:** `task_categories:text-classification`, `task_ids:multi-class-classification`, `annotations_creators:crowdsourced`, `language_creators:found`, `multilinguality:monolingual`, `size_categories:100K<n<1M`, `source_datasets:original`, `language:en`, `license:cc-by-4.0`, `region:us`
- **Description:** dataset contains discourse annotation and relation on threads from reddit during 2016
- **Citation:** `@inproceedings{coarsediscourse, title={Characterizing Online Discussion Using Coarse Discourse Sequences}, author={Zhang, Amy X. and Culbertson, Bryan and Paritosh, Praveen}, booktitle={Proceedings of the 11th International AAAI Conference on Weblogs and Social Media}, series={ICWSM '17}, year={2017}, location = {Montreal, Canada}}`

---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Coarse Discourse
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: coarse-discourse
dataset_info:
features:
- name: title
dtype: string
- name: is_self_post
dtype: bool
- name: subreddit
dtype: string
- name: url
dtype: string
- name: majority_link
dtype: string
- name: is_first_post
dtype: bool
- name: majority_type
dtype: string
- name: id_post
dtype: string
- name: post_depth
dtype: int32
- name: in_reply_to
dtype: string
- name: annotations
sequence:
- name: annotator
dtype: string
- name: link_to_post
dtype: string
- name: main_type
dtype: string
splits:
- name: train
num_bytes: 45443464
num_examples: 116357
download_size: 4636201
dataset_size: 45443464
---
# Dataset Card for "coarse_discourse"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/google-research-datasets/coarse-discourse
- **Paper:** [Characterizing Online Discussion Using Coarse Discourse Sequences](https://research.google/pubs/pub46055/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.63 MB
- **Size of the generated dataset:** 45.45 MB
- **Total amount of disk used:** 50.08 MB
### Dataset Summary
A large corpus of discourse annotations and relations on ~10K forum threads.
We collect and release a corpus of over 9,000 threads, comprising over 100,000 comments, randomly sampled from Reddit and manually annotated with discourse acts via paid crowdsourcing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 4.63 MB
- **Size of the generated dataset:** 45.45 MB
- **Total amount of disk used:** 50.08 MB
An example of 'train' looks as follows.
```
{
"annotations": {
"annotator": ["fc96a15ab87f02dd1998ff55a64f6478", "e9e4b3ab355135fa954badcc06bfccc6", "31ac59c1734c1547d4d0723ff254c247"],
"link_to_post": ["", "", ""],
"main_type": ["elaboration", "elaboration", "elaboration"]
},
"id_post": "t1_c9b30i1",
"in_reply_to": "t1_c9b2nyd",
"is_first_post": false,
"is_self_post": true,
"majority_link": "t1_c9b2nyd",
"majority_type": "elaboration",
"post_depth": 2,
"subreddit": "100movies365days",
"title": "DTX120: #87 - Nashville",
"url": "https://www.reddit.com/r/100movies365days/comments/1bx6qw/dtx120_87_nashville/"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `title`: a `string` feature.
- `is_self_post`: a `bool` feature.
- `subreddit`: a `string` feature.
- `url`: a `string` feature.
- `majority_link`: a `string` feature.
- `is_first_post`: a `bool` feature.
- `majority_type`: a `string` feature.
- `id_post`: a `string` feature.
- `post_depth`: a `int32` feature.
- `in_reply_to`: a `string` feature.
- `annotations`: a dictionary feature containing:
- `annotator`: a `string` feature.
- `link_to_post`: a `string` feature.
- `main_type`: a `string` feature.
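The `majority_type` field stores the label the annotators converged on. A minimal sketch of how such a majority label could be recomputed from the `annotations` field of the example above — the `majority_type` helper is illustrative, not part of the dataset tooling:

```python
from collections import Counter
from typing import Optional

# `annotations` field copied from the example instance above.
annotations = {
    "annotator": [
        "fc96a15ab87f02dd1998ff55a64f6478",
        "e9e4b3ab355135fa954badcc06bfccc6",
        "31ac59c1734c1547d4d0723ff254c247",
    ],
    "link_to_post": ["", "", ""],
    "main_type": ["elaboration", "elaboration", "elaboration"],
}


def majority_type(annotations: dict) -> Optional[str]:
    """Return the most common `main_type` label if it wins a strict majority."""
    labels = annotations["main_type"]
    label, count = Counter(labels).most_common(1)[0]
    return label if count > len(labels) / 2 else None


print(majority_type(annotations))  # elaboration
```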
### Data Splits
| name |train |
|-------|-----:|
|default|116357|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{coarsediscourse, title={Characterizing Online Discussion Using Coarse Discourse Sequences}, author={Zhang, Amy X. and Culbertson, Bryan and Paritosh, Praveen}, booktitle={Proceedings of the 11th International AAAI Conference on Weblogs and Social Media}, series={ICWSM '17}, year={2017}, location = {Montreal, Canada} }
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu) for adding this dataset.

**Dataset record: `fquad`**

- **Last modified:** 2023-04-05T10:06:27.000Z · **Created:** 2022-03-02T23:29:22 · **Likes:** 8 · **Downloads:** 107
- **Tags:** `task_categories:question-answering`, `task_categories:text-retrieval`, `task_ids:extractive-qa`, `task_ids:closed-domain-qa`, `annotations_creators:crowdsourced`, `language_creators:crowdsourced`, `language_creators:found`, `multilinguality:monolingual`, `size_categories:1K<n<10K`, `source_datase...`
- **Description:** FQuAD: French Question Answering Dataset. We introduce FQuAD, a native French Question Answering Dataset. FQuAD contains 25,000+ question and answer pairs. Finetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%.
- **Citation:** `@ARTICLE{2020arXiv200206071 author = {Martin, d'Hoffschmidt and Maxime, Vidal and Wacim, Belblidia and Tom, Brendlé}, title = "{FQuAD: French Question Answering Dataset}", journal = {arXiv e-prints}, keywords = {Computer Science - Computation and Language}, year = "2020", month = "Feb", eid = {arXiv:2002.06071}, pages = {arXiv:2002.06071}, archivePrefix = {arXiv}, eprint = {2002.06071}, primaryClass = {cs.CL}}`

---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- fr
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
task_ids:
- extractive-qa
- closed-domain-qa
paperswithcode_id: fquad
pretty_name: 'FQuAD: French Question Answering Dataset'
dataset_info:
features:
- name: context
dtype: string
- name: questions
sequence: string
- name: answers
sequence:
- name: texts
dtype: string
- name: answers_starts
dtype: int32
splits:
- name: train
num_bytes: 5898752
num_examples: 4921
- name: validation
num_bytes: 1031456
num_examples: 768
download_size: 0
dataset_size: 6930208
---
# Dataset Card for FQuAD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://fquad.illuin.tech/](https://fquad.illuin.tech/)
- **Paper:** [FQuAD: French Question Answering Dataset](https://arxiv.org/abs/2002.06071)
- **Point of Contact:** [https://www.illuin.tech/contact/](https://www.illuin.tech/contact/)
- **Size of downloaded dataset files:** 3.29 MB
- **Size of the generated dataset:** 6.94 MB
- **Total amount of disk used:** 10.23 MB
### Dataset Summary
FQuAD: French Question Answering Dataset
We introduce FQuAD, a native French Question Answering Dataset.
FQuAD contains 25,000+ question and answer pairs.
Finetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%.
Developed to provide a SQuAD equivalent in the French language. Questions are original and based on high-quality Wikipedia articles.
Please, note this dataset is licensed for non-commercial purposes and users must agree to the following terms and conditions:
1. Use FQuAD only for internal research purposes.
2. Not make any copy except a safety one.
3. Not redistribute it (or part of it) in any way, even for free.
4. Not sell it or use it for any commercial purpose. Contact us for a possible commercial licence.
5. Mention the corpus origin and Illuin Technology in all publications about experiments using FQuAD.
6. Redistribute to Illuin Technology any improved or enriched version you could make of that corpus.
The data must be downloaded manually from: https://fquad.illuin.tech/
### Supported Tasks and Leaderboards
- `closed-domain-qa`, `text-retrieval`: This dataset is intended to be used for `closed-domain-qa`, but can also be used for information retrieval tasks.
### Languages
This dataset is exclusively in French, with context data from Wikipedia and questions from French university students (`fr`).
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.29 MB
- **Size of the generated dataset:** 6.94 MB
- **Total amount of disk used:** 10.23 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answers_starts": [161, 46, 204],
"texts": ["La Vierge aux rochers", "documents contemporains", "objets de spéculations"]
},
"context": "\"Les deux tableaux sont certes décrits par des documents contemporains à leur création mais ceux-ci ne le font qu'indirectement ...",
"questions": ["Que concerne principalement les documents ?", "Par quoi sont décrit les deux tableaux ?", "Quels types d'objets sont les deux tableaux aux yeux des chercheurs ?"]
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `context`: a `string` feature.
- `questions`: a `list` of `string` features.
- `answers`: a dictionary feature containing:
- `texts`: a `string` feature.
- `answers_starts`: a `int32` feature.
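Since `questions` and `answers` are parallel sequences, a record can be flattened into SQuAD-style (question, answer, offset) triples. A minimal sketch on a hypothetical miniature record — the card's own example context is cropped, so its offsets cannot be checked verbatim:

```python
# Hypothetical miniature record in the FQuAD layout documented above; the
# card's own example context is cropped, so its offsets cannot be verified.
record = {
    "context": "Paris est la capitale de la France.",
    "questions": ["Quelle est la capitale de la France ?"],
    "answers": {"texts": ["Paris"], "answers_starts": [0]},
}


def to_qa_triples(record: dict) -> list:
    """Flatten one record into (question, answer_text, start_offset) triples,
    checking that each start offset really points at the answer text."""
    triples = []
    answers = record["answers"]
    for question, text, start in zip(
        record["questions"], answers["texts"], answers["answers_starts"]
    ):
        assert record["context"][start : start + len(text)] == text
        triples.append((question, text, start))
    return triples


print(to_qa_triples(record))
```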
### Data Splits
The FQuAD dataset has 3 splits: _train_, _validation_, and _test_. The _test_ split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.
| Dataset Split | Articles | Paragraphs | Questions |
|---------------|---------:|-----------:|----------:|
| Train         | 117      | 4921       | 20731     |
| Validation    | n/a      | 768        | 3188      |
| Test          | 10       | 532        | 2189      |
## Dataset Creation
### Curation Rationale
The FQuAD dataset was created by Illuin Technology. It was developed to provide a SQuAD equivalent in the French language. Questions are original and based on high-quality Wikipedia articles.
### Source Data
The text used for the contexts are from the curated list of French High-Quality Wikipedia [articles](https://fr.wikipedia.org/wiki/Cat%C3%A9gorie:Article_de_qualit%C3%A9).
### Annotations
Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering.
Wikipedia articles were scraped, and Illuin used an internally developed tool to help annotators ask questions and indicate the answer spans.
Annotators were given paragraph-sized contexts and asked to generate four to five non-trivial questions about information in the context.
### Personal and Sensitive Information
No personal or sensitive information is included in this dataset. This has been manually verified by the dataset curators.
## Considerations for Using the Data
Users should consider this dataset is sampled from Wikipedia data which might not be representative of all QA use cases.
### Social Impact of Dataset
The social biases of this dataset have not yet been investigated.
### Discussion of Biases
The social biases of this dataset have not yet been investigated, though articles have been selected by their quality and objectivity.
### Other Known Limitations
The limitations of the FQuAD dataset have not yet been investigated.
## Additional Information
### Dataset Curators
Illuin Technology: [https://fquad.illuin.tech/](https://fquad.illuin.tech/)
### Licensing Information
The FQuAD dataset is licensed under the [CC BY-NC-SA 3.0](https://creativecommons.org/licenses/by-nc-sa/3.0/fr/) license.
It allows personal and academic research uses of the dataset, but not commercial uses. So concretely, the dataset cannot be used to train a model that is then put into production within a business or a company. For this type of commercial use, we invite FQuAD users to contact [the authors](https://www.illuin.tech/contact/) to discuss possible partnerships.
### Citation Information
```
@ARTICLE{2020arXiv200206071,
author = {Martin, d'Hoffschmidt and Maxime, Vidal and
Wacim, Belblidia and Tom, Brendlé},
title = "{FQuAD: French Question Answering Dataset}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = "2020",
month = "Feb",
eid = {arXiv:2002.06071},
pages = {arXiv:2002.06071},
archivePrefix = {arXiv},
eprint = {2002.06071},
primaryClass = {cs.CL}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
Thanks to [@ManuelFay](https://github.com/manuelfay) for providing information on the dataset creation process.

**Dataset record: `lc_quad`**

- **Last modified:** 2023-04-05T10:09:15.000Z · **Created:** 2022-03-02T23:29:22 · **Likes:** 5 · **Downloads:** 107
- **Tags:** `task_categories:question-answering`, `annotations_creators:crowdsourced`, `language_creators:found`, `multilinguality:monolingual`, `size_categories:10K<n<100K`, `source_datasets:original`, `language:en`, `license:cc-by-3.0`, `knowledge-base-qa`, `region:us`
- **Description:** LC-QuAD 2.0 is a Large Question Answering dataset with 30,000 pairs of question and its corresponding SPARQL query. The target knowledge base is Wikidata and DBpedia, specifically the 2018 version. Please see our paper for details about the dataset creation process and framework.
- **Citation:** `@inproceedings{dubey2017lc2, title={LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia}, author={Dubey, Mohnish and Banerjee, Debayan and Abdelkawi, Abdelrahman and Lehmann, Jens}, booktitle={Proceedings of the 18th International Semantic Web Conference (ISWC)}, year={2019}, organization={Springer}}`

---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-3.0
multilinguality:
- monolingual
pretty_name: 'LC-QuAD 2.0: Large-scale Complex Question Answering Dataset'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: lc-quad-2-0
tags:
- knowledge-base-qa
dataset_info:
features:
- name: NNQT_question
dtype: string
- name: uid
dtype: int32
- name: subgraph
dtype: string
- name: template_index
dtype: int32
- name: question
dtype: string
- name: sparql_wikidata
dtype: string
- name: sparql_dbpedia18
dtype: string
- name: template
dtype: string
- name: paraphrased_question
dtype: string
splits:
- name: train
num_bytes: 16637751
num_examples: 19293
- name: test
num_bytes: 4067092
num_examples: 4781
download_size: 3959901
dataset_size: 20704843
---
# Dataset Card for LC-QuAD 2.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://lc-quad.sda.tech/](http://lc-quad.sda.tech/)
- **Repository:** https://github.com/AskNowQA/LC-QuAD2.0
- **Paper:** [LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia](https://api.semanticscholar.org/CorpusID:198166992)
- **Point of Contact:** [Mohnish Dubey](mailto:dubey@cs.uni-bonn.de) or [dubey.mohnish5@gmail.com](mailto:dubey.mohnish5@gmail.com)
- **Size of downloaded dataset files:** 3.87 MB
- **Size of the generated dataset:** 20.73 MB
- **Total amount of disk used:** 24.60 MB
### Dataset Summary
LC-QuAD 2.0 is a large question answering dataset with 30,000 pairs of questions and their corresponding SPARQL queries. The target knowledge bases are Wikidata and DBpedia, specifically the 2018 versions. Please see the paper for details about the dataset creation process and framework.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 3.87 MB
- **Size of the generated dataset:** 20.73 MB
- **Total amount of disk used:** 24.60 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"NNQT_question": "What is the {periodical literature} for {mouthpiece} of {Delta Air Lines}",
"paraphrased_question": "What is Delta Air Line's periodical literature mouthpiece?",
"question": "What periodical literature does Delta Air Lines use as a moutpiece?",
"sparql_dbpedia18": "\"select distinct ?obj where { ?statement <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> <http://wikidata.dbpedia.org/resou...",
"sparql_wikidata": " select distinct ?obj where { wd:Q188920 wdt:P2813 ?obj . ?obj wdt:P31 wd:Q1002697 } ",
"subgraph": "simple question right",
"template": " <S P ?O ; ?O instanceOf Type>",
"template_index": 65,
"uid": 19719
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `NNQT_question`: a `string` feature.
- `uid`: a `int32` feature.
- `subgraph`: a `string` feature.
- `template_index`: a `int32` feature.
- `question`: a `string` feature.
- `sparql_wikidata`: a `string` feature.
- `sparql_dbpedia18`: a `string` feature.
- `template`: a `string` feature.
- `paraphrased_question`: a `string` feature.
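The `sparql_wikidata` strings embed Wikidata entity (`wd:Q...`) and property (`wdt:P...`) identifiers, which can be pulled out with a regular expression. A minimal sketch using the example query above — the `wikidata_ids` helper is illustrative, not part of the dataset tooling:

```python
import re

# `sparql_wikidata` field copied from the example instance above.
sparql = (
    " select distinct ?obj where { wd:Q188920 wdt:P2813 ?obj . "
    "?obj wdt:P31 wd:Q1002697 } "
)


def wikidata_ids(query: str):
    """Collect the entity (Q...) and property (P...) IDs used in a query."""
    entities = re.findall(r"wd:(Q\d+)", query)
    properties = re.findall(r"wdt:(P\d+)", query)
    return entities, properties


print(wikidata_ids(sparql))  # (['Q188920', 'Q1002697'], ['P2813', 'P31'])
```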
### Data Splits
| name |train|test|
|-------|----:|---:|
|default|19293|4781|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
LC-QuAD 2.0 is licensed under a [Creative Commons Attribution 3.0 Unported License](http://creativecommons.org/licenses/by/3.0/deed.en_US).
### Citation Information
```
@inproceedings{dubey2017lc2,
title={LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia},
author={Dubey, Mohnish and Banerjee, Debayan and Abdelkawi, Abdelrahman and Lehmann, Jens},
booktitle={Proceedings of the 18th International Semantic Web Conference (ISWC)},
year={2019},
organization={Springer}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.

**Dataset record: `sem_eval_2020_task_11`**

- **Last modified:** 2023-01-25T14:43:56.000Z · **Created:** 2022-03-02T23:29:22 · **Likes:** 5 · **Downloads:** 107
- **Tags:** `task_categories:text-classification`, `task_categories:token-classification`, `annotations_creators:expert-generated`, `language_creators:found`, `multilinguality:monolingual`, `size_categories:n<1K`, `source_datasets:original`, `language:en`, `license:unknown`, `propaganda-span-identification`, `...`
- **Description:** Propagandistic news articles use specific techniques to convey their message, such as whataboutism, red Herring, and name calling, among many others. The Propaganda Techniques Corpus (PTC) allows to study automatic algorithms to detect them. We provide a permanent leaderboard to allow researchers both to advertise their progress and to be up-to-speed with the state of the art on the tasks offered (see below for a definition).
- **Citation:** `@misc{martino2020semeval2020, title={SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles}, author={G. Da San Martino and A. Barrón-Cedeño and H. Wachsmuth and R. Petrov and P. Nakov}, year={2020}, eprint={2009.02696}, archivePrefix={arXiv}, primaryClass={cs.CL}}`

---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
- token-classification
task_ids: []
pretty_name: SemEval-2020 Task 11
tags:
- propaganda-span-identification
- propaganda-technique-classification
dataset_info:
features:
- name: article_id
dtype: string
- name: text
dtype: string
- name: span_identification
sequence:
- name: start_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: technique_classification
sequence:
- name: start_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: technique
dtype:
class_label:
names:
'0': Appeal_to_Authority
'1': Appeal_to_fear-prejudice
'2': Bandwagon,Reductio_ad_hitlerum
'3': Black-and-White_Fallacy
'4': Causal_Oversimplification
'5': Doubt
'6': Exaggeration,Minimisation
'7': Flag-Waving
'8': Loaded_Language
'9': Name_Calling,Labeling
'10': Repetition
'11': Slogans
'12': Thought-terminating_Cliches
'13': Whataboutism,Straw_Men,Red_Herring
splits:
- name: train
num_bytes: 2358613
num_examples: 371
- name: test
num_bytes: 454100
num_examples: 90
- name: validation
num_bytes: 396410
num_examples: 75
download_size: 0
dataset_size: 3209123
---
# Dataset Card for SemEval-2020 Task 11
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PTC TASKS ON "DETECTION OF PROPAGANDA TECHNIQUES IN NEWS ARTICLES"](https://propaganda.qcri.org/ptc/index.html)
- **Paper:** [SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles](https://arxiv.org/abs/2009.02696)
- **Leaderboard:** [PTC Tasks Leaderboard](https://propaganda.qcri.org/ptc/leaderboard.php)
- **Point of Contact:** [Task organizers contact](mailto:semeval-2020-task-11-organizers@googlegroups.com)
### Dataset Summary
Propagandistic news articles use specific techniques to convey their message, such as whataboutism, red herrings, and name calling, among many others. The Propaganda Techniques Corpus (PTC) enables the study of automatic algorithms to detect them. We provide a permanent leaderboard to allow researchers both to advertise their progress and to stay up to speed with the state of the art on the tasks offered (see below for a definition).
### Supported Tasks and Leaderboards
More information on scoring methodology can be found in [propaganda tasks evaluation document](https://propaganda.qcri.org/ptc/data/propaganda_tasks_evaluation.pdf)
### Languages
This dataset consists of English news articles.
## Dataset Structure
### Data Instances
Each example is structured as follows:
```
{
"span_identification": {
"end_char_offset": [720, 6322, ...],
"start_char_offset": [683, 6314, ...]
},
"technique_classification": {
    "end_char_offset": [720, 6322, ...],
    "start_char_offset": [683, 6314, ...],
    "technique": [7, 8, ...]
},
"text": "Newt Gingrich: The truth about Trump, Putin, and Obama\n\nPresident Trump..."
}
```
### Data Fields
- `text`: The full text of the news article.
- `span_identification`: a dictionary feature containing:
- `start_char_offset`: The start character offset of the span for the SI task
- `end_char_offset`: The end character offset of the span for the SI task
- `technique_classification`: a dictionary feature containing:
- `start_char_offset`: The start character offset of the span for the TC task
  - `end_char_offset`: The end character offset of the span for the TC task
- `technique`: the propaganda technique classification label, with possible values including `Appeal_to_Authority`, `Appeal_to_fear-prejudice`, `Bandwagon,Reductio_ad_hitlerum`, `Black-and-White_Fallacy`, `Causal_Oversimplification`.
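As a toy illustration (the article text and offsets below are invented for the example, not taken from the corpus), the SI character offsets can be used to slice the annotated spans out of an article:

```python
# Hypothetical example in the shape of a corpus row (values are made up).
example = {
    "text": "Newt Gingrich: The truth about Trump, Putin, and Obama",
    "span_identification": {
        "start_char_offset": [0, 31],
        "end_char_offset": [13, 36],
    },
}

def extract_spans(example):
    """Slice the article text with each (start, end) character-offset pair."""
    si = example["span_identification"]
    return [
        example["text"][start:end]
        for start, end in zip(si["start_char_offset"], si["end_char_offset"])
    ]

print(extract_spans(example))  # ['Newt Gingrich', 'Trump']
```

The same slicing applies to the `technique_classification` offsets, paired element-wise with the `technique` labels.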
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Articles | 371 | 75 | 90 |
| Total Annotations SI | 5468 | 940 | 0 |
| Total Annotations TC | 6128 | 1063 | 0 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
In order to build the PTC-SemEval20 corpus, we retrieved a sample of news articles from the period starting in mid-2017 and ending in early 2019. We selected 13 propaganda and 36 non-propaganda news media outlets, as labeled by Media Bias/Fact Check, and we retrieved articles from these sources. We deduplicated the articles on the basis of word n-gram matching (Barrón-Cedeño and Rosso, 2009) and we discarded faulty entries (e.g., empty entries from blocking websites).
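As an illustrative sketch only (a simplified stand-in, not the actual procedure from Barrón-Cedeño and Rosso, 2009), near-duplicate articles can be flagged by the Jaccard overlap of their word n-gram sets:

```python
def word_ngrams(text, n=3):
    """Set of word n-grams (as tuples) of a text."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def ngram_overlap(a, b, n=3):
    """Jaccard overlap of the word n-gram sets of two texts."""
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Identical texts overlap fully; unrelated texts not at all.
print(ngram_overlap("the quick brown fox jumps", "the quick brown fox jumps"))  # 1.0
print(ngram_overlap("the quick brown fox jumps", "one two three four five"))   # 0.0
```

An article whose overlap with an already-kept article exceeds a chosen threshold would then be discarded as a duplicate.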
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The annotation job consisted of both spotting a propaganda snippet and, at the same time, labeling it with a specific propaganda technique. The annotation guidelines are shown in the appendix of the paper; they are also available online. We ran the annotation in two phases: (i) two annotators label an article independently and (ii) the same two annotators gather together with a consolidator to discuss dubious instances (e.g., spotted only by one annotator, boundary discrepancies, label mismatch, etc.). This protocol was designed after a pilot annotation stage, in which a relatively large number of snippets had been spotted by one annotator only. The annotation team consisted of six professional annotators from A Data Pro trained to spot and label the propaganda snippets from free text. The job was carried out on an instance of the Anafora annotation platform (Chen and Styler, 2013), which we tailored for our propaganda annotation task.
We evaluated the annotation process in terms of γ agreement (Mathet et al., 2015) between each of the annotators and the final gold labels. The γ agreement on the annotated articles is on average 0.6; see (Da San Martino et al., 2019b) for a more detailed discussion of inter-annotator agreement. The training and the development parts of the PTC-SemEval20 corpus are the same as the training and the testing datasets described in (Da San Martino et al., 2019b). The test part of the PTC-SemEval20 corpus consists of 90 additional articles selected from the same sources as for training and development. For the test articles, we further extended the annotation process by adding one extra consolidation step: we revisited all the articles in that partition and performed the necessary adjustments to the spans and to the labels, after a thorough discussion and convergence among at least three experts who were not involved in the initial annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{martino2020semeval2020,
title={SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles},
author={G. Da San Martino and A. Barrón-Cedeño and H. Wachsmuth and R. Petrov and P. Nakov},
year={2020},
eprint={2009.02696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset.
spanish_billion_words | lastModified: 2022-11-03T16:16:07.000Z | tags: [task_categories:other, task_categories:text-generation, task_categories:fill-mask, task_ids:language-modeling, task_ids:masked-language-modeling, annotations_creators:no-annotation, language_creators:expert-generated, multilinguality:monolingual, size_categories:10M<n<100M, …] | likes: 8 | downloads: 107 | created: 2022-03-02T23:29:22
---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: sbwce
pretty_name: Spanish Billion Word Corpus and Embeddings
dataset_info:
features:
- name: text
dtype: string
config_name: corpus
splits:
- name: train
num_bytes: 8950895954
num_examples: 46925295
download_size: 2024166993
dataset_size: 8950895954
---
# Dataset Card for Spanish Billion Words
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Spanish Billion Words homepage](https://crscardellino.github.io/SBWCE/)
- **Point of Contact:** [Cristian Cardellino](mailto:ccardellino@unc.edu.ar) (Corpus Creator), [María Grandury](mailto:mariagrandury@gmail.com) (Corpus Submitter)
### Dataset Summary
The Spanish Billion Words Corpus is an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
These resources include the Spanish portions of SenSem, the Ancora Corpus, some OPUS Project corpora, the Europarl corpus,
the Tibidabo Treebank, the IULA Spanish LSP Treebank, and dumps from the Spanish Wikipedia, Wikisource and Wikibooks.
This corpus is a compilation of 100 text files. Each line of these files represents one of the 50 million sentences from the corpus.
### Supported Tasks and Leaderboards
This dataset can be used for language modelling and for pretraining language models.
### Languages
The text in this dataset is in Spanish, BCP-47 code: 'es'.
## Dataset Structure
### Data Instances
Each example in this dataset is a sentence in Spanish:
```
{'text': 'Yo me coloqué en un asiento próximo a una ventana cogí un libro de una mesa y empecé a leer'}
```
### Data Fields
- `text`: a sentence in Spanish
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
The Spanish Billion Words Corpus was created to train word embeddings using the word2vec algorithm provided by the gensim package.
### Source Data
#### Initial Data Collection and Normalization
The corpus was created compiling the following resources:
- The Spanish portion of SenSem.
- The Spanish portion of the [Ancora Corpus](http://clic.ub.edu/corpus/en).
- [Tibidabo Treebank and IULA Spanish LSP Treebank](http://lod.iula.upf.edu/resources/metadata_TRL_Tibidabo_LSP_treebank_ES).
- The Spanish portion of the following [OPUS Project](http://opus.nlpl.eu/index.php) Corpora:
- The [books](http://opus.nlpl.eu/Books.php) aligned by [Andras Farkas](https://farkastranslations.com/).
- The [JRC-Acquis](http://opus.nlpl.eu/JRC-Acquis.php) collection of legislative text of the European Union.
- The [News Commentary](http://opus.nlpl.eu/News-Commentary.php) corpus.
- The [United Nations](http://opus.nlpl.eu/UN.php) documents compiled by [Alexandre Rafalovitch](https://www.outerthoughts.com/) and [Robert Dale](http://web.science.mq.edu.au/~rdale/).
- The Spanish portion of the [Europarl](http://statmt.org/europarl/) (European Parliament), compiled by [Philipp Koehn](https://homepages.inf.ed.ac.uk/pkoehn/).
- Dumps from the Spanish [Wikipedia](https://es.wikipedia.org/wiki/Wikipedia:Portada), [Wikisource](https://es.wikisource.org/wiki/Portada) and [Wikibooks](https://es.wikibooks.org/wiki/Portada) on date 2015-09-01, parsed with the Wikipedia Extractor.
All the annotated corpora (like Ancora, SenSem, and Tibidabo) were stripped of their annotations, and
the parallel corpora (most coming from the OPUS Project) were preprocessed to keep only their Spanish portions.
Once the whole corpus was unannotated, all non-alphanumeric characters were replaced with whitespaces,
all numbers with the token “DIGITO”, and all multiple whitespaces with a single whitespace.
The capitalization of the words remained unchanged.
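A minimal sketch of that normalization in Python (an approximation of the described steps, not the original processing script):

```python
import re

def normalize(text: str) -> str:
    """Approximate the corpus normalization described above: non-alphanumeric
    characters become whitespace, numbers become the token "DIGITO", runs of
    whitespace collapse to a single space, and capitalization is kept."""
    text = re.sub(r"[^\w\s]", " ", text)   # punctuation -> whitespace (digits survive, \w is Unicode-aware)
    text = re.sub(r"\d+", "DIGITO", text)  # mask numbers
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return text

print(normalize("En 1492, Colón llegó a América."))
# → "En DIGITO Colón llegó a América"
```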
#### Who are the source language producers?
The data was compiled and processed by Cristian Cardellino.
### Annotations
The dataset is unannotated.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The data was collected and processed by Cristian Cardellino.
### Licensing Information
The dataset is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license
[(CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{cardellinoSBWCE,
author = {Cardellino, Cristian},
title = {Spanish {B}illion {W}ords {C}orpus and {E}mbeddings},
url = {https://crscardellino.github.io/SBWCE/},
month = {August},
year = {2019}
}
```
### Contributions
Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset.
Lacito/pangloss | lastModified: 2022-09-06T18:02:34.000Z | tags: [task_categories:automatic-speech-recognition, annotations_creators:expert-generated, language_creators:expert-generated, multilinguality:multilingual, multilinguality:translation, source_datasets:original, language:jya, language:nru, license:cc-by-nc-sa-4.0, region:us] | author: Lacito | description: These datasets are extracts from the Pangloss collection and have been preprocessed for ASR experiments in Na and Japhug. | likes: 3 | downloads: 107 | created: 2022-03-02T23:29:22
---
pretty_name: Pangloss
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- jya
- nru
language_bcp47:
- x-japh1234
- x-yong1288
language_details: jya consists of japh1234 (Glottolog code); nru consists of yong1288 (Glottolog code)
license: cc-by-nc-sa-4.0
multilinguality:
- multilingual
- translation
size_categories:
yong1288:
- 10K<n<100K
japh1234:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids:
- speech-recognition
---
# Dataset Card for Pangloss
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Web interface of the Pangloss Collection, which hosts the data sets](https://pangloss.cnrs.fr/)
- **Repository:** [GitHub repository of the Pangloss Collection, which hosts the data sets](https://github.com/CNRS-LACITO/Pangloss/)
- **Paper:** [A paper about the Pangloss Collection, including a presentation of the Document Type Definition](https://halshs.archives-ouvertes.fr/halshs-01003734)
[A paper in French about the deposit in Zenodo](https://halshs.archives-ouvertes.fr/halshs-03475436)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Benjamin Galliot](mailto:b.g01lyon@gmail.com)
### Dataset Summary
Two audio corpora of minority languages of China (Japhug and Na), with transcriptions, proposed as reference data sets for experiments in Natural Language Processing. The data, collected and transcribed in the course of immersion fieldwork, amount to a total of about 1,900 minutes in Japhug and 200 minutes in Na. By making them available in an easily accessible and usable form, we hope to facilitate the development and deployment of state-of-the-art NLP tools for the full range of human languages. There is an associated tool for assembling datasets from the Pangloss Collection (an open archive) in a way that ensures full reproducibility of experiments conducted on these data.
The Document Type Definition for the XML files is available here:
http://cocoon.huma-num.fr/schemas/Archive.dtd
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Japhug (ISO 639-3 code: jya, Glottolog language code: japh1234) and Yongning Na (ISO 639-3 code: nru, Glottolog language code: yong1288) are two minority languages of China. The documents in the dataset have a transcription in the endangered language. Some of the documents have translations into French, English, and Chinese.
## Dataset Structure
### Data Instances
A typical data row includes the path, audio, sentence, document type and several translations (depending on the sub-corpus).
```json
{
  "path": "cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav",
  "audio": "{'path': 'na/cocoon-db3cf0e1-30bb-3225-b012-019252bb4f4d_C1/Tone_BodyPartsOfAnimals_12_F4_2008_withEGG_069.wav', 'array': array([0.00018311, 0.00015259, 0.00021362, ..., 0.00030518, 0.00030518, 0.00054932], dtype=float32), 'sampling_rate': 16000}",
  "sentence": "ʈʂʰɯ˧ | ɖɤ˧mi˧-ɬi˧pi˩ ɲi˩",
  "doctype": "WORDLIST",
  "translation:zh": "狐狸的耳朵",
  "translation:fr": "oreilles de renard",
  "translation:en": "fox's ears"
}
```
### Data Fields
- `path`: the path to the audio file
- `audio`: a dictionary containing the path to the audio file, the decoded audio array, and the sampling rate
- `sentence`: the sentence pronounced by the native speaker
- `doctype`: the document type (a text or a word list)
- `translation:XX`: the translation of the sentence into language XX
### Data Splits
The train, test, and validation splits have all been reviewed and were split randomly (ratio 8:1:1) at the sentence level (after extraction from the various files).
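A reproducible sketch of such a split (an assumed procedure; the seed and exact shuffling are illustrative, not those used to build the released splits):

```python
import random

def split_8_1_1(sentences, seed=0):
    """Shuffle sentence-level examples and split 80/10/10 into train/dev/test."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_dev = int(n * 0.1)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_dev],
            shuffled[n_train + n_dev:])

train, dev, test = split_8_1_1(list(range(100)))
print(len(train), len(dev), len(test))  # 80 10 10
```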
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
The dataset was collected in immersion fieldwork for language documentation. It contributes to the documentation and study of the world's languages by providing documents of connected, spontaneous speech recorded in their cultural context and transcribed in consultation with native speakers. The impacts concern research, and society at large: a guiding principle of the Pangloss Collection, which hosts the data sets, is that a close association between documentation and research is highly profitable to both. A range of possibilities for uses exist, for the scientific and speaker communities and for the general public.
### Discussion of Biases
The corpora are single-speaker and hence clearly do not reflect the sociolinguistic and dialectal diversity of the languages. No claim is made that the language variety described constitutes a 'standard'.
### Other Known Limitations
The translations are entirely hand-made by experts working on these languages; the amount and type of translations available varies from document to document, as not all documents have translations and not all translated documents have the same translation languages (Chinese, French, English...).
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
SetFit/hate_speech_offensive | lastModified: 2022-01-15T21:47:31.000Z | tags: [region:us] | author: SetFit | likes: 1 | downloads: 107 | created: 2022-03-02T23:29:22
# hate_speech_offensive
This dataset is a version of [hate_speech_offensive](https://huggingface.co/datasets/hate_speech_offensive), split into train and test sets.
mnazari/nena_speech_1_0_test | lastModified: 2023-10-27T08:58:56.000Z | tags: [task_categories:automatic-speech-recognition, task_categories:text-to-speech, task_categories:translation, annotations_creators:crowdsourced, annotations_creators:Geoffrey Khan, language_creators:crowdsourced, multilinguality:multilingual, size_categories:10K<n<100K, …] | author: mnazari | likes: 0 | downloads: 107 | created: 2023-09-20T04:23:27
---
pretty_name: NENA Speech Dataset 1.0 (test)
annotations_creators:
- crowdsourced
- Geoffrey Khan
language_creators:
- crowdsourced
language:
- aii
- cld
- huy
- lsd
- trg
- aij
- bhn
- hrt
- kqd
- syn
license:
- cc0-1.0
multilinguality:
- multilingual
task_categories:
- automatic-speech-recognition
- text-to-speech
- translation
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
---
# Dataset Card for NENA Speech Dataset 1.0 (test)
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [How to Use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
<!-- - [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations) -->
- [Building the Dataset](#building-the-dataset)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
<!-- - [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations) -->
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## ⚠️ This is a temporary repository that will be replaced by the end of 2023
## Dataset Summary
NENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
NENA Speech consists of multimodal examples of speech in the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some never will, due to the recent loss of their final speakers.
## Dataset Description
- **Homepage**: https://crowdsource.nenadb.dev/
- **Point of Contact:** [Matthew Nazari](mailto:matthewnazari@college.harvard.edu)
## Languages
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
Speakers of the Christian dialects call their language Assyrian and Chaldean in English. In their language these speakers use multiple different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, lišana didan, all meaning "our language". Some names reflect the consciousness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).
NENA Speech has a subset for all of the over 150 NENA dialects. Not all dialects have examples available yet. Some dialects will never have examples available due to the loss of their final speakers in recent years.
## How to Use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, simply specify the corresponding language config name (e.g., "urmi (christian)" for the dialect of the Assyrian Christians of Urmi):
```python
from datasets import load_dataset
nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
The NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:
1. **Unlabeled speech examples:** these contain audio of speech (`audio`) but no accompanying transcription (`transcription`) or translation (`translation`). This is useful for representation learning.
2. **Transcribed speech examples:** these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.
3. **Transcribed and translated speech examples:** these kinds of examples contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.
Make sure to filter for the kinds of examples you need for your task before using it.
```json
{
"transcription": "gu-mdìta.ˈ",
"translation": "in the town.",
"audio": {
"path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
"array": array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
"sampling_rate": 48000
},
"locale": "IRN",
"proficiency": "proficient as mom",
"age": "70's",
"crowdsourced": true,
"unlabeled": true,
"interrupted": true,
  "client_id": "gwurt1g1ln",
  "path": "et/train/nena_speech_0uk14ofpom196aj.mp3"
}
```
### Data Fields
- `transcription (string)`: The transcription of what was spoken (e.g. `"beta"`)
- `translation (string)`: The translation of what was spoken in English (e.g. `"house"`)
- `audio (dict)`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
- `locale (string)`: The locale of the speaker
- `proficiency (string)`: The proficiency of the speaker
- `age (string)`: The age of the speaker (e.g. `"20's"`, `"50's"`, `"100+"`)
- `crowdsourced (bool)`: Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources
- `interrupted (bool)`: Indicates whether the example was interrupted with the speaker making sound effects or switching into another language
- `client_id (string)`: An id for which client (voice) made the recording
- `path (string)`: The path to the audio file
### Data Splits
The examples have been subdivided into three portions:
1. **dev:** the validation split (10%)
2. **test:** the test split (10%)
3. **train:** the train split (80%)
All three splits contain only data that has been reviewed and deemed of high quality.
## Dataset Creation
<!-- ### Curation Rationale
[Needs More Information]
### Source Data
#### Language Documentation Resources
[Needs More Information]
#### Webscraping Facebook
[Needs More Information]
#### Crowdsourcing
[Needs More Information]
### Annotations
[Needs More Information] -->
### Building the Dataset
The NENA Speech dataset itself is built using `build.py`.
First, install the necessary requirements.
```
pip install -r requirements.txt
```
Next, build the dataset.
```
python build.py --build
```
Finally, push to the HuggingFace dataset repository.
## Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in the NENA Speech dataset.
## Data Preprocessing
The dataset consists of three different kinds of examples (see [Data Instances](#data-instances)).
Make sure to filter for the kinds of examples you need for your task before using it. For example, for automatic speech recognition you will want to filter for examples with transcriptions.
In most tasks, you will also want to filter out examples that are interrupted (e.g., by the speaker making sound effects or switching into another language).
```python
from datasets import load_dataset
ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
def filter_for_asr(example):
return example['transcription'] and not example['interrupted']
ds = ds.filter(filter_for_asr, desc="filter dataset")
```
Transcriptions include markers of linguistic and acoustic features which may be removed in certain tasks (e.g. word stress, nuclear stress, intonation group markers, vowel length).
```python
from datasets import load_dataset
ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
def prepare_dataset(batch):
chars_to_remove = ['ˈ', '̀', '́', '̄', '̆', '.', ',', '?', '!']
for char in chars_to_remove:
batch["transcription"] = batch["transcription"].replace(char, "")
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
<!-- ## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information] -->
## Additional Information
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
This work has not yet been published.
bzantium/LongBench | lastModified: 2023-09-25T04:03:43.000Z | tags: [task_categories:question-answering, task_categories:text-generation, task_categories:summarization, task_categories:conversational, task_categories:text-classification, size_categories:1K<n<10K, language:en, language:zh, Long Context, arxiv:2308.14508, arxiv:2108.00573, …] | author: bzantium | description: LongBench is a comprehensive benchmark for multilingual and multi-task purposes, with the goal to fully measure and evaluate the ability of pre-trained language models to understand long text. This dataset consists of twenty different tasks, covering key long-text application scenarios such as multi-document QA, single-document QA, summarization, few-shot learning, synthetic tasks, and code completion. | likes: 0 | downloads: 107 | created: 2023-09-21T06:13:03
---
task_categories:
- question-answering
- text-generation
- summarization
- conversational
- text-classification
language:
- en
- zh
tags:
- Long Context
size_categories:
- 1K<n<10K
---
# Introduction
**LongBench** is the first benchmark for bilingual, multitask, and comprehensive assessment of **long context understanding** capabilities of large language models. LongBench includes different languages (Chinese and English) to provide a more comprehensive evaluation of the large models' multilingual capabilities on long contexts. In addition, LongBench is composed of six major categories and twenty-one different tasks, covering key long-text application scenarios such as single-document QA, multi-document QA, summarization, few-shot learning, synthetic tasks and code completion.
We are fully aware of the potentially high costs involved in the model evaluation process, especially in the context of long context scenarios (such as manual annotation costs or API call costs). Therefore, we adopt a fully automated evaluation method, aimed at measuring and evaluating the model's ability to understand long contexts at the lowest cost.
LongBench includes 14 English tasks, 5 Chinese tasks, and 2 code tasks, with the average length of most tasks ranging from 5k to 15k, and a total of 4,750 test samples. For detailed statistics and construction methods of the LongBench tasks, please refer to [task.md](task.md). In addition, we provide LongBench-E, a test set with a more uniform length distribution constructed by uniform sampling, with comparable amounts of data in the 0-4k, 4k-8k, and 8k+ length intervals, to enable analysis of a model's performance at different input lengths.
GitHub repo for LongBench: https://github.com/THUDM/LongBench
arXiv paper for LongBench: https://arxiv.org/pdf/2308.14508.pdf
# How to use it?
#### Loading Data
```python
from datasets import load_dataset
datasets = ["narrativeqa", "qasper", "multifieldqa_en", "multifieldqa_zh", "hotpotqa", "2wikimqa", "musique", \
"dureader", "gov_report", "qmsum", "multi_news", "vcsum", "trec", "triviaqa", "samsum", "lsht", \
"passage_count", "passage_retrieval_en", "passage_retrieval_zh", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', dataset, split='test')
```
Similarly, you can load the **LongBench-E** data
```python
from datasets import load_dataset
datasets = ["qasper", "multifieldqa_en", "hotpotqa", "2wikimqa", "gov_report", "multi_news", "trec", \
"triviaqa", "samsum", "passage_count", "passage_retrieval_en", "lcc", "repobench-p"]
for dataset in datasets:
data = load_dataset('THUDM/LongBench', f"{dataset}_e", split='test')
```
Alternatively, you can download the folder from [this link](https://huggingface.co/datasets/THUDM/LongBench/resolve/main/data.zip) to load the data.
#### Data Format
All data in **LongBench** (LongBench-E) are standardized to the following format:
```json
{
"input": "The input/command for the task, usually short, such as questions in QA, queries in Few-shot tasks, etc",
"context": "The long context required for the task, such as documents, cross-file code, few-shot examples in Few-shot tasks",
"answers": "A List of all true answers",
"length": "Total length of the first three items (counted in characters for Chinese and words for English)",
"dataset": "The name of the dataset to which this piece of data belongs",
"language": "The language of this piece of data",
"all_classes": "All categories in classification tasks, null for non-classification tasks",
"_id": "Random id for each piece of data"
}
```
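As a sketch of how these records might be handled, the snippet below buckets rows by the `length` field, mirroring the 0-4k / 4-8k / 8k+ intervals used by LongBench-E. The sample records are illustrative stand-ins, not real LongBench data.

```python
from collections import Counter

def length_bucket(length: int) -> str:
    """Map a record's `length` field to a LongBench-E style interval."""
    if length < 4000:
        return "0-4k"
    if length < 8000:
        return "4-8k"
    return "8k+"

# Illustrative stand-ins for real LongBench rows.
records = [
    {"_id": "a", "dataset": "qasper", "language": "en", "length": 3619},
    {"_id": "b", "dataset": "hotpotqa", "language": "en", "length": 9151},
    {"_id": "c", "dataset": "2wikimqa", "language": "en", "length": 4887},
]

buckets = Counter(length_bucket(r["length"]) for r in records)
print(dict(buckets))  # {'0-4k': 1, '8k+': 1, '4-8k': 1}
```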
#### Evaluation
This repository provides data download for LongBench. If you wish to use this dataset for automated evaluation, please refer to our [github](https://github.com/THUDM/LongBench).
# Task statistics
| Task | Task Type | Eval metric | Avg len |Language | \#Sample |
| :-------- | :-----------:| :-----------: |:-------: | :-----------: |:--------: |
| HotpotQA | Multi-doc QA | F1 |9,151 |EN |200 |
| 2WikiMultihopQA| Multi-doc QA | F1 |4,887 |EN |200 |
| MuSiQue| Multi-doc QA | F1 |11,214 |EN |200 |
| DuReader| Multi-doc QA | Rouge-L |15,768 |ZH |200 |
| MultiFieldQA-en| Single-doc QA | F1 |4,559 |EN |150 |
| MultiFieldQA-zh| Single-doc QA | F1 |6,701 |ZH |200 |
| NarrativeQA| Single-doc QA | F1 |18,409 |EN |200 |
| Qasper| Single-doc QA | F1 |3,619 |EN |200 |
| GovReport| Summarization | Rouge-L |8,734 |EN |200 |
| QMSum| Summarization | Rouge-L |10,614 |EN |200 |
| MultiNews| Summarization | Rouge-L |2,113 |EN |200 |
| VCSUM| Summarization | Rouge-L |15,380 |ZH |200 |
| TriviaQA| Few shot | F1 |8,209 |EN |200 |
| SAMSum| Few shot | Rouge-L |6,258 |EN |200 |
| TREC| Few shot | Accuracy |5,177 |EN |200 |
| LSHT| Few shot | Accuracy |22,337 |ZH |200 |
| PassageRetrieval-en| Synthetic | Accuracy |9,289 |EN |200 |
| PassageCount| Synthetic | Accuracy |11,141 |EN |200 |
| PassageRetrieval-zh | Synthetic | Accuracy |6,745 |ZH |200 |
| LCC| Code | Edit Sim |1,235 |Python/C#/Java |500 |
| RepoBench-P| Code | Edit Sim |4,206 |Python/Java |500 |
> Note: In order to avoid discrepancies caused by different tokenizers, we use the word count (using Python's split function) to calculate the average length of English datasets and code datasets, and use the character count to calculate the average length of Chinese datasets.
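The counting convention in the note above can be sketched as a small helper (a hedged illustration only; the official evaluation code lives in the GitHub repo):

```python
# Hedged sketch of the length convention above: whitespace word count
# (str.split) for English and code datasets, character count for Chinese.

def estimate_length(text: str, language: str) -> int:
    if language == "zh":
        return len(text)       # character count for Chinese datasets
    return len(text.split())   # word count for English/code datasets

print(estimate_length("long context understanding", "en"))  # 3
```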
# Task description
| Task | Task Description |
| :---------------- | :----------------------------------------------------------- |
| HotpotQA | Answer related questions based on multiple given documents |
| 2WikiMultihopQA | Answer related questions based on multiple given documents |
| MuSiQue | Answer related questions based on multiple given documents |
| DuReader | Answer related Chinese questions based on multiple retrieved documents |
| MultiFieldQA-en | Answer English questions based on a long article drawn from a relatively diverse range of fields |
| MultiFieldQA-zh | Answer Chinese questions based on a long article drawn from a relatively diverse range of fields |
| NarrativeQA | Answer questions based on stories or scripts, including understanding of important elements such as characters, plots, themes, etc. |
| Qasper | Answer questions based on an NLP research paper; questions proposed and answered by NLP practitioners |
| GovReport | A summarization task that requires summarizing government work reports |
| MultiNews | A multi-document summarization task that requires summarizing over multiple news articles |
| QMSum | A summarization task that requires summarizing meeting records based on user queries |
| VCSUM | A summarization task that requires summarizing Chinese meeting records |
| SAMSum | A dialogue summarization task, providing several few-shot examples |
| TriviaQA | Single document question answering task, providing several few-shot examples |
| TREC | A classification task that requires categorizing questions, includes 50 categories in total |
| LSHT | A Chinese classification task that requires categorizing news, includes 24 categories in total |
| PassageRetrieval-en | Given 30 English Wikipedia paragraphs, determine which paragraph the given summary corresponds to |
| PassageCount | Determine the total number of different paragraphs in a given repetitive article |
| PassageRetrieval-zh | Given several Chinese paragraphs from the C4 dataset, determine which paragraph the given abstract corresponds to |
| LCC | Given a long piece of code, predict the next line of code |
| RepoBench-P | Given code in multiple files within a GitHub repository (including cross-file dependencies), predict the next line of code |
# Task construction
> Note: For all tasks constructed from existing datasets, we use data from the validation or test set of the existing dataset (except for VCSUM).
- The tasks of [HotpotQA](https://hotpotqa.github.io/), [2WikiMultihopQA](https://aclanthology.org/2020.coling-main.580/), [MuSiQue](https://arxiv.org/abs/2108.00573), and [DuReader](https://github.com/baidu/DuReader) are built based on the original datasets and processed to be suitable for long context evaluation. Specifically, for questions in the validation set, we select the evidence passage that contains the answer and several distracting articles. These articles together with the original question constitute the input of the tasks.
- The tasks of MultiFieldQA-zh and MultiFieldQA-en consist of long article data from about 10 sources, including LaTeX papers, judicial documents, government work reports, and PDF documents indexed by Google. For each long article, we invite several PhD and master's students to annotate, i.e., to ask questions based on the long article and give the correct answers. To better automate evaluation, we ask the annotators to propose questions with definitive answers as much as possible.
- The tasks of [NarrativeQA](https://arxiv.org/pdf/1712.07040.pdf), [Qasper](https://arxiv.org/pdf/2105.03011.pdf), [GovReport](https://arxiv.org/pdf/2104.02112.pdf), [QMSum](https://arxiv.org/pdf/2104.05938.pdf) and [MultiNews](https://aclanthology.org/P19-1102.pdf) directly use the data provided by the original papers. In the specific construction, we use the template provided by [ZeroSCROLLS](https://www.zero.scrolls-benchmark.com/) to convert the corresponding data into pure text input.
- The [VCSUM](https://arxiv.org/abs/2305.05280) task is built based on the original dataset, and we design a corresponding template to convert the corresponding data into pure text input.
- The [TriviaQA](https://nlp.cs.washington.edu/triviaqa/) task is constructed in the manner of [CoLT5](https://arxiv.org/abs/2303.09752): it provides several examples of question answering based on documents, and requires the language model to answer related questions based on new documents.
- The tasks of [SAMSum](https://aclanthology.org/D19-5409.pdf), [TREC](https://aclanthology.org/C02-1150.pdf) and [LSHT](http://tcci.ccf.org.cn/conference/2014/dldoc/evatask6.pdf) are built based on the original datasets. For each question in the validation set, we sample several data from the training set to form few-shot examples. These examples together with the questions in the validation set constitute the input for this task.
- The PassageRetrieval-en task is constructed based on English Wikipedia. For each piece of data, we randomly sample 30 paragraphs from English Wikipedia and select one for summarization (using GPT-3.5-Turbo). This task requires the model to give the name of the original paragraph to which the summary corresponds.
- The PassageCount task is constructed based on English Wikipedia. For each piece of data, we randomly sample several passages from English Wikipedia, repeat each paragraph a random number of times, and finally shuffle the paragraphs. This task requires the model to determine the total number of distinct paragraphs in the given context.
- The PassageRetrieval-zh task is constructed based on [C4](https://arxiv.org/abs/1910.10683). For each piece of data, we randomly sample several Chinese paragraphs from C4 and select one of them for summarization (using GPT-3.5-Turbo). This task requires the model to give the name of the original paragraph to which the summary corresponds.
- For the [LCC](https://arxiv.org/abs/2306.14893) task, we sample from the original code completion dataset. In the [RepoBench-P](https://arxiv.org/abs/2306.03091) task, we select the most challenging XF-F (Cross-File-First) setting from the original dataset and refer to the Oracle-Filled scenario in the paper. For each original piece of data, we randomly extract multiple cross-file code snippets, including the gold cross-file code snippet, and concatenate them as input, requiring the model to effectively use cross-file code for completion.
# LongBench-E statistics
| Task | Task Type | \#data in 0-4k | \#data in 4-8k | \#data in 8k+|
| :--------- | :-----------:| :-----------: |:---------: | :-------------: |
| HotpotQA | Multi-doc QA | 100 |100 |100 |
| 2WikiMultihopQA| Multi-doc QA | 100 |100 |100 |
| MultiFieldQA-en| Single-doc QA | 67 |70 |13 |
| Qasper| Single-doc QA | 100 |100 |24 |
| GovReport| Summarization | 100 |100 |100 |
| MultiNews| Summarization | 100 |100 |94 |
| TriviaQA| Few shot | 100 |100 |100 |
| SAMSum| Few shot | 100 |100 |100 |
| TREC| Few shot | 100 |100 |100 |
| PassageRetrieval-en| Synthetic | 100 |100 |100 |
| PassageCount| Synthetic | 100 |100 |100 |
| LCC| Code | 100 |100 |100 |
| RepoBench-P| Code | 100 |100 |100 |
# Citation
```
@misc{bai2023longbench,
title={LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding},
author={Yushi Bai and Xin Lv and Jiajie Zhang and Hongchang Lyu and Jiankai Tang and Zhidian Huang and Zhengxiao Du and Xiao Liu and Aohan Zeng and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
year={2023},
eprint={2308.14508},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 16,055 | [
hearmeneigh/e621-rising-v3-curated | 2023-10-24T19:36:28.000Z | [
"size_categories:100K<n<1M",
"furry",
"anthro",
"nsfw",
"e621",
"booru",
"imagebooru",
"imageboard",
"gelbooru",
"danbooru",
"rule34",
"not-for-all-audiences",
"region:us"
] | hearmeneigh | null | null | 3 | 107 | 2023-10-09T18:03:16 | ---
dataset_info:
features:
- name: source_id
dtype: string
- name: source
dtype: string
- name: image
dtype: image
- name: tags
sequence: string
- name: url
dtype: string
- name: text
dtype: string
- name: selector
dtype: string
splits:
- name: train
num_bytes: 53726659168.0
num_examples: 279296
download_size: 53423627875
dataset_size: 53726659168.0
pretty_name: 'E621 Rising V3 Image Dataset'
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- furry
- anthro
- nsfw
- e621
- booru
- imagebooru
- imageboard
- gelbooru
- danbooru
- rule34
- not-for-all-audiences
---
<div style='background: #ffeef1; border: 1px solid #fd91a4; padding:1em; border-radius:3px; margin-bottom:2em;'>
<h3 style='margin:0'>NSFW</h3>
<p style='margin:0'>This dataset is not suitable for use by minors. The dataset contains X-rated/NFSW content.</p>
</div>
# E621 Rising V3: Curated Image Dataset
* **279,296** images (53GB) downloaded from `e621.net` (90% of samples), `gelbooru.com`, `danbooru.com`, and `rule34.xxx`
* **6,820** [tags](https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-preliminary-data/blob/main/tag-counts.by-name.json)
* Used to train [E621 Rising v3](https://huggingface.co/hearmeneigh/e621-rising-v3) SDXL model
This dataset was created with [Dataset Rising](https://github.com/hearmeneigh/dataset-rising) toolchain and a [custom configuration](https://github.com/hearmeneigh/e621-rising-configs).
You can use these tools to train your own version!
## Image Processing
* Only `jpg` and `png` images were considered
* Image width and height have been clamped to `(0, 1024]px`; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to `jpg` format
* All images have been converted to TrueColor `RGB`
* All images have been verified to load with `Pillow`
* Metadata from E621 is [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-preliminary-data)
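A minimal sketch of the resize rule above, as pure size arithmetic (the real pipeline is part of the Dataset Rising toolchain; alpha removal and TrueColor conversion would typically use Pillow's `Image.convert("RGB")`):

```python
MAX_SIDE = 1024  # images are clamped to (0, 1024]px per the list above

def clamp_size(width: int, height: int) -> tuple[int, int]:
    """Return the target size after clamping the longer side to MAX_SIDE."""
    scale = MAX_SIDE / max(width, height)
    if scale >= 1.0:
        return width, height          # already within the limit
    return round(width * scale), round(height * scale)

print(clamp_size(2048, 1536))  # (1024, 768)
print(clamp_size(800, 600))   # (800, 600)
```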
## Tags
Comprehensive list of 6,820 tags and counts:
* [By name](https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-preliminary-data/blob/main/tag-counts.by-name.json)
* [By count](https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-preliminary-data/blob/main/tag-counts.by-count.json)
### Additional Tags
* `rating_explicit`
* `rating_questionable`
* `rating_safe`
* `rising_masterpiece`
* `rising_unpopular`
* `favorites_below_X` (25, 50, 100, 250, 500, 1000)
* `favorites_above_X` (250, 500, 1000, 2000, 3000, 4000)
* `score_below_X` (0, 25, 50, 100, 250, 500)
* `score_above_X` (100, 250, 500, 1000, 1500, 2000)
| 2,720 | [
[
-0.044891357421875,
-0.0229034423828125,
0.005367279052734375,
0.0271759033203125,
-0.00890350341796875,
0.0004062652587890625,
0.003925323486328125,
-0.046173095703125,
0.0367431640625,
0.0286712646484375,
-0.06439208984375,
-0.052734375,
-0.043212890625,
0... |
lucas-meyer/asr_af | 2023-10-16T20:51:26.000Z | [
"region:us"
] | lucas-meyer | null | null | 0 | 107 | 2023-10-10T17:08:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 1134983208.472
num_examples: 2723
- name: validation
num_bytes: 398459352.0
num_examples: 447
- name: test
num_bytes: 467308235.0
num_examples: 476
download_size: 2232381103
dataset_size: 2000750795.472
---
# Dataset Card for "asr_af"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 715 | [
Royal-lobster/Slither-Audited-Solidity-QA | 2023-10-11T16:52:46.000Z | [
"task_categories:question-answering",
"language:en",
"license:mit",
"solidity",
"alpaca",
"smart contracts",
"slither",
"region:us"
] | Royal-lobster | null | null | 2 | 107 | 2023-10-11T16:29:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 519875022.0539211
num_examples: 8611
- name: test
num_bytes: 100783891.24375294
num_examples: 1748
- name: validation
num_bytes: 76457098.65464632
num_examples: 1151
download_size: 98570750
dataset_size: 697116011.9523203
license: mit
task_categories:
- question-answering
language:
- en
tags:
- solidity
- alpaca
- smart contracts
- slither
---
# Dataset Card for "Simple-Solidity-Slither-Vulnerabilities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 954 | [
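The feature schema above (`instruction`, `input`, `output`, plus a combined `text`) follows the Alpaca convention named in the tags, so the `text` column can plausibly be reconstructed as below. This is a hedged sketch: the exact template used by the dataset authors is not documented here, and the sample strings are illustrative.

```python
def alpaca_text(instruction: str, inp: str, output: str) -> str:
    """Assemble an Alpaca-style prompt; this template is an assumption."""
    return (
        "### Instruction:\n" + instruction + "\n\n"
        "### Input:\n" + inp + "\n\n"
        "### Response:\n" + output
    )

sample = alpaca_text("Audit this contract.", "contract C {}", "No issues found.")
print(sample.startswith("### Instruction:"))  # True
```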
result-kand2-sdxl-wuerst-karlo/2f525ab2 | 2023-10-18T07:34:14.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 107 | 2023-10-18T07:34:13 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 242
num_examples: 10
download_size: 1429
dataset_size: 242
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "2f525ab2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
result-kand2-sdxl-wuerst-karlo/3d24f339 | 2023-10-18T16:03:55.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 107 | 2023-10-18T16:03:54 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 168
num_examples: 10
download_size: 1326
dataset_size: 168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "3d24f339"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
result-kand2-sdxl-wuerst-karlo/bdb16990 | 2023-10-20T17:11:05.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 107 | 2023-10-20T17:11:04 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 173
num_examples: 10
download_size: 1326
dataset_size: 173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bdb16990"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
result-kand2-sdxl-wuerst-karlo/1467d461 | 2023-10-20T17:41:05.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 107 | 2023-10-20T17:41:04 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 168
num_examples: 10
download_size: 1319
dataset_size: 168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "1467d461"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 455 | [
dutch_social | 2023-01-25T14:29:36.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:multi-label-classification",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"la... | null | The dataset contains around 271,342 tweets. The tweets are filtered via the official Twitter API to
contain tweets in Dutch language or by users who have specified their location information within Netherlands
geographical boundaries. Using natural language processing we have classified the tweets for their HISCO codes.
If the user has provided their location within Dutch boundaries, we have also classified them to their respective
provinces. The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible,
Interoperable, Reusable) way. Twitter's Terms of Service Licensed under Attribution-NonCommercial 4.0 International
(CC BY-NC 4.0) (2020-10-27) | @data{FK2/MTPTL7_2020,
author = {Gupta, Aakash},
publisher = {COVID-19 Data Hub},
title = {{Dutch social media collection}},
year = {2020},
version = {DRAFT VERSION},
doi = {10.5072/FK2/MTPTL7},
url = {https://doi.org/10.5072/FK2/MTPTL7}
} | 5 | 106 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
- nl
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- multi-label-classification
pretty_name: Dutch Social Media Collection
dataset_info:
features:
- name: full_text
dtype: string
- name: text_translation
dtype: string
- name: screen_name
dtype: string
- name: description
dtype: string
- name: desc_translation
dtype: string
- name: location
dtype: string
- name: weekofyear
dtype: int64
- name: weekday
dtype: int64
- name: month
dtype: int64
- name: year
dtype: int64
- name: day
dtype: int64
- name: point_info
dtype: string
- name: point
dtype: string
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: altitude
dtype: float64
- name: province
dtype: string
- name: hisco_standard
dtype: string
- name: hisco_code
dtype: string
- name: industry
dtype: bool_
- name: sentiment_pattern
dtype: float64
- name: subjective_pattern
dtype: float64
- name: label
dtype:
class_label:
names:
'0': neg
'1': neu
'2': pos
config_name: dutch_social
splits:
- name: train
num_bytes: 105569586
num_examples: 162805
- name: test
num_bytes: 35185351
num_examples: 54268
- name: validation
num_bytes: 34334756
num_examples: 54269
download_size: 68740666
dataset_size: 175089693
---
# Dataset Card for Dutch Social Media Collection
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Dutch Social Media Collection](http://datasets.coronawhy.org/dataset.xhtml?persistentId=doi:10.5072/FK2/MTPTL7)
- **Repository:**
- **Paper:** *(in-progress)* https://doi.org/10.5072/FK2/MTPTL7
- **Leaderboard:**
- **Point of Contact:** [Aakash Gupta](mailto:aakashg80@gmail.com)
### Dataset Summary
The dataset contains 10 files with around 271,342 tweets. The tweets are filtered via the official Twitter API to contain tweets in the Dutch language or by users who have specified their location information within Netherlands geographical boundaries. Using natural language processing, we have classified the tweets for their HISCO codes. If the user has provided their location within Dutch boundaries, we have also classified them to their respective provinces. The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible, Interoperable, Reusable) way. The data is subject to Twitter's Terms of Service and licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (2020-10-27).
### Supported Tasks and Leaderboards
`sentiment analysis`, `multi-label classification`, `entity-extraction`
### Languages
The text is primarily in Dutch, with some tweets in English and other languages. The BCP 47 codes are `nl` and `en`.
## Dataset Structure
### Data Instances
An example data instance looks like this:
```
{
"full_text": "@pflegearzt @Friedelkorn @LAguja44 Pardon, wollte eigentlich das zitieren: \nhttps://t.co/ejO7bIMyj8\nMeine mentions sind inzw komplett undurchschaubar weil da Leute ihren supporterclub zwecks Likes zusammengerufen haben.",
"text_translation": "@pflegearzt @Friedelkorn @ LAguja44 Pardon wollte zitieren eigentlich das:\nhttps://t.co/ejO7bIMyj8\nMeine mentions inzw sind komplett undurchschaubar weil da Leute ihren supporter club Zwecks Likes zusammengerufen haben.",
"created_at": 1583756789000,
"screen_name": "TheoRettich",
"description": "I ❤️science, therefore a Commie. ☭ FALGSC: Part of a conspiracy which wants to achieve world domination. Tankie-Cornucopian. Ecology is a myth",
"desc_translation": "I ❤️science, Therefore a Commie. ☭ FALGSC: Part of a conspiracy How many followers wants to Achieve World Domination. Tankie-Cornucopian. Ecology is a myth",
"weekofyear": 11,
"weekday": 0,
"day": 9,
"month": 3,
"year": 2020,
"location": "Netherlands",
"point_info": "Nederland",
"point": "(52.5001698, 5.7480821, 0.0)",
"latitude": 52.5001698,
"longitude": 5.7480821,
"altitude": 0,
"province": "Flevoland",
"hisco_standard": null,
"hisco_code": null,
"industry": false,
"sentiment_pattern": 0,
"subjective_pattern": 0
}
```
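A hedged sketch of working with an instance like the one above: the `label` feature is a ClassLabel with names `neg`/`neu`/`pos` (per the YAML header), and the `point` string can be unpacked into its coordinates. The parsing helper is illustrative, not part of the dataset tooling.

```python
LABEL_NAMES = ["neg", "neu", "pos"]  # ClassLabel names from the YAML header

def parse_point(point: str) -> tuple[float, float, float]:
    """Unpack a '(lat, lon, alt)' string into floats."""
    lat, lon, alt = (float(part) for part in point.strip("()").split(","))
    return lat, lon, alt

print(parse_point("(52.5001698, 5.7480821, 0.0)"))  # (52.5001698, 5.7480821, 0.0)
print(LABEL_NAMES[2])  # pos
```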
### Data Fields
| Column Name | Description |
| --- | --- |
| full_text | Original text in the tweet |
| text_translation | English translation of the full text |
| created_at | Date of tweet creation |
| screen_name | username of the tweet author |
| description | description as provided in the user's bio |
| desc_translation | English translation of user's bio/ description |
| location | Location information as provided in the user's bio |
| weekofyear | week of the year |
| weekday | Day of the week; Monday = 0 ... Sunday = 6 |
| month | Month of tweet creation |
| year | year of tweet creation |
| day | day of tweet creation |
| point_info | point information derived from the location column |
| point | tuple giving lat, lon & altitude information |
| latitude | geo-referencing information derived from location data |
| longitude | geo-referencing information derived from location data |
| altitude | geo-referencing information derived from location data|
| province | Province given location data of user |
| hisco_standard | HISCO standard key word; if available in tweet |
| hisco_code| HISCO standard code as derived from `hisco_standard`|
| industry | Whether the tweet talks about industry `(True/False)` |
| sentiment_score | Sentiment score -1.0 to 1.0 |
| subjectivity_score | Subjectivity scores 0 to 1 |
Missing values are replaced with empty strings or -1 (-100 for missing sentiment_score).
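The sentinel convention above can be applied when filtering, e.g. as below. This is a hedged sketch with made-up rows; the field name follows the data-instance example, which calls the score `sentiment_pattern` (the table above calls it `sentiment_score`).

```python
MISSING_SENTIMENT = -100  # sentinel for a missing sentiment score

def has_sentiment(row: dict) -> bool:
    return row.get("sentiment_pattern", MISSING_SENTIMENT) != MISSING_SENTIMENT

rows = [
    {"full_text": "goed nieuws", "sentiment_pattern": 0.5},
    {"full_text": "", "sentiment_pattern": -100},
]
scored = [r for r in rows if has_sentiment(r)]
print(len(scored))  # 1
```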
### Data Splits
Data has been split into Train: 60%, Validation: 20% and Test: 20%
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The tweets were hydrated using Twitter's API and then filtered for those which were in the Dutch language and/or from users who had indicated a location within the Netherlands' geographical borders.
#### Who are the source language producers?
The language producers are Twitter users who have identified their location within the geographical boundaries of the Netherlands, or who have tweeted in the Dutch language.
### Annotations
Using natural language processing, we have classified the tweets by industry and HSN HISCO codes.
Depending on the user's location, their provincial information is also added. Please check the file/column for detailed information.
The tweets are also scored for sentiment and subjectivity.
Sentiment scores range from -1 to +1.
Subjectivity scores range from 0 to 1.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
As of writing this data card, no anonymization has been carried out on the tweets or user data. As such, if a Twitter user has shared any personal and sensitive information, it may be present in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[Aakash Gupta](mailto:aakashg80@gmail.com)
*Th!nkEvolve Consulting* and Researcher at CoronaWhy
### Licensing Information
CC BY-NC 4.0
### Citation Information
@data{FK2/MTPTL7_2020,
author = {Gupta, Aakash},
publisher = {COVID-19 Data Hub},
title = {{Dutch social media collection}},
year = {2020},
version = {DRAFT VERSION},
doi = {10.5072/FK2/MTPTL7},
url = {https://doi.org/10.5072/FK2/MTPTL7}
}
### Contributions
Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset. | 9,114 | [
ronec | 2023-01-25T14:43:21.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ro",
"license:mit"... | null | RONEC - the Romanian Named Entity Corpus, at version 2.0, holds 12330 sentences with over 0.5M tokens, annotated with 15 classes, to a total of 80.283 distinctly annotated entities. It is used for named entity recognition and represents the largest Romanian NER corpus to date. | @article{dumitrescu2019introducing,
title={Introducing RONEC--the Romanian Named Entity Corpus},
author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},
journal={arXiv preprint arXiv:1909.01247},
year={2019}
} | 0 | 106 | 2022-03-02T23:29:22 |
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- found
language:
- ro
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: ronec
pretty_name: RONEC
dataset_info:
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: ner_ids
sequence: int32
- name: space_after
sequence: bool
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-ORG
'4': I-ORG
'5': B-GPE
'6': I-GPE
'7': B-LOC
'8': I-LOC
'9': B-NAT_REL_POL
'10': I-NAT_REL_POL
'11': B-EVENT
'12': I-EVENT
'13': B-LANGUAGE
'14': I-LANGUAGE
'15': B-WORK_OF_ART
'16': I-WORK_OF_ART
'17': B-DATETIME
'18': I-DATETIME
'19': B-PERIOD
'20': I-PERIOD
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-NUMERIC
'26': I-NUMERIC
'27': B-ORDINAL
'28': I-ORDINAL
'29': B-FACILITY
'30': I-FACILITY
config_name: ronec
splits:
- name: train
num_bytes: 8701577
num_examples: 9000
- name: validation
num_bytes: 1266490
num_examples: 1330
- name: test
num_bytes: 1902224
num_examples: 2000
download_size: 14675943
dataset_size: 11870291
---
# Dataset Card for RONEC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/dumitrescustefan/ronec
- **Repository:** https://github.com/dumitrescustefan/ronec
- **Paper:** https://arxiv.org/abs/1909.01247
- **Leaderboard:** https://lirobenchmark.github.io/
- **Point of Contact:** [Stefan](dumitrescu.stefan@gmail.com) and [Andrei-Marius](avram.andreimarius@gmail.com)
### Dataset Summary
RONEC, at version 2.0, holds 12,330 sentences with over 0.5M tokens, annotated with 15 classes, for a total of 80,283 distinctly annotated entities.
The corpus has the following classes and distribution in the train/valid/test splits:
| Classes | Total | Train # | Train % | Valid # | Valid % | Test # | Test % |
|---|---|---|---|---|---|---|---|
| PERSON | **26130** | 19167 | 73.35 | 2733 | 10.46 | 4230 | 16.19 |
| GPE | **11103** | 8193 | 73.79 | 1182 | 10.65 | 1728 | 15.56 |
| LOC | **2467** | 1824 | 73.94 | 270 | 10.94 | 373 | 15.12 |
| ORG | **7880** | 5688 | 72.18 | 880 | 11.17 | 1312 | 16.65 |
| LANGUAGE | **467** | 342 | 73.23 | 52 | 11.13 | 73 | 15.63 |
| NAT_REL_POL | **4970** | 3673 | 73.90 | 516 | 10.38 | 781 | 15.71 |
| DATETIME | **9614** | 6960 | 72.39 | 1029 | 10.7 | 1625 | 16.9 |
| PERIOD | **1188** | 862 | 72.56 | 129 | 10.86 | 197 | 16.58 |
| QUANTITY | **1588** | 1161 | 73.11 | 181 | 11.4 | 246 | 15.49 |
| MONEY | **1424** | 1041 | 73.10 | 159 | 11.17 | 224 | 15.73 |
| NUMERIC | **7735** | 5734 | 74.13 | 814 | 10.52 | 1187 | 15.35 |
| ORDINAL | **1893** | 1377 | 72.74 | 212 | 11.2 | 304 | 16.06 |
| FACILITY | **1126** | 840 | 74.6 | 113 | 10.04 | 173 | 15.36 |
| WORK_OF_ART | **1596** | 1157 | 72.49 | 176 | 11.03 | 263 | 16.48 |
| EVENT | **1102** | 826 | 74.95 | 107 | 9.71 | 169 | 15.34 |
### Supported Tasks and Leaderboards
The corpus is meant to train Named Entity Recognition models for the Romanian language.
Please see the leaderboard here : [https://lirobenchmark.github.io/](https://lirobenchmark.github.io/)
### Languages
RONEC is in Romanian (`ro`)
## Dataset Structure
### Data Instances
The dataset is a list of instances. For example, an instance looks like:
```json
{
"id": 10454,
"tokens": ["Pentru", "a", "vizita", "locația", "care", "va", "fi", "pusă", "la", "dispoziția", "reprezentanților", "consiliilor", "județene", ",", "o", "delegație", "a", "U.N.C.J.R.", ",", "din", "care", "a", "făcut", "parte", "și", "dl", "Constantin", "Ostaficiuc", ",", "președintele", "C.J.T.", ",", "a", "fost", "prezentă", "la", "Bruxelles", ",", "între", "1-3", "martie", "."],
"ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "O", "O", "O", "O", "O", "O", "B-ORG", "O", "O", "O", "O", "O", "O", "O", "B-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "I-PERSON", "B-ORG", "O", "O", "O", "O", "O", "B-GPE", "O", "B-PERIOD", "I-PERIOD", "I-PERIOD", "O"],
"ner_ids": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 3, 0, 0, 0, 0, 0, 5, 0, 19, 20, 20, 0],
"space_after": [true, true, true, true, true, true, true, true, true, true, true, true, false, true, true, true, true, false, true, true, true, true, true, true, true, true, true, false, true, true, false, true, true, true, true, true, false, true, true, true, false, false]
}
```
### Data Fields
The fields of each examples are:
- ``tokens`` are the words of the sentence.
- ``ner_tags`` are the string tags assigned to each token, following the BIO2 format. For example, the span ``"între", "1-3", "martie"`` has three tokens, but is a single class ``PERIOD``, marked as ``"B-PERIOD", "I-PERIOD", "I-PERIOD"``.
- ``ner_ids`` are the integer encoding of each tag, to be compatible with the standard and to be quickly used for model training. Note that each ``B``-starting tag is odd, and each ``I``-starting tag is even.
- ``space_after`` is used to help if there is a need to detokenize the dataset. A ``true`` value means that there is a space after the token on that respective position.
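As an illustrative sketch (not part of the original card), the ``space_after`` flags can be used to rebuild the raw sentence text from the aligned token list:

```python
def detokenize(tokens, space_after):
    # Append each token, plus a trailing space whenever the
    # corresponding space_after flag is True.
    parts = []
    for token, space in zip(tokens, space_after):
        parts.append(token)
        if space:
            parts.append(" ")
    return "".join(parts).rstrip()

# Hypothetical mini-example (not a full corpus instance):
print(detokenize(["Pentru", "a", "vizita", "locația", "."],
                 [True, True, True, False, False]))  # Pentru a vizita locația.
```

The same loop works on any instance, since ``tokens`` and ``space_after`` are aligned position by position.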
### Data Splits
The dataset is split into train (9,000 sentences), validation (1,330 sentences), and test (2,000 sentences).
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
*The corpus data source represents sentences that are free of copyright, taken from older datasets like the freely available SEETimes and more recent datasources like the Romanian Wikipedia or the Common Crawl.*
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
The corpus was annotated with the following classes:
1. PERSON - proper nouns, including common nouns or pronouns if they refer to a person. (e.g. 'sister')
2. GPE - geo political entity, like a city or a country; has to have a governance form
3. LOC - location, like a sea, continent, region, road, address, etc.
4. ORG - organization
5. LANGUAGE - language (e.g. Romanian, French, etc.)
6. NAT_REL_POL - national, religious or political organizations
7. DATETIME - a time and date in any format, including references to time (e.g. 'yesterday')
8. PERIOD - a period that is precisely bounded by two date times
9. QUANTITY - a quantity that is not numerical; it has a unit of measure
10. MONEY - a monetary value, numeric or otherwise
11. NUMERIC - a simple numeric value, represented as digits or words
12. ORDINAL - an ordinal value like 'first', 'third', etc.
13. FACILITY - a named place that is easily recognizable
14. WORK_OF_ART - a work of art like a named TV show, painting, etc.
15. EVENT - a named recognizable or periodic major event
#### Annotation process
The corpus was annotated by three language experts and cross-checked for annotation consistency. The annotation took several months to complete, but the result is a high-quality dataset.
#### Who are the annotators?
Stefan Dumitrescu (lead).
### Personal and Sensitive Information
All the source data is already freely downloadable and usable online, so there are no privacy concerns.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
MIT License
### Citation Information
```bibtex
@article{dumitrescu2019introducing,
title={Introducing RONEC--the Romanian Named Entity Corpus},
author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},
journal={arXiv preprint arXiv:1909.01247},
year={2019}
}
```
### Contributions
Thanks to [@iliemihai](https://github.com/iliemihai) for adding v1.0 of the dataset.
tunizi | 2023-01-25T14:54:36.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:aeb",
"license:unknown",
"arxiv:2004.14303",
"region:us... | null | On social media, Arabic speakers tend to express themselves in their own local dialect. To do so, Tunisians use "Tunisian Arabizi", which consists in supplementing numerals to the Latin script rather than the Arabic alphabet. TUNIZI is the first Tunisian Arabizi Dataset including 3K sentences, balanced, covering different topics, preprocessed and annotated as positive and negative. | @inproceedings{Chayma2020,
title={TUNIZI: a Tunisian Arabizi sentiment analysis Dataset},
author={Fourati, Chayma and Messaoudi, Abir and Haddad, Hatem},
booktitle={AfricaNLP Workshop, Putting Africa on the NLP Map. ICLR 2020, Virtual Event},
volume = {arXiv:3091079},
year = {2020},
url = {https://arxiv.org/submit/3091079},
} | 0 | 106 | 2022-03-02T23:29:22 |
---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- aeb
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: tunizi
pretty_name: TUNIZI
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: target
dtype:
class_label:
names:
'0': '1'
'1': '-1'
splits:
- name: train
num_bytes: 211166
num_examples: 3000
download_size: 162781
dataset_size: 211166
---
# Dataset Card for TUNIZI
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/chaymafourati/TUNIZI-Sentiment-Analysis-Tunisian-Arabizi-Dataset
- **Repository:** https://github.com/chaymafourati/TUNIZI-Sentiment-Analysis-Tunisian-Arabizi-Dataset
- **Paper:** https://arxiv.org/abs/2004.14303
- **Point of Contact:** Chayma Fourati (chayma@icompass.digital)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset uses Tunisian Arabic written in the Latin script (BCP-47: aeb-Latn).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
keremberke/satellite-building-segmentation | 2023-01-18T09:41:34.000Z | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"Aerial",
"Logistics",
"Construction",
"Damage Risk",
"Other",
"region:us"
] | keremberke | null | @misc{ buildings-instance-segmentation_dataset,
title = { Buildings Instance Segmentation Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } },
url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-18 },
} | 6 | 106 | 2023-01-16T21:09:30 |
---
task_categories:
- image-segmentation
tags:
- roboflow
- roboflow2huggingface
- Aerial
- Logistics
- Construction
- Damage Risk
- Other
---
<div align="center">
<img width="640" alt="keremberke/satellite-building-segmentation" src="https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['building']
```
### Number of Images
```json
{'train': 6764, 'valid': 1934, 'test': 967}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/satellite-building-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1](https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ buildings-instance-segmentation_dataset,
title = { Buildings Instance Segmentation Dataset },
type = { Open Source Dataset },
author = { Roboflow Universe Projects },
howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } },
url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 16, 2023 at 9:09 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 9665 images.
Buildings are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
TobiTob/CityLearn | 2023-06-27T11:14:53.000Z | [
"region:us"
] | TobiTob | The dataset consists of tuples of (observations, actions, rewards, dones) sampled by agents
interacting with the CityLearn 2022 Phase 1 environment (only first 5 buildings) | null | 1 | 106 | 2023-02-16T12:16:52 |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset CityLearn
This dataset is used to train a Decision Transformer for the CityLearn 2022 environment (https://www.aicrowd.com/challenges/neurips-2022-citylearn-challenge).
You can load data from this dataset via:

    datasets.load_dataset('TobiTob/CityLearn', 'data_name')

A short description of all data subsets can be found in the file `CityLearn.py`.
Den4ikAI/russian_dialogues | 2023-03-12T07:58:54.000Z | [
"task_categories:conversational",
"size_categories:1M<n<10M",
"language:ru",
"license:mit",
"region:us"
] | Den4ikAI | null | null | 8 | 106 | 2023-03-12T06:54:22 |
---
license: mit
task_categories:
- conversational
language:
- ru
size_categories:
- 1M<n<10M
---
A dataset of Russian dialogues collected from Telegram chats.
The dialogues are annotated for relevance.
Negative examples were also generated by shuffling similar answers.
Number of dialogues: 2 million.
Dataset format:
```
{
'question': 'Привет',
'answer': 'Привет, как дела?'
'relevance': 1
}
```
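A minimal usage sketch (the second record below is invented for illustration), filtering the corpus down to relevant question/answer pairs:

```python
dialogues = [
    {"question": "Привет", "answer": "Привет, как дела?", "relevance": 1},
    # Hypothetical negative example produced by answer shuffling:
    {"question": "Привет", "answer": "Не знаю.", "relevance": 0},
]

# Keep only the pairs marked as relevant (relevance == 1).
relevant = [(d["question"], d["answer"]) for d in dialogues if d["relevance"] == 1]
print(relevant)  # [('Привет', 'Привет, как дела?')]
```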
Parsing tool: https://github.com/Den4ikAI/telegram_chat_parser
### Citation:
```
@MISC{russian_instructions,
author = {Denis Petrov},
title = {Russian dialogues dataset for conversational agents},
url = {https://huggingface.co/datasets/Den4ikAI/russian_dialogues},
year = 2023
}
```
RussianNLP/rucola | 2023-03-27T18:47:12.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:ru",
"license:apache-2.0",
"arxiv:2210.12814",
"arxiv:2008.00401",
"region:us"
] | RussianNLP | Russian Corpus of Linguistic Acceptability (RuCoLA) is a novel benchmark of 13.4k sentences labeled as acceptable or not. RuCoLA combines in-domain sentences manually collected from linguistic literature and out-of-domain sentences produced by nine machine translation and paraphrase generation models. The motivation behind the out-of-domain set is to facilitate the practical use of acceptability judgments for improving language generation. Each unacceptable sentence is additionally labeled with four standard and machine-specific coarse-grained categories: morphology, syntax, semantics, and hallucinations. | @inproceedings{mikhailov-etal-2022-rucola,
title = "{R}u{C}o{LA}: {R}ussian Corpus of Linguistic Acceptability",
author = "Mikhailov, Vladislav and
Shamardina, Tatiana and
Ryabinin, Max and
Pestova, Alena and
Smurov, Ivan and
Artemova, Ekaterina",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.348",
pages = "5207--5227",
abstract = "Linguistic acceptability (LA) attracts the attention of the research community due to its many uses, such as testing the grammatical knowledge of language models and filtering implausible texts with acceptability classifiers.However, the application scope of LA in languages other than English is limited due to the lack of high-quality resources.To this end, we introduce the Russian Corpus of Linguistic Acceptability (RuCoLA), built from the ground up under the well-established binary LA approach. RuCoLA consists of 9.8k in-domain sentences from linguistic publications and 3.6k out-of-domain sentences produced by generative models. The out-of-domain set is created to facilitate the practical use of acceptability for improving language generation.Our paper describes the data collection protocol and presents a fine-grained analysis of acceptability classification experiments with a range of baseline approaches.In particular, we demonstrate that the most widely used language models still fall behind humans by a large margin, especially when detecting morphological and semantic errors. We release RuCoLA, the code of experiments, and a public leaderboard to assess the linguistic competence of language models for Russian.",
} | 1 | 106 | 2023-03-27T18:35:06 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- ru
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://rucola-benchmark.com
- **Repository:** https://github.com/RussianNLP/RuCoLA
- **Paper:** https://aclanthology.org/2022.emnlp-main.348/
- **ArXiv:** https://arxiv.org/abs/2210.12814
- **Leaderboard:** https://rucola-benchmark.com/leaderboard
- **Point of Contact:** vmikhailovhse@gmail.com
- **Language:** Russian
### Dataset Summary

Russian Corpus of Linguistic Acceptability (RuCoLA) is a novel benchmark of 13.4k sentences labeled as acceptable or not. RuCoLA combines in-domain sentences manually collected from linguistic literature and out-of-domain sentences produced by nine machine translation and paraphrase generation models.
The motivation behind the out-of-domain set is to facilitate the practical use of acceptability judgments for improving language generation.
Each unacceptable sentence is additionally labeled with four standard and machine-specific coarse-grained categories: morphology, syntax, semantics, and hallucinations.
## Dataset Structure
### Supported Tasks and Leaderboards
- **Task:** binary classification.
- **Metrics:** MCC/Acc.
- **Leaderboard:** https://rucola-benchmark.com/leaderboard
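For reference, a from-scratch sketch of binary MCC (this is not the official evaluation script; in practice a library implementation such as scikit-learn's `matthews_corrcoef` would typically be used):

```python
import math

def mcc(y_true, y_pred):
    # Matthews correlation coefficient for binary 0/1 labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(round(mcc([1, 1, 0, 0], [1, 0, 0, 0]), 3))  # 0.577
```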
### Languages
Russian.
### Data Instances
```
{
"id": 19,
"sentence": "Люк останавливает удачу от этого.",
"label": 0,
"error_type": "Hallucination",
"detailed_source": "WikiMatrix"}
}
```
The example in English for illustration purposes:
```
{
"id": 19,
"sentence": "Luck stops luck from doing this.",
"label": 0,
"error_type": "Hallucination",
"detailed_source": "WikiMatrix"}
}
```
### Data Fields
- ```id (int64)```: the sentence's id.
- ```sentence (str)```: the sentence.
- ```label (str)```: the target class. "1" refers to "acceptable", while "0" corresponds to "unacceptable".
- ```error_type (str)```: the coarse-grained violation category (Morphology, Syntax, Semantics, or Hallucination); "0" if the sentence is acceptable.
- ```detailed_source```: the data source.
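A small sketch of consuming these fields (the second record is invented for illustration), grouping unacceptable sentences by their coarse-grained violation category:

```python
rows = [
    {"id": 19, "sentence": "Люк останавливает удачу от этого.",
     "label": 0, "error_type": "Hallucination", "detailed_source": "WikiMatrix"},
    # Hypothetical acceptable sentence for contrast:
    {"id": 20, "sentence": "Пример приемлемого предложения.",
     "label": 1, "error_type": "0", "detailed_source": "Rusgram"},
]

# Collect unacceptable sentences (label == 0) under their error type.
by_error = {}
for row in rows:
    if row["label"] == 0:
        by_error.setdefault(row["error_type"], []).append(row["sentence"])

print(sorted(by_error))  # ['Hallucination']
```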
### Data Splits
RuCoLA consists of the training, development, and private test sets organised under two subsets: in-domain (linguistic publications) and out-of-domain (texts produced by natural language generation models).
- ```train```: 7869 in-domain samples (```"data/in_domain_train.csv"```).
- ```validation```: 2787 in-domain and out-of-domain samples. The in-domain (```"data/in_domain_dev.csv"```) and out-of-domain (```"data/out_of_domain_dev.csv"```) validation sets are merged into ```"data/dev.csv"``` for convenience.
- ```test```: 2789 in-domain and out-of-domain samples (```"data/test.csv"```).
## Dataset Creation
### Curation Rationale
- **In-domain Subset:** The in-domain sentences and the corresponding authors’ acceptability judgments are *manually* drawn from fundamental linguistic textbooks, academic publications, and methodological materials.
- **Out-of-domain Subset:** The out-of-domain sentences are produced by nine open-source MT and paraphrase generation models.
### Source Data
<details>
<summary>Linguistic publications and resources</summary>
|Original source |Transliterated source |Source id |
|---|---|---|
|[Проект корпусного описания русской грамматики](http://rusgram.ru) | [Proekt korpusnogo opisaniya russkoj grammatiki](http://rusgram.ru/)|Rusgram |
|Тестелец, Я.Г., 2001. *Введение в общий синтаксис*. Федеральное государственное бюджетное образовательное учреждение высшего образования Российский государственный гуманитарный университет.|Yakov Testelets. 2001. Vvedeniye v obschiy sintaksis. Russian State University for the Humanities. |Testelets |
|Лютикова, Е.А., 2010. *К вопросу о категориальном статусе именных групп в русском языке*. Вестник Московского университета. Серия 9. Филология, (6), pp.36-76. |Ekaterina Lutikova. 2010. K voprosu o kategorial’nom statuse imennykh grup v russkom yazyke. Moscow University Philology Bulletin. |Lutikova |
|Митренина, О.В., Романова, Е.Е. and Слюсарь, Н.А., 2017. *Введение в генеративную грамматику*. Общество с ограниченной ответственностью "Книжный дом ЛИБРОКОМ". |Olga Mitrenina et al. 2017. Vvedeniye v generativnuyu grammatiku. Limited Liability Company “LIBROCOM”. |Mitrenina |
|Падучева, Е.В., 2004. *Динамические модели в семантике лексики*. М.: Языки славянской культуры.| Elena Paducheva. 2004. Dinamicheskiye modeli v semantike leksiki. Languages of Slavonic culture. |Paducheva2004 |
|Падучева, Е.В., 2010. *Семантические исследования: Семантика времени и вида в русском языке; Семантика нарратива*. М.: Языки славянской культуры. | Elena Paducheva. 2010. Semanticheskiye issledovaniya: Semantika vremeni i vida v russkom yazyke; Semantika narrativa. Languages of Slavonic culture.|Paducheva2010 |
|Падучева, Е.В., 2013. *Русское отрицательное предложение*. М.: Языки славянской культуры |Elena Paducheva. 2013. Russkoye otritsatel’noye predlozheniye. Languages of Slavonic culture. |Paducheva2013 |
|Селиверстова, О.Н., 2004. *Труды по семантике*. М.: Языки славянской культуры | Olga Seliverstova. 2004. Trudy po semantike. Languages of Slavonic culture.|Seliverstova |
| Набор данных ЕГЭ по русскому языку | Shavrina et al. 2020. [Humans Keep It One Hundred: an Overview of AI Journey](https://aclanthology.org/2020.lrec-1.277/) |USE5, USE7, USE8 |
</details>
<details>
<summary>Machine-generated sentences</summary>
<br>
**Datasets**
|Original source |Source id|
|---|---|
|Mikel Artetxe and Holger Schwenk. 2019. [Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00288/43523/Massively-Multilingual-Sentence-Embeddings-for)|Tatoeba |
|Holger Schwenk et al. 2021. [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://aclanthology.org/2021.eacl-main.115/)|WikiMatrix |
|Ye Qi et al. 2018. [When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?](https://aclanthology.org/N18-2084/)|TED |
|Alexandra Antonova and Alexey Misyurev. 2011. [Building a Web-Based Parallel Corpus and Filtering Out Machine-Translated Text](https://aclanthology.org/W11-1218/)|YandexCorpus |
**Models**
[EasyNMT models](https://github.com/UKPLab/EasyNMT):
1. OPUS-MT. Jörg Tiedemann and Santhosh Thottingal. 2020. [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/)
2. M-BART50. Yuqing Tang et al. 2020. [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401)
3. M2M-100. Angela Fan et al. 2021. [Beyond English-Centric Multilingual Machine Translation](https://jmlr.org/papers/volume22/20-1307/20-1307.pdf)
[Paraphrase generation models](https://github.com/RussianNLP/russian_paraphrasers):
1. [ruGPT2-Large](https://huggingface.co/sberbank-ai/rugpt2large)
2. [ruT5](https://huggingface.co/cointegrated/rut5-base-paraphraser)
3. mT5. Linting Xue et al. 2021. [mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer](https://aclanthology.org/2021.naacl-main.41/)
</details>
### Annotations
#### Annotation process
The out-of-domain sentences undergo a two-stage annotation procedure on [Toloka](https://toloka.ai), a crowd-sourcing platform for data labeling.
Each stage includes an unpaid training phase with explanations, control tasks for tracking annotation quality, and the main annotation task. Before starting, the worker is given detailed instructions describing the task, explaining the labels, and showing plenty of examples.
The instruction is available at any time during both the training and main annotation phases. To get access to the main phase, the worker should first complete the training phase by labeling more than 70% of its examples correctly. Each trained worker receives a page with five sentences, one of which is a control one.
We collect the majority vote labels via a dynamic overlap from three to five workers after filtering them by response time and performance on control tasks.
- **Stage 1: Acceptability Judgments**
The first annotation stage defines whether a given sentence is acceptable or not. Access to the project is granted to workers certified as native speakers of Russian by Toloka and ranked top-60% workers according to the Toloka rating system.
Each worker answers 30 examples in the training phase. Each training example is accompanied by an explanation that appears in an incorrect answer.
The main annotation phase covers 3.6k machine-generated sentences. The pay rate is on average $2.55/hr, twice the hourly minimum wage in Russia. Each of the 1.3k trained workers gets paid, but we keep votes only from the 960 workers whose annotation quality rate on the control sentences exceeds 50%.
- **Stage 2: Violation Categories**
The second stage includes validation and annotation of sentences labeled unacceptable on Stage 1 according to five answer options: “Morphology”, “Syntax”, “Semantics”, “Hallucinations” and “Other”. The task is framed as a multi-label classification, i.e., the sentence may contain more than one violation in some rare cases or be re-labeled as acceptable.
We create a team of 30 annotators: BA and MA students in philology and linguistics from several Russian universities. The students are asked to study the works on CoLA, TGEA, and hallucinations. We also hold an online seminar to discuss these works and clarify the task specifics. Each student undergoes platform-based training on 15 examples before moving on to the main phase of 1.3k sentences.
The students are paid on average $5.42/hr and are eligible to get credits for an academic course or an internship. This stage provides direct interaction between authors and students in a group chat. We keep submissions with more than 30 seconds of response time per page and collect the majority vote labels for each answer independently.
Sentences having more than one violation category or labeled as “Other” by the majority are filtered out.
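The per-category aggregation and filtering described above can be sketched as follows (an illustration only; the vote format and the strict-majority threshold are assumptions):

```python
from collections import Counter

CATEGORIES = ["Morphology", "Syntax", "Semantics", "Hallucinations", "Other"]

def majority_labels(annotations):
    """annotations: one label set per worker for a single sentence.
    A category is kept when a strict majority of workers selected it."""
    n = len(annotations)
    counts = Counter(label for labels in annotations for label in labels)
    return {c for c in CATEGORIES if counts[c] * 2 > n}

def keep_sentence(annotations):
    """Drop sentences with more than one violation category
    or with a majority "Other" label."""
    labels = majority_labels(annotations)
    return len(labels) == 1 and "Other" not in labels
```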
### Personal and Sensitive Information
The annotators are warned about potentially sensitive topics in data (e.g., politics, culture, and religion).
## Considerations for Using the Data
### Social Impact of Dataset
RuCoLA may serve as training data for acceptability classifiers, which may benefit the quality of generated texts.
We recognize that such improvements in text generation may lead to misuse of LMs for malicious purposes. However, our corpus can be used to train adversarial defense and artificial text detection models.
We introduce a novel dataset for **research and development needs**, and the potential negative uses are not lost on us.
### Discussion of Biases
Although we aim to control the number of high-frequency tokens in RuCoLA’s sentences, a potential word-frequency distribution shift between LMs’ pretraining corpora and our corpus may introduce bias into the evaluation.
Furthermore, linguistic publications represent a specific domain as the primary source of acceptability judgments. On the one hand, it can lead to a domain shift when using RuCoLA for practical purposes.
On the other hand, we observe moderate acceptability classification performance on the out-of-domain test, which spans multiple domains, ranging from subtitles to Wikipedia.
### Other Known Limitations
- **Data Collection**
Acceptability judgments datasets require a source of unacceptable sentences.
Collecting judgments from linguistic literature has become a standard practice replicated in multiple languages. However, this approach has several limitations. First, many studies raise concerns about the reliability and reproducibility of acceptability judgments. Second, the linguists’ judgments may limit data representativeness, as they may not reflect the errors that speakers tend to produce. Third, enriching acceptability judgments datasets is time-consuming, while creating new ones can be challenging due to limited resources, e.g., in low-resource languages.
- **Expert vs. Non-expert**
One of the open methodological questions on acceptability judgments is whether they should be collected from expert or non-expert speakers.
On the one hand, prior linguistic knowledge can introduce bias in reporting judgments. On the other hand, expertise may increase the quality of the linguists’ judgments over the ones of non-linguists. At the same time, the latter tend to be influenced by an individual’s exposure to ungrammatical language use.
The objective of involving students with a linguistic background is to maximize the annotation quality.
- **Fine-grained Annotation**
The coarse-grained annotation scheme of RuCoLA’s unacceptable sentences relies on four major categories. While the annotation can be helpful for model error analysis, it limits the scope of LMs’ diagnostic evaluation concerning linguistic and machine-specific phenomena.
## Additional Information
### Dataset Curators
Correspondence: `vmikhailovhse@gmail.com`
### Licensing Information
Our baseline code and acceptability labels are available under the Apache 2.0 license. The copyright (where applicable) of texts from the linguistic publications and resources remains with the original authors or publishers.
### Citation Information
```
@inproceedings{mikhailov-etal-2022-rucola,
title = "{R}u{C}o{LA}: {R}ussian Corpus of Linguistic Acceptability",
author = "Mikhailov, Vladislav and
Shamardina, Tatiana and
Ryabinin, Max and
Pestova, Alena and
Smurov, Ivan and
Artemova, Ekaterina",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.348",
pages = "5207--5227",
abstract = "Linguistic acceptability (LA) attracts the attention of the research community due to its many uses, such as testing the grammatical knowledge of language models and filtering implausible texts with acceptability classifiers. However, the application scope of LA in languages other than English is limited due to the lack of high-quality resources. To this end, we introduce the Russian Corpus of Linguistic Acceptability (RuCoLA), built from the ground up under the well-established binary LA approach. RuCoLA consists of 9.8k in-domain sentences from linguistic publications and 3.6k out-of-domain sentences produced by generative models. The out-of-domain set is created to facilitate the practical use of acceptability for improving language generation. Our paper describes the data collection protocol and presents a fine-grained analysis of acceptability classification experiments with a range of baseline approaches. In particular, we demonstrate that the most widely used language models still fall behind humans by a large margin, especially when detecting morphological and semantic errors. We release RuCoLA, the code of experiments, and a public leaderboard to assess the linguistic competence of language models for Russian.",
}
```
### Other
Please refer to our [paper](https://aclanthology.org/2022.emnlp-main.348/) for more details.
ammarnasr/the-stack-java-clean | 2023-08-14T21:18:42.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:code",
"license:openrail",
"code",
"region:us"
] | ammarnasr | null | null | 0 | 106 | 2023-06-29T23:50:04 | ---
license: openrail
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 3582248477.9086223
num_examples: 806789
- name: test
num_bytes: 394048264.9973618
num_examples: 88747
- name: valid
num_bytes: 3982797.09401595
num_examples: 897
download_size: 1323156008
dataset_size: 3980279540
task_categories:
- text-generation
language:
- code
tags:
- code
pretty_name: TheStack-Java
size_categories:
- 1M<n<10M
---
## Dataset 1: TheStack - Java - Cleaned
**Description**: This dataset is drawn from TheStack Corpus, an open-source code dataset with over 3TB of GitHub data covering 48 programming languages. We selected a small portion of this dataset to optimize smaller language models for Java, a popular statically typed language.
**Target Language**: Java
**Dataset Size**:
- Training: 900,000 files
- Validation: 50,000 files
- Test: 50,000 files
**Preprocessing**:
1. Selected Java as the target language due to its popularity on GitHub.
2. Filtered out files with average line length > 100 characters, maximum line length > 1000 characters, and alphanumeric fraction < 25%.
3. Split files into 90% training, 5% validation, and 5% test sets.
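The filtering heuristics above can be combined into a single keep/drop predicate (a sketch; treating the thresholds as independent removal criteria and defining the alphanumeric fraction over all characters are assumptions):

```python
def keep_file(content: str) -> bool:
    """Apply the three quality filters to a single source file."""
    lines = content.splitlines() or [""]
    avg_line_length = sum(len(line) for line in lines) / len(lines)
    max_line_length = max(len(line) for line in lines)
    alphanum_fraction = (
        sum(ch.isalnum() for ch in content) / len(content) if content else 0.0
    )
    return (
        avg_line_length <= 100
        and max_line_length <= 1000
        and alphanum_fraction >= 0.25
    )
```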
**Tokenizer**: Byte Pair Encoding (BPE) tokenizer with tab and whitespace tokens. GPT-2 vocabulary extended with special tokens.
**Training Sequences**: Sequences constructed by joining training data text to reach a context length of 2048 tokens (1024 tokens for full fine-tuning).
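The sequence construction described above can be sketched as a generic packing routine (the separator token and the decision to drop the trailing partial buffer are assumptions, not details from the card):

```python
def pack_sequences(token_lists, context_length=2048, sep_token_id=0):
    """Concatenate tokenized files into fixed-length training sequences,
    joining documents with a separator token; the trailing partial
    buffer is dropped."""
    buffer, sequences = [], []
    for tokens in token_lists:
        buffer.extend(tokens)
        buffer.append(sep_token_id)
        while len(buffer) >= context_length:
            sequences.append(buffer[:context_length])
            buffer = buffer[context_length:]
    return sequences
```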
griffin/chain_of_density | 2023-09-08T00:43:00.000Z | [
"region:us"
] | griffin | null | null | 43 | 106 | 2023-09-08T00:42:55 | ---
dataset_info:
- config_name: annotated
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: prediction
sequence: string
- name: missing
sequence: string
- name: model
dtype: string
- name: annotations
sequence: int64
- name: num_tokens
sequence: int64
- name: num_entities
sequence: int64
- name: fusion
sequence: float64
- name: entity_density
sequence: float64
- name: inverse_lead_bias
sequence: float64
- name: extractive_density
sequence: float64
- name: extractive_coverage
sequence: float64
- name: unique_unigrams
sequence: float64
- name: unique_bigrams
sequence: float64
- name: unique_trigrams
sequence: float64
- name: rouge1
sequence: float64
- name: rouge2
sequence: float64
- name: rougeL
sequence: float64
- name: rougeLsum
sequence: float64
- name: gpt4_informative
sequence: float64
- name: gpt4_quality
sequence: float64
- name: gpt4_attributable
sequence: float64
- name: gpt4_coherence
sequence: float64
- name: gpt4_overall
sequence: float64
splits:
- name: test
num_bytes: 750471
num_examples: 100
download_size: 452599
dataset_size: 750471
- config_name: unannotated
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: prediction
sequence: string
- name: missing
sequence: string
- name: model
dtype: string
- name: num_tokens
sequence: int64
- name: num_entities
sequence: int64
- name: fusion
sequence: float64
- name: entity_density
sequence: float64
- name: inverse_lead_bias
sequence: float64
- name: extractive_density
sequence: float64
- name: extractive_coverage
sequence: float64
- name: unique_unigrams
sequence: float64
- name: unique_bigrams
sequence: float64
- name: unique_trigrams
sequence: float64
- name: rouge1
sequence: float64
- name: rouge2
sequence: float64
- name: rougeL
sequence: float64
- name: rougeLsum
sequence: float64
splits:
- name: train
num_bytes: 6948744
num_examples: 1000
download_size: 3719092
dataset_size: 6948744
configs:
- config_name: annotated
data_files:
- split: test
path: annotated/test-*
- config_name: unannotated
data_files:
- split: train
path: unannotated/train-*
---
# Dataset Card for "chain_of_density"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jlh-ibm/earnings_call | 2023-09-15T21:34:39.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"finance",
"region:us"
] | jlh-ibm | The dataset reports a collection of earnings call transcripts, the related stock prices, and the sector index In terms of volume, there is a total of 188 transcripts, 11970 stock prices, and 1196 sector index values. Furthermore, all of these data originated in the period 2016-2020 and are related to the NASDAQ stock market. Furthermore, the data collection was made possible by Yahoo Finance and Thomson Reuters Eikon. Specifically, Yahoo Finance enabled the search for stock values and Thomson Reuters Eikon provided the earnings call transcripts. Lastly, the dataset can be used as a benchmark for the evaluation of several NLP techniques to understand their potential for financial applications. Moreover, it is also possible to expand the dataset by extending the period in which the data originated following a similar procedure. | @data{TJE0D0_2021,
author = {Roozen, Dexter and Lelli, Francesco},
publisher = {DataverseNL},
title = {{Stock Values and Earnings Call Transcripts: a Sentiment Analysis Dataset}},
year = {2021},
version = {V1},
doi = {10.34894/TJE0D0},
url = {https://doi.org/10.34894/TJE0D0}
} | 0 | 106 | 2023-09-15T20:25:43 | ---
license: cc0-1.0
task_categories:
- text-classification
language:
- en
tags:
- finance
pretty_name: Earnings Calls Dataset
size_categories:
- 10K<n<100K
dataset_info:
- config_name: stock_prices
features:
- name: date
dtype: date64
- name: open
dtype: float32
- name: high
dtype: float32
- name: low
dtype: float32
- name: close
dtype: float32
- name: adj_close
dtype: float32
- name: volume
dtype: int64
- name: company
dtype: string
splits:
- name: train
num_bytes: 578818
num_examples: 13155
download_size: 290243
dataset_size: 578818
- config_name: transcript-sentiment
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: company
dtype: string
- name: date
dtype: date64
- name: para_no
dtype: int32
splits:
- name: train
num_bytes: 7414686
num_examples: 6851
- name: test
num_bytes: 1928515
num_examples: 1693
download_size: 3868059
dataset_size: 9343201
- config_name: transcripts
features:
- name: company
dtype: string
- name: date
dtype: date64
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 9592380
num_examples: 150
- name: test
num_bytes: 2458569
num_examples: 38
download_size: 3577816
dataset_size: 12050949
---
# Dataset Card for Earnings Calls Dataset
## Dataset Description
- **Homepage:** https://dataverse.nl/dataset.xhtml?persistentId=doi:10.34894/TJE0D0
- **Paper:** https://www.preprints.org/manuscript/202102.0424/v1
- **Point of Contact:** [Francesco Lelli](https://francescolelli.info/)
### Dataset Summary
The dataset reports a collection of earnings call transcripts, the related stock prices, and the sector index. In terms of volume,
there are a total of 188 transcripts, 11970 stock prices, and 1196 sector index values. All of these data originated
in the period 2016-2020 and are related to the NASDAQ stock market. The data collection was made possible by Yahoo
Finance and Thomson Reuters Eikon: Yahoo Finance enabled the search for stock values, and Thomson Reuters Eikon
provided the earnings call transcripts. Lastly, the dataset can be used as a benchmark for the evaluation of several NLP techniques
to understand their potential for financial applications. It is also possible to expand the dataset by extending the period
in which the data originated, following a similar procedure.
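Given the `stock_prices` features listed in the metadata above, a movement-based label can be derived per trading day (an illustrative convention for sentiment-style experiments; the example values are made up):

```python
def daily_movement_label(row: dict) -> int:
    """1 if the stock closed above its open on that day, else 0."""
    return int(row["close"] > row["open"])

# A made-up row following the stock_prices feature schema:
example_row = {"date": "2016-01-04", "open": 25.65, "high": 26.40,
               "low": 25.50, "close": 26.34, "adj_close": 24.53,
               "volume": 270597548, "company": "AAPL"}
```

Paragraph-level labels in the `transcript-sentiment` config could then be compared against such movement-derived targets.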
### Citation Information
```bibtex
@data{TJE0D0_2021,
author = {Roozen, Dexter and Lelli, Francesco},
publisher = {DataverseNL},
title = {{Stock Values and Earnings Call Transcripts: a Sentiment Analysis Dataset}},
year = {2021},
version = {V1},
doi = {10.34894/TJE0D0},
url = {https://doi.org/10.34894/TJE0D0}
}
```
natyou/freshqa_10_06 | 2023-10-11T15:26:10.000Z | [
"region:us"
] | natyou | null | null | 0 | 106 | 2023-10-11T15:23:22 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: id
dtype: int64
- name: split
dtype: string
- name: question
dtype: string
- name: effective_year
dtype: string
- name: next_review
dtype: string
- name: false_premise
dtype: bool
- name: num_hops
dtype: string
- name: fact_type
dtype: string
- name: source
dtype: string
- name: answer_0
dtype: string
- name: answer_1
dtype: string
- name: answer_2
dtype: string
- name: answer_3
dtype: string
- name: answer_4
dtype: string
- name: answer_5
dtype: string
- name: answer_6
dtype: string
- name: answer_7
dtype: string
- name: answer_8
dtype: string
- name: answer_9
dtype: string
- name: note
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 192891
num_examples: 500
- name: dev
num_bytes: 39203
num_examples: 100
download_size: 129810
dataset_size: 232094
---
# Dataset Card for "freshqa_10_06"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/b0d16951 | 2023-10-21T15:08:01.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 106 | 2023-10-21T15:08:00 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 168
num_examples: 10
download_size: 1367
dataset_size: 168
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b0d16951"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/488ac4b8 | 2023-10-21T18:57:31.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 106 | 2023-10-21T18:57:30 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 159
num_examples: 10
download_size: 1330
dataset_size: 159
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "488ac4b8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/70fd4f5c | 2023-10-22T05:34:11.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 106 | 2023-10-22T05:34:10 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 204
num_examples: 10
download_size: 1419
dataset_size: 204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "70fd4f5c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/002953b6 | 2023-10-22T07:58:35.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 106 | 2023-10-22T07:58:34 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1369
dataset_size: 186
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "002953b6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/606de66e | 2023-10-22T07:58:38.000Z | [
"region:us"
] | result-kand2-sdxl-wuerst-karlo | null | null | 0 | 106 | 2023-10-22T07:58:37 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1369
dataset_size: 186
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "606de66e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bigIR/ar_cov19 | 2023-09-19T06:52:17.000Z | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ar",
"data-mining",
"arxiv:2004.05861",
"region:us"
] | bigIR | ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 30th of April 2020. ArCOV-19 is designed to enable research under several domains including natural language processing, information retrieval, and social computing, among others | @article{haouari2020arcov19,
title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks},
author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed},
journal={arXiv preprint arXiv:2004.05861},
year={2020} | 1 | 105 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ar
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: arcov-19
pretty_name: ArCOV19
tags:
- data-mining
dataset_info:
config_name: ar_cov19
features:
- name: tweetID
dtype: string
splits:
- name: train
num_bytes: 72223634
num_examples: 3140158
download_size: 23678407
dataset_size: 72223634
---
# Dataset Card for ArCOV19
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://gitlab.com/bigirqu/ArCOV-19
- **Paper:** [ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks](https://arxiv.org/abs/2004.05861)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [Fatima Haouari](mailto:200159617@qu.edu.qa)
### Dataset Summary
ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from the 27th of January till the 5th of May 2021.
ArCOV-19 is the first publicly available Arabic Twitter dataset covering the COVID-19 pandemic. It includes about 3.2M
tweets alongside the propagation networks of the most popular subset of them (i.e., the most retweeted and liked).
The propagation networks include both retweets and conversational threads (i.e., threads of replies).
ArCOV-19 is designed to enable research under several domains including natural language processing, information
retrieval, and social computing, among others. Preliminary analysis shows that ArCOV-19 captures rising discussions
associated with the first reported cases of the disease as they appeared in the Arab world. In addition to the source
tweets and the propagation networks, we also release the search queries and the language-independent crawler used to
collect the tweets to encourage the curation of similar datasets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Arabic
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
tweetID: the Twitter-assigned ID for the tweet object.
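Since only tweet IDs are distributed, tweets must be re-hydrated through the Twitter API, which accepts IDs in batches; a batching helper might look like this (the batch size of 100 reflects the classic lookup endpoint limit and is an assumption here):

```python
def batch_tweet_ids(tweet_ids, batch_size=100):
    """Group tweet IDs into comma-separated batches for a lookup endpoint."""
    for start in range(0, len(tweet_ids), batch_size):
        yield ",".join(tweet_ids[start:start + batch_size])
```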
### Data Splits
[More Information Needed]
## Dataset Creation
The dataset collection approach is presented in the following paper: [ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks](https://arxiv.org/abs/2004.05861)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
No annotation was provided with the dataset.
#### Annotation process
No annotation was provided with the dataset.
#### Who are the annotators?
No annotation was provided with the dataset.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
**Team:** [bigIR](https://sites.google.com/view/bigir) from Qatar University ([@bigIR_group](https://twitter.com/bigIR_group))
- [Fatima Haouari](mailto:200159617@qu.edu.qa)
- [Maram Hasanain](mailto:maram.hasanain@qu.edu.qa)
- [Reem Suwaileh](mailto:rs081123@qu.edu.qa)
- [Dr. Tamer Elsayed](mailto:telsayed@qu.edu.qa)
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{haouari2020arcov19,
title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks},
author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed},
year={2021},
eprint={2004.05861},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Fatima-Haouari](https://github.com/Fatima-Haouari) for adding this dataset.
cdt | 2023-01-25T14:27:46.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:bsd-3-clause",
"region:us"
] | null | The Cyberbullying Detection task was part of 2019 edition of PolEval competition. The goal is to predict if a given Twitter message contains a cyberbullying (harmful) content. | @article{ptaszynski2019results,
title={Results of the PolEval 2019 Shared Task 6: First Dataset and Open Shared Task for Automatic Cyberbullying Detection in Polish Twitter},
author={Ptaszynski, Michal and Pieciukiewicz, Agata and Dybala, Pawel},
journal={Proceedings of the PolEval 2019 Workshop},
publisher={Institute of Computer Science, Polish Academy of Sciences},
pages={89},
year={2019}
} | 0 | 105 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: cdt
dataset_info:
features:
- name: sentence
dtype: string
- name: target
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 1104322
num_examples: 10041
- name: test
num_bytes: 109681
num_examples: 1000
download_size: 375476
dataset_size: 1214003
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://2019.poleval.pl/index.php/tasks/
- **Repository:**
https://github.com/ptaszynski/cyberbullying-Polish
- **Paper:**
- **Leaderboard:**
https://klejbenchmark.com/leaderboard/
- **Point of Contact:**
### Dataset Summary
The Cyberbullying Detection task was part of the 2019 edition of the PolEval competition. The goal is to predict if a given Twitter message contains cyberbullying (harmful) content.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- sentence: an anonymized tweet in Polish
- target: 1 if the tweet is described as bullying, 0 otherwise. The test set doesn't have labels, so -1 is used instead.
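Given the `-1` convention for the unlabeled test split, evaluation code typically filters to labeled rows first (a sketch; the example rows are made up):

```python
def labeled_only(examples):
    """Keep only rows whose target carries a real label (0 or 1)."""
    return [ex for ex in examples if ex["target"] in (0, 1)]

# Made-up rows mirroring the sentence/target schema:
rows = [
    {"sentence": "przykładowy tweet", "target": 1},
    {"sentence": "inny tweet", "target": -1},  # unlabeled test row
]
```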
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
BSD 3-Clause
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
Dataset record: etalab-ia/piaf (author etalab-ia, last modified 2022-11-03, created 2022-03-02, 7 likes, 105 downloads; tags: question-answering, extractive-qa, open-domain-qa, crowdsourced, monolingual, 1K<n<10K, original, fr, mit)

---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- fr
language_bcp47:
- fr-FR
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: null
pretty_name: Piaf
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: plain_text
splits:
- name: train
num_bytes: 3332905
num_examples: 3835
download_size: 1370384
dataset_size: 3332905
---
# Dataset Card for Piaf
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://piaf.etalab.studio](https://piaf.etalab.studio)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.31 MB
- **Size of the generated dataset:** 3.18 MB
- **Total amount of disk used:** 4.49 MB
### Dataset Summary
Piaf is a reading comprehension dataset. This version, published in February 2020, contains 3835 questions on French Wikipedia.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 1.31 MB
- **Size of the generated dataset:** 3.18 MB
- **Total amount of disk used:** 4.49 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [0],
"text": ["Voici"]
},
"context": "Voici le contexte du premier paragraphe du deuxième article.",
"id": "p140295460356960",
"question": "Suis-je la troisième question ?",
"title": "Jakob Böhme"
}
```
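Answers follow the same convention as SQuAD-style QA datasets: `answer_start` is a character offset into `context`, so the gold span can be recovered by slicing. A minimal sketch using the example record above:

```python
# Recover the answer span from a Piaf record by slicing the context
# at the character offset given in answer_start.
example = {
    "answers": {"answer_start": [0], "text": ["Voici"]},
    "context": "Voici le contexte du premier paragraphe du deuxième article.",
}

start = example["answers"]["answer_start"][0]
gold = example["answers"]["text"][0]
span = example["context"][start : start + len(gold)]
print(span)  # -> Voici
```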
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
### Data Splits
| name | train |
|------------|------:|
| plain_text | 3835 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
abstract = {Motivated by the lack of data for non-English languages, in particular for the evaluation of downstream tasks such as Question Answering, we present a participatory effort to collect a native French Question Answering Dataset. Furthermore, we describe and publicly release the annotation tool developed for our collection effort, along with the data obtained and preliminary baselines.},
url = {https://www.aclweb.org/anthology/2020.lrec-1.673}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@RachelKer](https://github.com/RachelKer) for adding this dataset.
Dataset record: swda (last modified 2023-01-25, created 2022-03-02, 7 likes, 105 downloads; tags: text-classification, multi-label-classification, found, monolingual, 100K<n<1M, extended|other-Switchboard-1 Telephone Speech Corpus Release 2, en, cc-by-nc-sa-3.0)

---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-Switchboard-1 Telephone Speech Corpus, Release 2
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: The Switchboard Dialog Act Corpus (SwDA)
dataset_info:
features:
- name: swda_filename
dtype: string
- name: ptb_basename
dtype: string
- name: conversation_no
dtype: int64
- name: transcript_index
dtype: int64
- name: act_tag
dtype:
class_label:
names:
'0': b^m^r
'1': qw^r^t
'2': aa^h
'3': br^m
'4': fa^r
'5': aa,ar
'6': sd^e(^q)^r
'7': ^2
'8': sd;qy^d
'9': oo
'10': bk^m
'11': aa^t
'12': cc^t
'13': qy^d^c
'14': qo^t
'15': ng^m
'16': qw^h
'17': qo^r
'18': aa
'19': qy^d^t
'20': qrr^d
'21': br^r
'22': fx
'23': sd,qy^g
'24': ny^e
'25': ^h^t
'26': fc^m
'27': qw(^q)
'28': co
'29': o^t
'30': b^m^t
'31': qr^d
'32': qw^g
'33': ad(^q)
'34': qy(^q)
'35': na^r
'36': am^r
'37': qr^t
'38': ad^c
'39': qw^c
'40': bh^r
'41': h^t
'42': ft^m
'43': ba^r
'44': qw^d^t
'45': '%'
'46': t3
'47': nn
'48': bd
'49': h^m
'50': h^r
'51': sd^r
'52': qh^m
'53': ^q^t
'54': sv^2
'55': ft
'56': ar^m
'57': qy^h
'58': sd^e^m
'59': qh^r
'60': cc
'61': fp^m
'62': ad
'63': qo
'64': na^m^t
'65': fo^c
'66': qy
'67': sv^e^r
'68': aap
'69': 'no'
'70': aa^2
'71': sv(^q)
'72': sv^e
'73': nd
'74': '"'
'75': bf^2
'76': bk
'77': fp
'78': nn^r^t
'79': fa^c
'80': ny^t
'81': ny^c^r
'82': qw
'83': qy^t
'84': b
'85': fo
'86': qw^r
'87': am
'88': bf^t
'89': ^2^t
'90': b^2
'91': x
'92': fc
'93': qr
'94': no^t
'95': bk^t
'96': bd^r
'97': bf
'98': ^2^g
'99': qh^c
'100': ny^c
'101': sd^e^r
'102': br
'103': fe
'104': by
'105': ^2^r
'106': fc^r
'107': b^m
'108': sd,sv
'109': fa^t
'110': sv^m
'111': qrr
'112': ^h^r
'113': na
'114': fp^r
'115': o
'116': h,sd
'117': t1^t
'118': nn^r
'119': cc^r
'120': sv^c
'121': co^t
'122': qy^r
'123': sv^r
'124': qy^d^h
'125': sd
'126': nn^e
'127': ny^r
'128': b^t
'129': ba^m
'130': ar
'131': bf^r
'132': sv
'133': bh^m
'134': qy^g^t
'135': qo^d^c
'136': qo^d
'137': nd^t
'138': aa^r
'139': sd^2
'140': sv;sd
'141': qy^c^r
'142': qw^m
'143': qy^g^r
'144': no^r
'145': qh(^q)
'146': sd;sv
'147': bf(^q)
'148': +
'149': qy^2
'150': qw^d
'151': qy^g
'152': qh^g
'153': nn^t
'154': ad^r
'155': oo^t
'156': co^c
'157': ng
'158': ^q
'159': qw^d^c
'160': qrr^t
'161': ^h
'162': aap^r
'163': bc^r
'164': sd^m
'165': bk^r
'166': qy^g^c
'167': qr(^q)
'168': ng^t
'169': arp
'170': h
'171': bh
'172': sd^c
'173': ^g
'174': o^r
'175': qy^c
'176': sd^e
'177': fw
'178': ar^r
'179': qy^m
'180': bc
'181': sv^t
'182': aap^m
'183': sd;no
'184': ng^r
'185': bf^g
'186': sd^e^t
'187': o^c
'188': b^r
'189': b^m^g
'190': ba
'191': t1
'192': qy^d(^q)
'193': nn^m
'194': ny
'195': ba,fe
'196': aa^m
'197': qh
'198': na^m
'199': oo(^q)
'200': qw^t
'201': na^t
'202': qh^h
'203': qy^d^m
'204': ny^m
'205': fa
'206': qy^d
'207': fc^t
'208': sd(^q)
'209': qy^d^r
'210': bf^m
'211': sd(^q)^t
'212': ft^t
'213': ^q^r
'214': sd^t
'215': sd(^q)^r
'216': ad^t
- name: damsl_act_tag
dtype:
class_label:
names:
'0': ad
'1': qo
'2': qy
'3': arp_nd
'4': sd
'5': h
'6': bh
'7': 'no'
'8': ^2
'9': ^g
'10': ar
'11': aa
'12': sv
'13': bk
'14': fp
'15': qw
'16': b
'17': ba
'18': t1
'19': oo_co_cc
'20': +
'21': ny
'22': qw^d
'23': x
'24': qh
'25': fc
'26': fo_o_fw_"_by_bc
'27': aap_am
'28': '%'
'29': bf
'30': t3
'31': nn
'32': bd
'33': ng
'34': ^q
'35': br
'36': qy^d
'37': fa
'38': ^h
'39': b^m
'40': ft
'41': qrr
'42': na
- name: caller
dtype: string
- name: utterance_index
dtype: int64
- name: subutterance_index
dtype: int64
- name: text
dtype: string
- name: pos
dtype: string
- name: trees
dtype: string
- name: ptb_treenumbers
dtype: string
- name: talk_day
dtype: string
- name: length
dtype: int64
- name: topic_description
dtype: string
- name: prompt
dtype: string
- name: from_caller
dtype: int64
- name: from_caller_sex
dtype: string
- name: from_caller_education
dtype: int64
- name: from_caller_birth_year
dtype: int64
- name: from_caller_dialect_area
dtype: string
- name: to_caller
dtype: int64
- name: to_caller_sex
dtype: string
- name: to_caller_education
dtype: int64
- name: to_caller_birth_year
dtype: int64
- name: to_caller_dialect_area
dtype: string
splits:
- name: train
num_bytes: 128498512
num_examples: 213543
- name: validation
num_bytes: 34749819
num_examples: 56729
- name: test
num_bytes: 2560127
num_examples: 4514
download_size: 14456364
dataset_size: 165808458
---
# Dataset Card for SwDA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The Switchboard Dialog Act Corpus](http://compprag.christopherpotts.net/swda.html)
- **Repository:** [cgpotts/swda](https://github.com/cgpotts/swda)
- **Paper:** [The Switchboard Dialog Act Corpus](http://compprag.christopherpotts.net/swda.html)
- **Leaderboard:** [Dialogue act classification](https://github.com/sebastianruder/NLP-progress/blob/master/english/dialogue.md#dialogue-act-classification)
- **Point of Contact:** [Christopher Potts](https://web.stanford.edu/~cgpotts/)
### Dataset Summary
The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2 with
turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the
associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.
The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to
align the two resources. In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the
conversations and their participants.
### Supported Tasks and Leaderboards
| Model | Accuracy | Paper / Source | Code |
| ------------- | :-----:| --- | --- |
| H-Seq2seq (Colombo et al., 2020) | 85.0 | [Guiding attention in Sequence-to-sequence models for Dialogue Act prediction](https://ojs.aaai.org/index.php/AAAI/article/view/6259/6115)
| SGNN (Ravi et al., 2018) | 83.1 | [Self-Governing Neural Networks for On-Device Short Text Classification](https://www.aclweb.org/anthology/D18-1105.pdf)
| CASA (Raheja et al., 2019) | 82.9 | [Dialogue Act Classification with Context-Aware Self-Attention](https://www.aclweb.org/anthology/N19-1373.pdf)
| DAH-CRF (Li et al., 2019) | 82.3 | [A Dual-Attention Hierarchical Recurrent Neural Network for Dialogue Act Classification](https://www.aclweb.org/anthology/K19-1036.pdf)
| ALDMN (Wan et al., 2018) | 81.5 | [Improved Dynamic Memory Network for Dialogue Act Classification with Adversarial Training](https://arxiv.org/pdf/1811.05021.pdf)
| CRF-ASN (Chen et al., 2018) | 81.3 | [Dialogue Act Recognition via CRF-Attentive Structured Network](https://arxiv.org/abs/1711.05568)
| Pretrained H-Transformer (Chapuis et al., 2020) | 79.3  | [Hierarchical Pre-training for Sequence Labelling in Spoken Dialog](https://www.aclweb.org/anthology/2020.findings-emnlp.239)
| Bi-LSTM-CRF (Kumar et al., 2017) | 79.2 | [Dialogue Act Sequence Labeling using Hierarchical encoder with CRF](https://arxiv.org/abs/1709.04250) | [Link](https://github.com/YanWenqiang/HBLSTM-CRF) |
| RNN with 3 utterances in context (Bothe et al., 2018) | 77.34 | [A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks](https://arxiv.org/abs/1805.06280) | |
### Languages
The language supported is English.
## Dataset Structure
Utterances are tagged with [SWBD-DAMSL](https://web.stanford.edu/~jurafsky/ws97/manual.august1.html) dialog-act tags.
### Data Instances
An example from the dataset is:
`{'act_tag': 115, 'caller': 'A', 'conversation_no': 4325, 'damsl_act_tag': 26, 'from_caller': 1632, 'from_caller_birth_year': 1962, 'from_caller_dialect_area': 'WESTERN', 'from_caller_education': 2, 'from_caller_sex': 'FEMALE', 'length': 5, 'pos': 'Okay/UH ./.', 'prompt': 'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?', 'ptb_basename': '4/sw4325', 'ptb_treenumbers': '1', 'subutterance_index': 1, 'swda_filename': 'sw00utt/sw_0001_4325.utt', 'talk_day': '03/23/1992', 'text': 'Okay. /', 'to_caller': 1519, 'to_caller_birth_year': 1971, 'to_caller_dialect_area': 'SOUTH MIDLAND', 'to_caller_education': 1, 'to_caller_sex': 'FEMALE', 'topic_description': 'CHILD CARE', 'transcript_index': 0, 'trees': '(INTJ (UH Okay) (. .) (-DFL- E_S))', 'utterance_index': 1}`
### Data Fields
* `swda_filename`: (str) The filename: directory/basename.
* `ptb_basename`: (str) The Treebank filename: add ".pos" for POS and ".mrg" for trees
* `conversation_no`: (int) The conversation Id, to key into the metadata database.
* `transcript_index`: (int) The line number of this item in the transcript (counting only utt lines).
* `act_tag`: (list of str) The Dialog Act Tags (separated by ||| in the file). Check Dialog act annotations for more details.
* `damsl_act_tag`: (list of str) The collapsed DAMSL act tags: the 217 `act_tag` variants clustered into 43 classes.
* `caller`: (str) A, B, @A, @B, @@A, @@B
* `utterance_index`: (int) The encoded index of the utterance (the number in A.49, B.27, etc.)
* `subutterance_index`: (int) Utterances can be broken across lines. This gives the internal position.
* `text`: (str) The text of the utterance
* `pos`: (str) The POS tagged version of the utterance, from PtbBasename+.pos
* `trees`: (str) The tree(s) containing this utterance (separated by ||| in the file). Use `[Tree.fromstring(t) for t in row_value.split("|||")]` to convert to (list of nltk.tree.Tree).
* `ptb_treenumbers`: (list of int) The tree numbers in the PtbBasename+.mrg
* `talk_day`: (str) Date of talk.
* `length`: (int) Length of talk in seconds.
* `topic_description`: (str) Short description of topic that's being discussed.
* `prompt`: (str) Long description/query/instruction.
* `from_caller`: (int) The numerical Id of the from (A) caller.
* `from_caller_sex`: (str) MALE, FEMALE.
* `from_caller_education`: (int) Caller education level 0, 1, 2, 3, 9.
* `from_caller_birth_year`: (int) Caller birth year YYYY.
* `from_caller_dialect_area`: (str) MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN.
* `to_caller`: (int) The numerical Id of the to (B) caller.
* `to_caller_sex`: (str) MALE, FEMALE.
* `to_caller_education`: (int) Caller education level 0, 1, 2, 3, 9.
* `to_caller_birth_year`: (int) Caller birth year YYYY.
* `to_caller_dialect_area`: (str) MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN.
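As noted above, multi-valued fields such as `act_tag` pack several values into a single string separated by `|||`. A minimal sketch of unpacking such a field (the row value below is invented for illustration):

```python
# Unpack a |||-separated SwDA act_tag field into a list of dialog-act tags.
# The raw value here is illustrative, not taken from the corpus.
raw_act_tag = "sd|||qy^d"

tags = raw_act_tag.split("|||")
print(tags)  # -> ['sd', 'qy^d']
```

The same `split("|||")` pattern applies to `trees`, whose pieces can then be parsed with `nltk.tree.Tree.fromstring` as described above.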
### Dialog act annotations
| | name | act_tag | example | train_count | full_count |
|----- |------------------------------- |---------------- |-------------------------------------------------- |------------- |------------ |
| 1 | Statement-non-opinion | sd | Me, I'm in the legal department. | 72824 | 75145 |
| 2 | Acknowledge (Backchannel) | b | Uh-huh. | 37096 | 38298 |
| 3 | Statement-opinion | sv | I think it's great | 25197 | 26428 |
| 4 | Agree/Accept | aa | That's exactly it. | 10820 | 11133 |
| 5 | Abandoned or Turn-Exit | % | So, - | 10569 | 15550 |
| 6 | Appreciation | ba | I can imagine. | 4633 | 4765 |
| 7 | Yes-No-Question | qy | Do you have to have any special training? | 4624 | 4727 |
| 8 | Non-verbal | x | [Laughter], [Throat_clearing] | 3548 | 3630 |
| 9 | Yes answers | ny | Yes. | 2934 | 3034 |
| 10 | Conventional-closing | fc | Well, it's been nice talking to you. | 2486 | 2582 |
| 11 | Uninterpretable | % | But, uh, yeah | 2158 | 15550 |
| 12 | Wh-Question | qw | Well, how old are you? | 1911 | 1979 |
| 13 | No answers | nn | No. | 1340 | 1377 |
| 14 | Response Acknowledgement | bk | Oh, okay. | 1277 | 1306 |
| 15 | Hedge | h | I don't know if I'm making any sense or not. | 1182 | 1226 |
| 16 | Declarative Yes-No-Question | qy^d | So you can afford to get a house? | 1174 | 1219 |
| 17 | Other | fo_o_fw_"_by_bc | Well give me a break, you know. | 1074 | 883 |
| 18 | Backchannel in question form | bh | Is that right? | 1019 | 1053 |
| 19 | Quotation | ^q | You can't be pregnant and have cats | 934 | 983 |
| 20 | Summarize/reformulate | bf | Oh, you mean you switched schools for the kids. | 919 | 952 |
| 21 | Affirmative non-yes answers | na | It is. | 836 | 847 |
| 22 | Action-directive | ad | Why don't you go first | 719 | 746 |
| 23 | Collaborative Completion | ^2 | Who aren't contributing. | 699 | 723 |
| 24 | Repeat-phrase | b^m | Oh, fajitas | 660 | 688 |
| 25 | Open-Question | qo | How about you? | 632 | 656 |
| 26 | Rhetorical-Questions | qh | Who would steal a newspaper? | 557 | 575 |
| 27 | Hold before answer/agreement | ^h | I'm drawing a blank. | 540 | 556 |
| 28 | Reject | ar | Well, no | 338 | 346 |
| 29 | Negative non-no answers | ng | Uh, not a whole lot. | 292 | 302 |
| 30 | Signal-non-understanding | br | Excuse me? | 288 | 298 |
| 31 | Other answers | no | I don't know | 279 | 286 |
| 32 | Conventional-opening | fp | How are you? | 220 | 225 |
| 33 | Or-Clause | qrr | or is it more of a company? | 207 | 209 |
| 34 | Dispreferred answers | arp_nd | Well, not so much that. | 205 | 207 |
| 35 | 3rd-party-talk | t3 | My goodness, Diane, get down from there. | 115 | 117 |
| 36 | Offers, Options, Commits | oo_co_cc | I'll have to check that out | 109 | 110 |
| 37 | Self-talk | t1 | What's the word I'm looking for | 102 | 103 |
| 38 | Downplayer | bd | That's all right. | 100 | 103 |
| 39 | Maybe/Accept-part | aap_am | Something like that | 98 | 105 |
| 40 | Tag-Question | ^g | Right? | 93 | 92 |
| 41 | Declarative Wh-Question | qw^d | You are what kind of buff? | 80 | 80 |
| 42 | Apology | fa | I'm sorry. | 76 | 79 |
| 43 | Thanking | ft | Hey thanks a lot | 67 | 78 |
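Tag frequencies like those in the table above are easy to reproduce with `collections.Counter` once per-utterance tags are in hand. A sketch using a handful of tag names from the table (the utterance tag sequence is invented for illustration):

```python
from collections import Counter

# A few collapsed DAMSL tags and their names, taken from the table above.
TAG_NAMES = {
    "sd": "Statement-non-opinion",
    "b": "Acknowledge (Backchannel)",
    "sv": "Statement-opinion",
    "qy": "Yes-No-Question",
}

# Invented sequence of per-utterance tags, for illustration only.
utterance_tags = ["sd", "b", "sd", "sv", "qy", "sd", "b"]

counts = Counter(utterance_tags)
for tag, n in counts.most_common():
    # Prints tags by frequency, most common first.
    print(f"{TAG_NAMES[tag]:<28} {n}")
```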
### Data Splits
Split information comes from the [Probabilistic-RNN-DA-Classifier](https://github.com/NathanDuran/Probabilistic-RNN-DA-Classifier) repo:
The training and test splits are the same as those used by [Stolcke et al. (2000)](https://web.stanford.edu/~jurafsky/ws97).
The development set is a subset of the training set, used to speed up development and testing in the paper [Probabilistic Word Association for Dialogue Act Classification with Recurrent Neural Networks](https://www.researchgate.net/publication/326640934_Probabilistic_Word_Association_for_Dialogue_Act_Classification_with_Recurrent_Neural_Networks_19th_International_Conference_EANN_2018_Bristol_UK_September_3-5_2018_Proceedings).
|Dataset |# Transcripts |# Utterances |
|-----------|:-------------:|:-------------:|
|Training |1115 |192,768 |
|Validation |21 |3,196 |
|Test |19 |4,088 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to align the two resources (Calhoun et al. 2010, §2.4). In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the conversations and their participants.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Christopher Potts](https://web.stanford.edu/~cgpotts/), Stanford Linguistics.
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.](http://creativecommons.org/licenses/by-nc-sa/3.0/)
### Citation Information
```
@techreport{Jurafsky-etal:1997,
Address = {Boulder, CO},
Author = {Jurafsky, Daniel and Shriberg, Elizabeth and Biasca, Debra},
Institution = {University of Colorado, Boulder Institute of Cognitive Science},
Number = {97-02},
Title = {Switchboard {SWBD}-{DAMSL} Shallow-Discourse-Function Annotation Coders Manual, Draft 13},
Year = {1997}}
@article{Shriberg-etal:1998,
Author = {Shriberg, Elizabeth and Bates, Rebecca and Taylor, Paul and Stolcke, Andreas and Jurafsky, Daniel and Ries, Klaus and Coccaro, Noah and Martin, Rachel and Meteer, Marie and Van Ess-Dykema, Carol},
Journal = {Language and Speech},
Number = {3--4},
Pages = {439--487},
Title = {Can Prosody Aid the Automatic Classification of Dialog Acts in Conversational Speech?},
Volume = {41},
Year = {1998}}
@article{Stolcke-etal:2000,
Author = {Stolcke, Andreas and Ries, Klaus and Coccaro, Noah and Shriberg, Elizabeth and Bates, Rebecca and Jurafsky, Daniel and Taylor, Paul and Martin, Rachel and Meteer, Marie and Van Ess-Dykema, Carol},
Journal = {Computational Linguistics},
Number = {3},
Pages = {339--371},
Title = {Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech},
Volume = {26},
Year = {2000}}
```
### Contributions
Thanks to [@gmihaila](https://github.com/gmihaila) for adding this dataset.