id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
DISCOX/DISCO-10K-random | 2023-06-20T14:25:17.000Z | [
"license:cc-by-4.0",
"region:us"
] | DISCOX | null | null | 1 | 127 | 2023-06-10T19:17:26 | ---
license: cc-by-4.0
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: video_duration_youtube_sec
dtype: float64
- name: preview_url_spotify
dtype: string
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: search_query_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: primary_artist_name_spotify
dtype: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 47861223.0
num_examples: 10000
download_size: 57725964
dataset_size: 47861223.0
---
### Getting Started
You can download the dataset with the Hugging Face `datasets` library:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-10K-random")
```
The dataset contains 10,000 random samples from the DISCO-10M dataset found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
## Dataset Structure
The dataset contains the following features:
```
{
'video_url_youtube',
'video_title_youtube',
'track_name_spotify',
'video_duration_youtube_sec',
'preview_url_spotify',
'video_view_count_youtube',
'video_thumbnail_url_youtube',
'search_query_youtube',
'video_description_youtube',
'track_id_spotify',
'album_id_spotify',
'artist_id_spotify',
'track_duration_spotify_ms',
'primary_artist_name_spotify',
'track_release_date_spotify',
'explicit_content_spotify',
'similarity_duration',
'similarity_query_video_title',
'similarity_query_description',
'similarity_audio',
'audio_embedding_spotify',
'audio_embedding_youtube',
}
```
More details about the dataset can be found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
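As an illustration of how the `similarity_*` columns relate to the embedding columns, the sketch below computes a cosine similarity between two toy vectors standing in for `audio_embedding_spotify` and `audio_embedding_youtube`. This is only a plausible interpretation of `similarity_audio`, not the dataset's documented computation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for `audio_embedding_spotify` / `audio_embedding_youtube`.
spotify_emb = [0.1, 0.3, -0.2]
youtube_emb = [0.1, 0.3, -0.2]
print(cosine_similarity(spotify_emb, youtube_emb))  # ≈ 1.0 (identical vectors)
```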
<!--
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
--> | 2,517 | [
[
-0.05059814453125,
-0.04095458984375,
0.0028400421142578125,
0.03509521484375,
-0.0034275054931640625,
0.0046539306640625,
-0.0084991455078125,
0.0005030632019042969,
0.05206298828125,
0.04400634765625,
-0.0811767578125,
-0.05340576171875,
-0.02813720703125,
... |
jason-lee08/TinyStoriesWithExclamationsSmall | 2023-08-20T03:52:44.000Z | [
"region:us"
] | jason-lee08 | null | null | 0 | 127 | 2023-08-03T01:20:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 23826331
num_examples: 21197
- name: validation
num_bytes: 236180
num_examples: 220
download_size: 8127925
dataset_size: 24062511
---
# Dataset Card for "TinyStoriesWithExclamationsSmall"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 629 | [
[
-0.039276123046875,
-0.0104827880859375,
0.02545166015625,
0.0222015380859375,
-0.015167236328125,
-0.0030059814453125,
0.00970458984375,
-0.003978729248046875,
0.05615234375,
0.02166748046875,
-0.06231689453125,
-0.049102783203125,
-0.04052734375,
-0.008544... |
magnifi/contextual-tiny-v1 | 2023-09-13T17:22:57.000Z | [
"region:us"
] | magnifi | null | null | 0 | 127 | 2023-09-13T17:22:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: user_text
dtype: string
- name: true_intent
dtype: string
- name: chat_history
dtype: string
- name: contextual
dtype: bool
- name: in_regression_test
dtype: bool
- name: synthetic
dtype: bool
- name: prompt
dtype: string
- name: completion
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 106909.92835858747
num_examples: 100
- name: validation
num_bytes: 10722.453155139157
num_examples: 10
download_size: 42788
dataset_size: 117632.38151372662
---
# Dataset Card for "contextual-tiny-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 914 | [
[
-0.04608154296875,
-0.028900146484375,
0.0262603759765625,
0.015380859375,
-0.02215576171875,
-0.0318603515625,
0.0079345703125,
-0.00943756103515625,
0.06646728515625,
0.0260009765625,
-0.0712890625,
-0.043365478515625,
-0.030731201171875,
-0.02162170410156... |
hahminlew/kream-product-blip-captions | 2023-10-16T10:33:42.000Z | [
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-4.0",
"fashion",
"cloth",
"computer-vision",
"region:us"
] | hahminlew | null | null | 2 | 127 | 2023-10-10T23:39:49 | ---
license: cc-by-nc-sa-4.0
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1363424468
num_examples: 14904
download_size: 1328309729
dataset_size: 1363424468
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-to-image
language:
- en
size_categories:
- 10K<n<100K
images_reference:
- KREAM (https://kream.co.kr/)
pretty_name: KREAM Product Blip Captions
tags:
- fashion
- cloth
- computer-vision
---
## KREAM Product Blip Captions Dataset Information

**KREAM Product Blip Captions Dataset** is a dataset for fine-tuning a text-to-image generative model, collected from [KREAM](https://kream.co.kr/), one of the best online resell markets in Korea.
This dataset consists of 'image' and 'text' pairs.
The 'text' field follows the format 'category (e.g. outer), original product name (e.g. The North Face 1996 Eco Nuptse Jacket Black), BLIP caption (e.g. a photography of the north face black down jacket)'.
You can easily construct this dataset and fine-tune Stable Diffusion from scratch using [easy-finetuning-stable-diffusion](https://github.com/hahminlew/easy-finetuning-stable-diffusion).
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("hahminlew/kream-product-blip-captions", split="train")
sample = dataset[0]
display(sample["image"].resize((256, 256)))
print(sample["text"])
```

```
outer, The North Face 1996 Eco Nuptse Jacket Black, a photography of the north face black down jacket
```
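Since each caption follows the 'category, product name, BLIP caption' pattern described above, it can be split into its three parts. The helper below is a minimal sketch; it assumes the first two commas delimit the fields, so product names containing commas would need extra care.

```python
def parse_caption(text: str) -> dict:
    """Split a caption into its three documented parts (sketch only)."""
    category, product, caption = text.split(", ", 2)
    return {"category": category, "product": product, "caption": caption}

sample_text = ("outer, The North Face 1996 Eco Nuptse Jacket Black, "
               "a photography of the north face black down jacket")
parsed = parse_caption(sample_text)
print(parsed["category"])  # outer
```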
## Application
You can run inference with the fine-tuned Stable Diffusion XL LoRA based on this dataset here: [hahminlew/sdxl-kream-model-lora-2.0](https://huggingface.co/hahminlew/sdxl-kream-model-lora-2.0)
## Citation
If you use KREAM Product Dataset in your research or projects, please cite it as:
```
@misc{lew2023kream,
author = {Lew, Hah Min},
title = {KREAM Product BLIP Captions},
year={2023},
howpublished= {\url{https://huggingface.co/datasets/hahminlew/kream-product-blip-captions/}}
}
``` | 2,152 | [
[
-0.01409912109375,
-0.04962158203125,
0.007049560546875,
0.04022216796875,
-0.025726318359375,
0.0027484893798828125,
-0.0025691986083984375,
-0.0228729248046875,
0.017059326171875,
0.05279541015625,
-0.041290283203125,
-0.06097412109375,
-0.036376953125,
-0... |
GEM/turku_hockey_data2text | 2022-10-24T15:30:33.000Z | [
"task_categories:table-to-text",
"annotations_creators:expert-created",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:fi",
"license:cc-by-nc-sa-4.0",
"data-to-text",
"region:us"
] | GEM | The Turku Hockey Data2Text corpus was developed as a benchmark for evaluating template-free, machine learning methods on Finnish news generation in the area of ice hockey reporting. This dataset is a collection of 3,454 ice hockey games, each including game statistics and a news article describing the game. Each game includes manual alignment of events (such as goals or penalties) and sentences describing the specific event in natural language extracted from the news article. The corpus includes 12,827 annotated events. The natural language passages are manually curated not to include any information not derivable from the input data or world knowledge. | @inproceedings{kanerva2019newsgen,
Title = {Template-free Data-to-Text Generation of Finnish Sports News},
Author = {Jenna Kanerva and Samuel R{\"o}nnqvist and Riina Kekki and Tapio Salakoski and Filip Ginter},
booktitle = {Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa’19)},
year={2019}
} | 0 | 126 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-created
language_creators:
- unknown
language:
- fi
license:
- cc-by-nc-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: turku_hockey_data2text
tags:
- data-to-text
---
# Dataset Card for GEM/turku_hockey_data2text
## Dataset Description
- **Homepage:** https://turkunlp.org/hockey_data2text.html
- **Repository:** https://github.com/TurkuNLP/Turku-hockey-data2text
- **Paper:** https://aclanthology.org/W19-6125/
- **Leaderboard:** N/A
- **Point of Contact:** Jenna Kanerva, Filip Ginter
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/turku_hockey_data2text).
### Dataset Summary
This is a Finnish data-to-text dataset in which the input is structured information about a hockey game and the output a description of the game.
You can load the dataset via:
```python
import datasets
data = datasets.load_dataset('GEM/turku_hockey_data2text')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/turku_hockey_data2text).
#### website
[Website](https://turkunlp.org/hockey_data2text.html)
#### paper
[ACL anthology](https://aclanthology.org/W19-6125/)
#### authors
Jenna Kanerva, Samuel Rönnqvist, Riina Kekki, Tapio Salakoski, Filip Ginter (TurkuNLP / University of Turku)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Website](https://turkunlp.org/hockey_data2text.html)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/TurkuNLP/Turku-hockey-data2text)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL anthology](https://aclanthology.org/W19-6125/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{kanerva2019newsgen,
Title = {Template-free Data-to-Text Generation of Finnish Sports News},
Author = {Jenna Kanerva and Samuel R{\"o}nnqvist and Riina Kekki and Tapio Salakoski and Filip Ginter},
booktitle = {Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa’19)},
year={2019}
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Jenna Kanerva, Filip Ginter
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
jmnybl@utu.fi, figint@utu.fi
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
written standard language
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`Finnish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The original news articles are written by professional journalists. The text passages extracted in the annotation may be slightly edited compared to the original language during the corpus annotation.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-nc-sa-4.0: Creative Commons Attribution Non Commercial Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
This dataset was developed as a benchmark for evaluating template-free, machine learning methods on Finnish news generation in the area of ice hockey reporting.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Describe an event from an ice hockey game based on the given structural data.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
University of Turku
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Jenna Kanerva, Samuel Rönnqvist, Riina Kekki, Tapio Salakoski, Filip Ginter (TurkuNLP / University of Turku)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
The project was supported by the Google Digital News Innovation Fund.
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenna Kanerva, Filip Ginter (TurkuNLP / University of Turku)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
The dataset is constructed of games, where each game is a list of events. If an event was annotated (i.e. a corresponding sentence was found in the news article), its `text` field holds a value other than the empty string ("").
For each game (dict), there are keys `gem_id` (string), `id` (string), `news_article` (string), and `events` (list).
For each event (dict), different keys carry non-empty values depending on the event type (e.g. goal or penalty). The mandatory keys for each event are `event_id` (string), `event_type` (string), `text` (string, empty string if not annotated), and `multi_reference` (bool). Keys not relevant to the specific event type are left empty.
For every event type, the following keys are relevant:
`event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
`event_type`: Type of the event, possible values are `game result`, `goal`, `penalty`, or `saves` (string)
`text`: Natural language description of the event, or empty string if not available (string)
`multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
The rest of the fields are specific to the event type. The relevant fields for each event type are:
game result:
`event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
`event_type`: Type of the event (string)
`home_team`: Name of the home team (string)
`guest_team`: Name of the guest team (string)
`score`: Final score of the game, in the form of home–guest (string)
`periods`: Scores for individual periods, each in the form of home–guest score in that period (list of strings)
`features`: Additional features, such as overtime win or shoot out (list of strings)
`text`: Natural language description of the event, or empty string if not available (string)
`multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
goal:
`event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
`event_type`: Type of the event (string)
`player`: Name of the player scoring (string)
`assist`: Names of the players assisting, at most two players (list of strings)
`team`: Team scoring with possible values of `home` or `guest` (string)
`team_name`: Name of the team scoring (string)
`score`: Score after the goal, in the form of home–guest (string)
`time`: Time of the goal, minutes and seconds from the beginning (string)
`features`: Additional features, such as power play or short-handed goal (list of strings)
`text`: Natural language description of the event, or empty string if not available (string)
`multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
penalty:
`event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
`event_type`: Type of the event (string)
`player`: Name of the player getting the penalty (string)
`team`: Team getting the penalty with possible values of `home` or `guest` (string)
`team_name`: Name of the team getting the penalty (string)
`penalty_minutes`: Penalty minutes (string)
`time`: Time of the penalty, minutes and seconds from the beginning (string)
`text`: Natural language description of the event, or empty string if not available (string)
`multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
saves:
`event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
`event_type`: Type of the event (string)
`player`: Name of the goalkeeper (string)
`team`: Team of the goalkeeper with possible values of `home` or `guest` (string)
`team_name`: Name of the team (string)
`saves`: Number of saves in the game (string)
`text`: Natural language description of the event, or empty string if not available (string)
`multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
Text passages describing multiple events (multi_reference):
Some text passages refer to multiple events in such a way that separating them into individual statements is not adequate (e.g. "The home team received two penalties towards the end of the first period."). In these cases, multiple events are aligned to the same text passage: the first event (in chronological order) includes the annotated text passage, while the rest of the events referring to the same passage carry the identifier of the first event in their `text` field (e.g. `text`: "E4").
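A minimal sketch of resolving these pointers within one game, assuming that a pointer value such as "E4" is exactly the `event_id` of the event holding the shared passage; the event dicts here are simplified stand-ins, not the full schema.

```python
def resolve_texts(events):
    """Replace pointer-style `text` values with the passage they point to."""
    by_id = {e["event_id"]: e for e in events}
    resolved = []
    for e in events:
        text = e["text"]
        # A multi_reference event whose text names another event id is a pointer.
        if e["multi_reference"] and text in by_id:
            text = by_id[text]["text"]
        resolved.append(text)
    return resolved

events = [
    {"event_id": "E4", "text": "Two penalties late in the first period.", "multi_reference": True},
    {"event_id": "E5", "text": "E4", "multi_reference": True},
]
print(resolve_texts(events))
```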
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
'gem_id': 'gem-turku_hockey_data2text-train-0',
'id': '20061031-TPS-HPK',
'news_article': 'HPK:n hyvä syysvire jatkuu jääkiekon SM-liigassa. Tiistaina HPK kukisti mainiolla liikkeellä ja tehokkaalla ylivoimapelillä TPS:n vieraissa 1–0 (1–0, 0–0, 0–0).\nHPK hyödynsi ylivoimaa mennen jo ensimmäisessä erässä Mikko Mäenpään maalilla 1–0 -johtoon.\nToisessa ja kolmannessa erässä HPK tarjosi edelleen TPS:lle runsaasti tilanteita, mutta maalia eivät turkulaiset millään ilveellä saaneet. Pahin este oli loistavan pelin Hämeenlinnan maalilla pelannut Mika Oksa.\nTPS:n maalissa Jani Hurme ei osumille mitään mahtanut. Joukkueen suuri yksinäinen kenttäpelaaja oli Kai Nurminen, mutta hänelläkään ei ollut onnea maalitilanteissa.',
'events':
{
'event_id': ['E1', 'E2', 'E3'],
'event_type': ['game result', 'penalty', 'goal'],
'text': ['HPK kukisti TPS:n vieraissa 1–0 (1–0, 0–0, 0–0).', '', 'HPK hyödynsi ylivoimaa mennen jo ensimmäisessä erässä Mikko Mäenpään maalilla 1–0 -johtoon.'],
'home_team': ['TPS', '', ''],
'guest_team': ['HPK', '', ''],
'score': ['0–1', '', '0–1'],
'periods': [['0–1', '0–0', '0–0'], [], []],
'features': [[], [], ['power play']],
'player': ['', 'Fredrik Svensson', 'Mikko Mäenpää'],
'assist': [[], [], ['Jani Keinänen', 'Toni Mäkiaho']],
'team': ['', 'guest', 'guest'],
'team_name': ['', 'HPK', 'HPK'],
'time': ['', '9.28', '14.57'],
'penalty_minutes': ['', '2', ''],
'saves': ['', '', ''],
'multi_reference': [false, false, false]
}
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The corpus includes three splits: train, validation, and test.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
The dataset was created to develop machine-learned text generation models for Finnish ice hockey news, where the generated text should reflect the natural language variation found in game reports written by professional journalists. While the original game reports often include additional information not derivable from the game statistics, the corpus was fully manually curated to remove all such information from the natural language descriptions. The rationale for this curation was to prevent the model from 'hallucinating' additional facts.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
This is the only data2text corpus for Finnish in GEM.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
morphological inflection, language variation
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`data points modified`
#### Modification Details
<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
Structural data was translated into English.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`, `METEOR`, `ROUGE`, `WER`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
Automatic evaluation: BLEU, NIST, METEOR, ROUGE-L, CIDEr
Manual evaluation: factual mistakes, grammatical errors, minimum edit distance to an acceptable game report (using WER)
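As a sketch of the WER-based manual evaluation, word error rate can be computed as a word-level Levenshtein distance normalized by the reference length; this is the standard definition, not necessarily the paper's exact implementation.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("HPK won the game", "HPK won the match"))  # 1 substitution / 4 words = 0.25
```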
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset is designed for text generation (data2text), where the original source of the natural language descriptions is news articles written by journalists. Because the link between the structural data (ice hockey game statistics) and the news articles describing the games was quite weak (the articles include a lot of information not derivable from the statistics, while leaving many events unmentioned), the corpus includes full manual annotation aligning the events extracted from game statistics with the corresponding natural language passages extracted from the news articles.
Each event is manually aligned into a sentence-like passage, and in case a suitable passage was not found, the annotation is left empty (with value `None`). The extracted passages were manually modified not to include additional information not derivable from the game statistics, or not considered as world knowledge. The manual curation of passages is designed to prevent model hallucination, i.e. model learning to generate facts not derivable from the input data.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Describing the given events (structural data) in natural language, and therefore generating ice hockey game reports.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Other`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The initial data, both game statistics and news articles, were obtained from the Finnish News Agency STT news archives released for academic use (http://urn.fi/urn:nbn:fi:lb-2019041501). The original news articles are written by professional journalists.
We (TurkuNLP) gratefully acknowledge the collaboration of Maija Paikkala, Salla Salmela and Pihla Lehmusjoki from the Finnish News Agency STT while creating the corpus.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
Ice hockey, news
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
algorithmically
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
Only games where both the game statistics and a news article describing the game were available (matched on timestamps and team names) were included.
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
1
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
Members of the TurkuNLP research group, native speakers of Finnish.
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
1
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
1
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
Manual alignment of events and their natural language descriptions. Removing information not derivable from the input data or world knowledge in order to prevent the model 'hallucination'.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Manual inspection of examples during the initial annotation training phase.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
The corpus license was agreed with the providers of the source material.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
yes/very likely
#### Categories of PII
<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The dataset represents only written standard language.
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
None
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
Dataset: `Hellisotherpeople/DebateSum` (author: Hellisotherpeople; created 2022-03-02; last modified 2022-12-03; 8 likes; 126 downloads)

---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
- summarization
- text-retrieval
- text-generation
task_ids:
- abstractive-qa
- document-retrieval
- extractive-qa
pretty_name: 'DebateSum: A large-scale argument mining and summarization dataset'
language_bcp47:
- en-US
tags:
- conditional-text-generation
---
# DebateSum
Corresponding code repo for the upcoming paper at ARGMIN 2020: "DebateSum: A large-scale argument mining and summarization dataset"
Arxiv pre-print available here: https://arxiv.org/abs/2011.07251
Check out the presentation date and time here: https://argmining2020.i3s.unice.fr/node/9
Full paper as presented by the ACL is here: https://www.aclweb.org/anthology/2020.argmining-1.1/
Video of presentation at COLING 2020: https://underline.io/lecture/6461-debatesum-a-large-scale-argument-mining-and-summarization-dataset
The dataset is distributed as csv files.
A search engine over DebateSum (as well as some additional evidence not included in DebateSum) is available as [debate.cards](http://debate.cards/). It's very good quality and allows for the evidence to be viewed in the format that debaters use.
# Data
DebateSum consists of **187,328** debate documents, arguments (which can also be thought of as abstractive summaries, or queries), word-level extractive summaries, citations, and associated metadata, organized by topic-year. This data is ready for analysis by NLP systems.
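Word-level extractive summaries keep only a fraction of the source document's words. A quick way to quantify this is a word-level compression ratio; the function and example strings below are purely illustrative, not taken from the dataset:

```python
def compression_ratio(document: str, summary: str) -> float:
    """Fraction of the document's words retained by an extractive summary."""
    return len(summary.split()) / len(document.split())

# Toy example (not actual DebateSum text):
doc = "The quick brown fox jumps over the lazy dog near the river bank"
summary = "fox jumps over dog"
print(round(compression_ratio(doc, summary), 2))  # 4 of 13 words kept -> 0.31
```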
## Download
All data is accessible in a parsed format organized by topic year [here](https://mega.nz/folder/ZdQGmK6b#-0hoBWc5fLYuxQuH25feXg)
Additionally, the trained word-vectors for [debate2vec](https://github.com/Hellisotherpeople/debate2vec) are also found in that folder.
## Regenerating it yourself
This is useful as the debaters who produce the evidence release their work every year. Soon enough I will update to include the 2020-2021 topic.
*Step 1: Download all open evidence files from [Open Evidence](https://openev.debatecoaches.org/) and unzip them into a directory. The links are as follows:*
* [2019](https://s3.amazonaws.com/openev/2019OpenEv.zip) - Resolved: The United States federal government should substantially reduce Direct Commercial Sales and/or Foreign Military Sales of arms from the United States.
* [2018](https://s3.amazonaws.com/openev/2018OpenEv.zip) - Resolved: The United States federal government should substantially reduce its restrictions on legal immigration to the United States.
* [2017](https://s3.amazonaws.com/openev/2017OpenEv.zip) - Resolved: The United States federal government should substantially increase its funding and/or regulation of elementary and/or secondary education in the United States.
* [2016](https://s3.amazonaws.com/openev/2016OpenEv.zip) - Resolved: The United States federal government should substantially increase its economic and/or diplomatic engagement with the People’s Republic of China.
* [2015](https://s3.amazonaws.com/openev/2015OpenEv.zip) - Resolved: The United States federal government should substantially curtail its domestic surveillance.
* [2014](https://s3.amazonaws.com/openev/2014OpenEv.zip) - Resolved: The United States federal government should substantially increase its non-military exploration and/or development of the Earth’s oceans.
* [2013](https://s3.amazonaws.com/openev/2013OpenEv.zip) - Resolved: The United States federal government should substantially increase its economic engagement toward Cuba, Mexico or Venezuela.
*Step 2: Convert all evidence from docx files to html5 files using [pandoc](https://pandoc.org/) with this command:*
```
for f in *.docx; do pandoc "$f" -s -o "${f%.docx}.html5"; done
```
*Step 3: install the dependencies for make_debate_dataset.py.*
```
pip install -r requirements.txt
```
*Step 4: Modify the folder and file locations as needed for your system, and run make_debate_dataset.py*
```
python3 make_debate_dataset.py
```
# Credits
Huge thanks to [Arvind Balaji](https://github.com/arvind-balaji) for making debate.cards and being second author on this paper!
Dataset: `SocialGrep/one-year-of-r-india` (author: SocialGrep; created 2022-03-02; last modified 2022-07-01; 1 like; 126 downloads)
Description: This corpus contains the complete data for the activity of the subreddit /r/India from Sep 30, 2020 to Sep 30, 2021.

---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for one-year-of-r-india
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearofrindia)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearofrindia)
### Dataset Summary
This corpus contains the complete data for the activity of the subreddit /r/India from Sep 30, 2020 to Sep 30, 2021.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
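The base-36 IDs and UTC timestamps in the fields above are easy to work with in plain Python. A minimal sketch; the helper names and the sample id `q4zvn` are our own illustrations, not part of the dataset:

```python
from datetime import datetime, timezone

def base36_decode(reddit_id: str) -> int:
    """Decode a base-36 Reddit ID (post, comment, or subreddit id) to an integer."""
    return int(reddit_id, 36)

def base36_encode(n: int) -> str:
    """Inverse of base36_decode, using Reddit's lowercase digit alphabet."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = ""
    while True:
        n, r = divmod(n, 36)
        out = digits[r] + out
        if n == 0:
            return out

def utc_to_datetime(created_utc: int) -> datetime:
    """Convert a created_utc field to a timezone-aware datetime."""
    return datetime.fromtimestamp(created_utc, tz=timezone.utc)

print(base36_decode("q4zvn"))                 # integer form of a sample id
print(base36_encode(base36_decode("q4zvn")))  # round-trips back to "q4zvn"
print(utc_to_datetime(1633046400).year)       # 2021
```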
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information]
Dataset: `SocialGrep/ten-million-reddit-answers` (author: SocialGrep; created 2022-03-02; last modified 2022-07-01; 6 likes; 126 downloads)
Description: A spiritual successor to our One Million Questions, this NLP dataset contains an outstanding ten million /r/AskReddit answers, going back from the end of November of 2020.

---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for ten-million-reddit-answers
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers)
### Dataset Summary
This corpus contains ten million question-answer pairs, labeled with score and pre-packaged with results of a basic sentiment predictor.
The data was procured from /r/AskReddit using [SocialGrep](https://socialgrep.com/?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers).
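Since posts and comments ship in separate files, reconstructing question-answer pairs means joining each comment back to its parent post. A minimal sketch with toy records; the field names mirror the Data Fields section below, and we assume the comment `permalink` follows the standard Reddit structure `/r/<subreddit>/comments/<post_id>/<slug>/<comment_id>/`:

```python
def post_id_from_permalink(permalink: str) -> str:
    """Extract the base-36 post id from a Reddit comment permalink."""
    parts = permalink.strip("/").split("/")
    return parts[parts.index("comments") + 1]

def build_qa_pairs(posts, comments):
    """Pair each comment body (answer) with its parent post title (question)."""
    titles = {p["id"]: p["title"] for p in posts}
    pairs = []
    for c in comments:
        pid = post_id_from_permalink(c["permalink"])
        if pid in titles:
            pairs.append((titles[pid], c["body"]))
    return pairs

# Toy records, not real dataset rows:
posts = [{"id": "abc123", "title": "What book changed your life?"}]
comments = [{"permalink": "/r/AskReddit/comments/abc123/what_book/def456/",
             "body": "Probably Goedel, Escher, Bach."}]
print(build_qa_pairs(posts, comments))
```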
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information]
Dataset: `diwank/silicone-merged` (author: diwank; created 2022-03-02; last modified 2022-03-06; 1 like; 126 downloads)
Description: Merged and simplified dialog act datasets from the silicone collection.

---
license: mit
---
# diwank/silicone-merged
> Merged and simplified dialog act datasets from the [silicone collection](https://huggingface.co/datasets/silicone/)
All of the subsets of the original collection have been filtered (for errors and ambiguous classes), merged together, and grouped into pairs of dialog turns. It is hypothesized that training a dialog act classifier with the previous utterance included helps the model pick up additional contextual cues and perform better at inference, especially if an utterance pair is provided.
## Example training script
```python
from datasets import load_dataset
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
# Get data
silicone_merged = load_dataset("diwank/silicone-merged")
train_df = silicone_merged["train"]
eval_df = silicone_merged["validation"]
model_args = ClassificationArgs(
num_train_epochs=8,
model_type="deberta",
model_name="microsoft/deberta-large",
use_multiprocessing=False,
evaluate_during_training=True,
)
# Create a ClassificationModel
model = ClassificationModel("deberta", "microsoft/deberta-large", args=model_args, num_labels=11) # 11 labels in this dataset
# Train model
model.train_model(train_df, eval_df=eval_df)
```
## Balanced variant of the training set
**Note**: This dataset is highly imbalanced and it is recommended to use a library like [imbalanced-learn](https://imbalanced-learn.org/stable/) before proceeding with training.
Since balancing can be complicated and resource-intensive, we have shared a balanced variant of the train set that was created via oversampling using the _imbalanced-learn_ library. The balancing used the `SMOTEN` algorithm to deal with categorical data clustering and was resampled on a 16-core, 60GB RAM machine. You can access it using:
```load_dataset("diwank/silicone-merged", "balanced")```
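If you prefer not to use the pre-balanced variant, even naive random oversampling of minority classes goes a long way. A stdlib-only sketch for illustration; note the actual balanced split was built with SMOTEN, which synthesizes new samples rather than duplicating existing ones:

```python
import random
from collections import defaultdict

def random_oversample(rows, label_key="labels", seed=0):
    """Duplicate minority-class rows until every class matches the majority count."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for row in rows:
        by_label[row[label_key]].append(row)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=target - len(group)))
    return balanced

# Toy rows with an imbalanced label distribution (4 x label 5 vs 1 x label 8):
rows = [{"labels": 5}] * 4 + [{"labels": 8}]
balanced = random_oversample(rows)
print(len(balanced))  # 8 rows: both classes brought up to 4
```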
## Feature description
- `text_a`: The utterance prior to the utterance being classified. (Say for dialog with turns 1-2-3, if we are trying to find the dialog act for 2, text_a is 1)
- `text_b`: The utterance to be classified
- `labels`: Dialog act label (as integer between 0-10, as mapped below)
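The `(text_a, text_b)` pairing described above can be produced from any ordered list of dialog turns. A minimal sketch; the function name is ours, and how the dataset handles the very first turn of a dialog is not specified, so we use an empty string for its missing prior context:

```python
def make_utterance_pairs(turns):
    """For turns [t1, t2, t3], yield ("", t1), (t1, t2), (t2, t3)."""
    prev = ""
    pairs = []
    for turn in turns:
        pairs.append({"text_a": prev, "text_b": turn})
        prev = turn
    return pairs

print(make_utterance_pairs(["Hi there.", "Hello!", "How are you?"]))
```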
## Labels map
```python
[
(0, 'acknowledge')
(1, 'answer')
(2, 'backchannel')
(3, 'reply_yes')
(4, 'exclaim')
(5, 'say')
(6, 'reply_no')
(7, 'hold')
(8, 'ask')
(9, 'intent')
(10, 'ask_yes_no')
]
```
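The map above is purely positional, so predictions can be converted back to names with a simple list (taken directly from the map):

```python
ID2LABEL = [
    "acknowledge", "answer", "backchannel", "reply_yes", "exclaim",
    "say", "reply_no", "hold", "ask", "intent", "ask_yes_no",
]
LABEL2ID = {name: i for i, name in enumerate(ID2LABEL)}

print(ID2LABEL[8])         # "ask"
print(LABEL2ID["intent"])  # 9
```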
*****
## Appendix
### How the original datasets were mapped:
```python
mapping = {
"acknowledge": {
"swda": [
"aap_am",
"b",
"bk"
],
"mrda": [],
"oasis": [
"ackn",
"accept",
"complete"
],
"maptask": [
"acknowledge",
"align"
],
"dyda_da": [
"commissive"
]
},
"answer": {
"swda": [
"bf",
],
"mrda": [],
"oasis": [
"answ",
"informCont",
"inform",
"answElab",
"directElab",
"refer"
],
"maptask": [
"reply_w",
"explain"
],
"dyda_da": [
"inform"
]
},
"backchannel": {
"swda": [
"ad",
"bh",
"bd",
"b^m"
],
"mrda": [
"b"
],
"oasis": [
"backch",
"selfTalk",
"init"
],
"maptask": ["ready"],
"dyda_da": []
},
"reply_yes": {
"swda": [
"na",
"aa"
],
"mrda": [],
"oasis": [
"confirm"
],
"maptask": [
"reply_y"
],
"dyda_da": []
},
"exclaim": {
"swda": [
"ft",
"fa",
"fc",
"fp"
],
"mrda": [],
"oasis": [
"appreciate",
"bye",
"exclaim",
"greet",
"thank",
"pardon",
"thank-identitySelf",
"expressRegret"
],
"maptask": [],
"dyda_da": []
},
"say": {
"swda": [
"qh",
"sd"
],
"mrda": ["s"],
"oasis": [
"expressPossibility",
"expressOpinion",
"suggest"
],
"maptask": [],
"dyda_da": []
},
"reply_no": {
"swda": [
"nn",
"ng",
"ar"
],
"mrda": [],
"oasis": [
"refuse",
"negate"
],
"maptask": [
"reply_n"
],
"dyda_da": []
},
"hold": {
"swda": [
"^h",
"t1"
],
"mrda": [
"f"
],
"oasis": [
"hold"
],
"maptask": [],
"dyda_da": []
},
"ask": {
"swda": [
"qw",
"qo",
"qw^d",
"br",
"qrr"
],
"mrda": [
"q"
],
"oasis": [
"reqInfo",
"reqDirect",
"offer"
],
"maptask": [
"query_w"
],
"dyda_da": [
"question"
]
},
"intent": {
"swda": [],
"mrda": [],
"oasis": [
"informIntent",
"informIntent-hold",
"expressWish",
"direct",
"raiseIssue",
"correct"
],
"maptask": [
"instruct",
"clarify"
],
"dyda_da": [
"directive"
]
},
"ask_yes_no": {
"swda": [
"qy^d",
"^g"
],
"mrda": [],
"oasis": [
"reqModal"
],
"maptask": [
"query_yn",
"check"
],
"dyda_da": []
}
}
```
Dataset: `codeparrot/xlcost-text-to-code` (author: codeparrot; created 2022-07-13; last modified 2022-10-25; 24 likes; 126 downloads)
Description: XLCoST is a machine learning benchmark dataset that contains fine-grained parallel data in 7 commonly used programming languages (C++, Java, Python, C#, Javascript, PHP, C), and natural language (English).
Citation: Zhu et al., "XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence", 2022 (arXiv:2206.08474).

---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: xlcost-text-to-code
---
# XLCost for text-to-code synthesis
## Dataset Description
This is a subset of [XLCoST benchmark](https://github.com/reddy-lab-code-research/XLCoST), for text-to-code generation at snippet level and program level for **7** programming languages: `Python, C, C#, C++, Java, Javascript and PHP`.
## Languages
The dataset contains text in English and its corresponding code translation. Each program is divided into several code snippets, so the snippet-level subsets contain these code snippets with their corresponding comments; for the program-level subsets, the comments were concatenated into one long description. Moreover, programs in all the languages are aligned at the snippet level, and the comment for a particular snippet is the same across all the languages.
## Dataset Structure
To load the dataset you need to specify a subset among the **14 existing instances**: `LANGUAGE-snippet-level`/`LANGUAGE-program-level` for `LANGUAGE` in `[Python, C, Csharp, C++, Java, Javascript and PHP]`. By default `Python-snippet-level` is loaded.
```python
from datasets import load_dataset
load_dataset("codeparrot/xlcost-text-to-code", "Python-program-level")
DatasetDict({
train: Dataset({
features: ['text', 'code'],
num_rows: 9263
})
test: Dataset({
features: ['text', 'code'],
num_rows: 887
})
validation: Dataset({
features: ['text', 'code'],
num_rows: 472
})
})
```
```python
next(iter(data["train"]))
{'text': 'Maximum Prefix Sum possible by merging two given arrays | Python3 implementation of the above approach ; Stores the maximum prefix sum of the array A [ ] ; Traverse the array A [ ] ; Stores the maximum prefix sum of the array B [ ] ; Traverse the array B [ ] ; Driver code',
'code': 'def maxPresum ( a , b ) : NEW_LINE INDENT X = max ( a [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( a ) ) : NEW_LINE INDENT a [ i ] += a [ i - 1 ] NEW_LINE X = max ( X , a [ i ] ) NEW_LINE DEDENT Y = max ( b [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( b ) ) : NEW_LINE INDENT b [ i ] += b [ i - 1 ] NEW_LINE Y = max ( Y , b [ i ] ) NEW_LINE DEDENT return X + Y NEW_LINE DEDENT A = [ 2 , - 1 , 4 , - 5 ] NEW_LINE B = [ 4 , - 3 , 12 , 4 , - 3 ] NEW_LINE print ( maxPresum ( A , B ) ) NEW_LINE'}
```
Note that the data has undergone some tokenization, hence the additional whitespace and the use of `NEW_LINE` instead of `\n`, `INDENT` instead of `\t`, and `DEDENT` to cancel an indentation level.
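To turn such a sequence back into runnable-looking Python, the structural tokens have to be undone. A minimal detokenizer sketch of our own; it handles only `NEW_LINE`/`INDENT`/`DEDENT` at segment starts and does not repair the extra spaces between tokens:

```python
def detokenize(tokenized: str, indent: str = "    ") -> str:
    """Rebuild line breaks and indentation from NEW_LINE/INDENT/DEDENT tokens."""
    depth, lines = 0, []
    for segment in tokenized.split("NEW_LINE"):
        segment = segment.strip()
        while segment.startswith("INDENT"):
            depth += 1
            segment = segment[len("INDENT"):].strip()
        while segment.startswith("DEDENT"):
            depth -= 1
            segment = segment[len("DEDENT"):].strip()
        if segment:
            lines.append(indent * depth + segment)
    return "\n".join(lines)

print(detokenize("def f ( x ) : NEW_LINE INDENT return x + 1 NEW_LINE DEDENT"))
```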
## Data Fields
* text: natural language description/comment
* code: code at snippet/program level
## Data Splits
Each subset has three splits: train, test and validation.
## Citation Information
```
@misc{zhu2022xlcost,
title = {XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence},
url = {https://arxiv.org/abs/2206.08474},
author = {Zhu, Ming and Jain, Aneesh and Suresh, Karthik and Ravindran, Roshan and Tipirneni, Sindhu and Reddy, Chandan K.},
year = {2022},
eprint={2206.08474},
archivePrefix={arXiv}
}
```
Dataset: `mozilla-foundation/common_voice_10_0` (author: mozilla-foundation; created 2022-07-22; last modified 2023-07-29; 17 likes; 126 downloads)
Citation: Ardila et al., "Common Voice: A Massively-Multilingual Speech Corpus", LREC 2020, pp. 4211-4215.

---
pretty_name: Common Voice Corpus 10.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language_bcp47:
- ab
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy-NL
- ga-IE
- gl
- gn
- ha
- hi
- hsb
- hu
- hy-AM
- ia
- id
- ig
- it
- ja
- ka
- kab
- kk
- kmr
- ky
- lg
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mt
- myv
- nan-tw
- ne-NP
- nl
- nn-NO
- or
- pa-IN
- pl
- pt
- rm-sursilv
- rm-vallader
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- sl
- sr
- sv-SE
- sw
- ta
- th
- tig
- tok
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- vot
- yue
- zh-CN
- zh-HK
- zh-TW
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- 10K<n<100K
ar:
- 100K<n<1M
as:
- 1K<n<10K
ast:
- n<1K
az:
- n<1K
ba:
- 100K<n<1M
bas:
- 1K<n<10K
be:
- 100K<n<1M
bg:
- 1K<n<10K
bn:
- 100K<n<1M
br:
- 10K<n<100K
ca:
- 1M<n<10M
ckb:
- 100K<n<1M
cnh:
- 1K<n<10K
cs:
- 10K<n<100K
cv:
- 10K<n<100K
cy:
- 100K<n<1M
da:
- 1K<n<10K
de:
- 100K<n<1M
dv:
- 10K<n<100K
el:
- 10K<n<100K
en:
- 1M<n<10M
eo:
- 1M<n<10M
es:
- 100K<n<1M
et:
- 10K<n<100K
eu:
- 100K<n<1M
fa:
- 100K<n<1M
fi:
- 10K<n<100K
fr:
- 100K<n<1M
fy-NL:
- 10K<n<100K
ga-IE:
- 1K<n<10K
gl:
- 10K<n<100K
gn:
- 1K<n<10K
ha:
- 1K<n<10K
hi:
- 10K<n<100K
hsb:
- 1K<n<10K
hu:
- 10K<n<100K
hy-AM:
- 1K<n<10K
ia:
- 10K<n<100K
id:
- 10K<n<100K
ig:
- 1K<n<10K
it:
- 100K<n<1M
ja:
- 10K<n<100K
ka:
- 1K<n<10K
kab:
- 100K<n<1M
kk:
- 1K<n<10K
kmr:
- 10K<n<100K
ky:
- 10K<n<100K
lg:
- 100K<n<1M
lt:
- 10K<n<100K
lv:
- 1K<n<10K
mdf:
- n<1K
mhr:
- 10K<n<100K
mk:
- n<1K
ml:
- 1K<n<10K
mn:
- 10K<n<100K
mr:
- 10K<n<100K
mt:
- 10K<n<100K
myv:
- 1K<n<10K
nan-tw:
- 10K<n<100K
ne-NP:
- n<1K
nl:
- 10K<n<100K
nn-NO:
- n<1K
or:
- 1K<n<10K
pa-IN:
- 1K<n<10K
pl:
- 100K<n<1M
pt:
- 100K<n<1M
rm-sursilv:
- 1K<n<10K
rm-vallader:
- 1K<n<10K
ro:
- 10K<n<100K
ru:
- 100K<n<1M
rw:
- 1M<n<10M
sah:
- 1K<n<10K
sat:
- n<1K
sc:
- n<1K
sk:
- 10K<n<100K
sl:
- 10K<n<100K
sr:
- 1K<n<10K
sv-SE:
- 10K<n<100K
sw:
- 100K<n<1M
ta:
- 100K<n<1M
th:
- 100K<n<1M
tig:
- n<1K
tok:
- 1K<n<10K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
ug:
- 10K<n<100K
uk:
- 10K<n<100K
ur:
- 100K<n<1M
uz:
- 100K<n<1M
vi:
- 10K<n<100K
vot:
- n<1K
yue:
- 10K<n<100K
zh-CN:
- 100K<n<1M
zh-HK:
- 100K<n<1M
zh-TW:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- automatic-speech-recognition
paperswithcode_id: common-voice
extra_gated_prompt: "By clicking on “Access repository” below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."
---
# Dataset Card for Common Voice Corpus 10.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 20817 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 15234 validated hours in 96 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Toki Pona, Turkish, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
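The `audio` column described above is decoded and resampled for you by `datasets`; conceptually, resampling maps each output sample position back into the source signal. A toy linear-interpolation sketch, purely for illustration (production resamplers are band-limited, unlike this):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampling, e.g. 48 kHz -> 16 kHz."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate      # fractional position in the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

print(resample_linear([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], src_rate=6, dst_rate=3))  # [0.0, 2.0, 4.0]
```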
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers and received downvotes, indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train portions all contain data that has been reviewed and deemed of high quality, then split into dev, test and train sets.
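As a rough illustration of how the vote counts relate to these portions, a clip could be bucketed as sketched below. Note that the two-vote threshold is an illustrative assumption, not Common Voice's exact criterion:

```python
def bucket_clip(up_votes: int, down_votes: int) -> str:
    """Assign a clip to a review bucket based on reviewer votes.

    NOTE: the two-vote threshold is an assumption for illustration,
    not the exact rule used by Common Voice.
    """
    if up_votes == 0 and down_votes == 0:
        return "other"        # not yet reviewed
    if up_votes >= 2 and up_votes > down_votes:
        return "validated"    # reviewers agree the clip is of high quality
    if down_votes >= 2 and down_votes > up_votes:
        return "invalidated"  # reviewers agree the clip is of low quality
    return "other"            # not enough agreement yet

print(bucket_clip(2, 0))  # validated
print(bucket_clip(0, 2))  # invalidated
print(bucket_clip(0, 0))  # other
```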
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_10_0", "en", use_auth_token=True)
def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]
    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]
    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."
    batch["sentence"] = transcription
    return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
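The same normalization rules can be sanity-checked on plain strings without downloading the corpus; the helper below replicates the quotation-stripping and punctuation logic from the snippet above:

```python
def normalize(transcription: str) -> str:
    """Replicates the quotation-stripping and punctuation rules above."""
    if transcription.startswith('"') and transcription.endswith('"'):
        # remove trailing quotation marks
        transcription = transcription[1:-1]
    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."
    return transcription

assert normalize('"the cat sat on the mat."') == "the cat sat on the mat."
assert normalize("the cat sat on the mat") == "the cat sat on the mat."
assert normalize("did the cat sit?") == "did the cat sit?"
print("all checks passed")
```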
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| 12,053 | [
[
-0.040618896484375,
-0.053497314453125,
0.011077880859375,
0.031951904296875,
-0.02008056640625,
0.0032176971435546875,
-0.043304443359375,
-0.01557159423828125,
0.033416748046875,
0.03961181640625,
-0.056427001953125,
-0.074462890625,
-0.033538818359375,
0.... |
id: joelniklaus/legal_case_document_summarization | last modified: 2023-02-02T23:52:54.000Z | tags: region:us | author: joelniklaus | likes: 9 | downloads: 126 | created: 2022-12-30T20:54:10

# Dataset Card for LegalCaseDocumentSummarization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/Law-AI/summarization)
- **Repository:** [Zenodo](https://zenodo.org/record/7152317#.Y69PkeKZODW)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
| 2,596 | [
[
-0.02587890625,
-0.03277587890625,
0.0148773193359375,
0.01375579833984375,
-0.0325927734375,
0.0185394287109375,
-0.0279998779296875,
-0.019561767578125,
0.03814697265625,
0.05950927734375,
-0.048919677734375,
-0.09112548828125,
-0.050079345703125,
-0.00302... |
id: heegyu/news-category-balanced-top10 | last modified: 2023-02-13T02:56:31.000Z | tags: license:cc-by-4.0, region:us | author: heegyu | likes: 1 | downloads: 126 | created: 2023-02-13T02:45:28

---
license: cc-by-4.0
---
### Top10 sampled news category dataset
randomly sampled news data
original dataset: https://www.kaggle.com/datasets/rmisra/news-category-dataset
### Value Counts per Category
```
ENTERTAINMENT 10000
POLITICS 10000
WELLNESS 10000
TRAVEL 9900
STYLE & BEAUTY 9814
PARENTING 8791
HEALTHY LIVING 6694
QUEER VOICES 6347
FOOD & DRINK 6340
BUSINESS 5992
```
id: wangrui6/Zhihu-KOL | last modified: 2023-04-23T13:26:03.000Z | tags: task_categories:question-answering, language:zh, region:us | author: wangrui6 | likes: 95 | downloads: 126 | created: 2023-02-25T00:21:29

---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 2295601241
num_examples: 1006218
download_size: 1501204472
dataset_size: 2295601241
task_categories:
- question-answering
language:
- zh
---
# Dataset Card for "Zhihu-KOL"
Zhihu data for training Open Assistant.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
id: TrainingDataPro/anti-spoofing_replay | last modified: 2023-09-14T16:49:15.000Z | tags: task_categories:video-classification, language:en, license:cc-by-nc-nd-4.0, finance, legal, code | author: TrainingDataPro | likes: 1 | downloads: 126 | created: 2023-04-28T12:15:43
description: The dataset consists of 40,000 videos and selfies with unique people. 15,000 attack replays from 4,000 unique devices. 10,000 attacks with A4 printouts and 10,000 attacks with cut-out printouts.
citation: @InProceedings{huggingface:dataset, title = {anti-spoofing_replay}, author = {TrainingDataPro}, year = {2023}}

---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- finance
- legal
- code
dataset_info:
features:
- name: live_video_id
dtype: string
- name: phone
dtype: string
- name: video_file
dtype: string
- name: phone_video_playback
dtype: string
- name: worker_id
dtype: string
splits:
- name: train
num_bytes: 5063
num_examples: 30
download_size: 735628032
dataset_size: 5063
---
# Anti-Spoofing dataset: replay
The dataset consists of 40,000 videos and selfies with unique people. 15,000 attack replays from 4,000 unique devices. 10,000 attacks with A4 printouts and 10,000 attacks with cut-out printouts.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=anti-spoofing_replay) to discuss your requirements, learn about the price and buy the dataset.
# File with the extension .csv
includes the following information for each media file:
- **live_video_id**: the unique identifier of the "Antispoofing Live" video
- **phone**: the device used to capture the replay video,
- **link**: the URL to access the replay video,
- **phone_video_playback**: the device used to play the "Antispoofing Live" video,
- **worker_id**: the identifier of the person who provided the media file,
# Folder "img" with media files
- containing all the photos and videos
- which correspond to the data in the .csv file
**How it works**: *go to the first folder and you will make sure that it contains media files taken by a person whose parameters are specified in the first line of the .csv file.*
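A minimal sketch of pairing the `.csv` rows with their media files is shown below. The file name `metadata.csv` and the sample values are assumptions; the column names follow the feature list above:

```python
import csv
import io

# In practice this would be: open("metadata.csv", newline="").
# The rows below are made-up sample values for illustration only.
sample = io.StringIO(
    "live_video_id,phone,video_file,phone_video_playback,worker_id\n"
    "vid_001,iPhone 12,img/vid_001.mp4,Samsung Galaxy S21,worker_17\n"
    "vid_002,Pixel 6,img/vid_002.mp4,iPhone 13,worker_17\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    print(row["worker_id"], "->", row["video_file"])

# Group attack clips by the worker who provided them
by_worker = {}
for row in rows:
    by_worker.setdefault(row["worker_id"], []).append(row["video_file"])
print(by_worker["worker_17"])  # two clips from the same worker
```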
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=anti-spoofing_replay) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
id: talgatzh/xsum-kk3 | last modified: 2023-11-02T07:37:59.000Z | tags: task_categories:summarization, task_ids:news-articles-summarization, annotations_creators:found, language_creators:found, multilinguality:monolingual, size_categories:100K<n<1M, source_datasets:xsum, license:unknown, arxiv:1808.08745, region:us | author: talgatzh | likes: 0 | downloads: 126 | created: 2023-05-29T04:09:52
description: Extreme Summarization (XSum) Dataset. There are three features: document (input news article), summary (one sentence summary of the article), and id (BBC ID of the article).
citation: @article{Narayan2018DontGM, title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization}, author={Shashi Narayan and Shay B. Cohen and Mirella Lapata}, journal={ArXiv}, year={2018}, volume={abs/1808.08745}}

---
annotations_creators:
- found
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Extreme Summarization (XSum)
paperswithcode_id: xsum
size_categories:
- 100K<n<1M
source_datasets:
- xsum
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 139159410
num_examples: 5
download_size: 139159410
dataset_size: 139159410
---
# Dataset Card for "xsum"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/EdinburghNLP/XSum
- **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
- **Point of Contact:** [Shashi Narayan](mailto:shashi.narayan@ed.ac.uk)
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
### Dataset Summary
Extreme Summarization (XSum) Dataset.
There are three features:
- document: Input news article.
- summary: One sentence summary of the article.
- id: BBC ID of the article.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
An example of 'validation' looks as follows.
```
{
"document": "some-body",
"id": "29750031",
"summary": "some-sentence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045| 11332|11334|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
id: FredZhang7/toxi-text-3M | last modified: 2023-07-20T21:33:29.000Z | tags: task_categories:text-classification, task_categories:token-classification, task_categories:zero-shot-classification, size_categories:1M<n<10M, language:ar, language:es, language:pa, language:th, language:et, language:fr, language:fi, language:hu, language:lt, lan... (truncated) | author: FredZhang7 | likes: 5 | downloads: 126 | created: 2023-06-28T23:28:34

---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- zero-shot-classification
size_categories:
- 1M<n<10M
language:
- ar
- es
- pa
- th
- et
- fr
- fi
- hu
- lt
- ur
- so
- pl
- el
- mr
- sk
- gu
- he
- af
- te
- ro
- lv
- sv
- ne
- kn
- it
- mk
- cs
- en
- de
- da
- ta
- bn
- pt
- sq
- tl
- uk
- bg
- ca
- sw
- hi
- zh
- ja
- hr
- ru
- vi
- id
- sl
- cy
- ko
- nl
- ml
- tr
- fa
- 'no'
- multilingual
tags:
- nlp
- moderation
---
[A demo for a model finetuned on this and other datasets](https://huggingface.co/spaces/aivance/one-for-all-toxicity-v3)
This is a large multilingual toxicity dataset with 3M rows of text data from 55 natural languages, all of which are written/sent by humans, not machine translation models.
The preprocessed training data alone consists of 2,880,667 rows of comments, tweets, and messages. Among these rows, 416,529 are classified as toxic, while the remaining 2,464,138 are considered neutral. Below is a table to illustrate the data composition:
| | Toxic | Neutral | Total |
|-------|----------|----------|----------|
| [multilingual-train-deduplicated.csv](./train/multilingual-train-deduplicated.csv) | 416,529 | 2,464,138 | 2,880,667 |
| [mulilingual-validation(new).csv](./validation/mulilingual-validation(new).csv) | 10,613 | 19,028 | 29,641 |
| [multilingual-test.csv](./test/multilingual-test.csv) | 14,410 | 49,402 | 63,812 |
Each CSV file has three columns: `text`, `is_toxic`, and `lang`.
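With those three columns, the per-split composition in the table above can be reproduced in a few lines. The sketch below runs on an in-memory sample (the rows are invented, and it assumes the `is_toxic` column holds 0/1 flags); pointing the reader at one of the CSV files works the same way:

```python
import csv
import io
from collections import Counter

# Stand-in for e.g. open("train/multilingual-train-deduplicated.csv", newline="");
# the rows below are made up for illustration.
sample = io.StringIO(
    "text,is_toxic,lang\n"
    "have a nice day,0,en\n"
    "<some toxic comment>,1,en\n"
    "bonne journée,0,fr\n"
)

# Count toxic vs. neutral rows (csv yields the flags as strings)
counts = Counter(row["is_toxic"] for row in csv.DictReader(sample))
print("toxic:", counts["1"], "neutral:", counts["0"])
```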
Supported types of toxicity:
- Identity Hate/Homophobia
- Misogyny
- Violent Extremism
- Hate Speech
- Offensive Insults
- Sexting
- Obscene
- Threats
- Harassment
- Racism
- Trolling
- Doxing
- Others
Supported languages:
- Afrikaans
- Albanian
- Arabic
- Bengali
- Bulgarian
- Catalan
- Chinese (Simplified)
- Chinese (Traditional)
- Croatian
- Czech
- Danish
- Dutch
- English
- Estonian
- Finnish
- French
- German
- Greek
- Gujarati
- Hebrew
- Hindi
- Hungarian
- Indonesian
- Italian
- Japanese
- Kannada
- Korean
- Latvian
- Lithuanian
- Macedonian
- Malayalam
- Marathi
- Nepali
- Norwegian
- Persian
- Polish
- Portuguese
- Punjabi
- Romanian
- Russian
- Slovak
- Slovenian
- Somali
- Spanish
- Swahili
- Swedish
- Tagalog
- Tamil
- Telugu
- Thai
- Turkish
- Ukrainian
- Urdu
- Vietnamese
- Welsh
<br>
### Original Source?
Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but completely forgot the original source of these datasets...
All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google search. I even fetched 20K+ tweets using the Twitter API.
Recently, I came across 6 datasets, so I remembered to credit them below.
Known datasets:
- tomekkorbak/pile-toxicity-balanced2 (HuggingFace)
- datasets/thai_toxicity_tweet (HuggingFace)
- datasets/ethos (HuggingFace)
- inspection-ai/japanese-toxic-dataset (GitHub)
- mathigatti/sexting-dataset (GitHub)
- omar-sharif03/BAD-Bangla-Aggressive-Text-Dataset (GitHub)
I manually collected and wrote 100 rows of data.
<br>
### Limitations
Limitations include:
- All labels were rounded to the nearest integer. If a text was classified as 46%-54% toxic, the text itself might not be noticeably toxic or neutral.
- There were disagreements among moderators on some labels, due to ambiguity and lack of context.
- When there are only URLs, emojis, or anything else unrecognizable as natural language in the "text" column, the corresponding "lang" is "unknown".
Have fun modelling!
id: SaffalPoosh/deepFashion-with-masks | last modified: 2023-07-06T12:21:40.000Z | tags: license:apache-2.0, code | author: SaffalPoosh | likes: 0 | downloads: 126 | created: 2023-07-02T12:20:16

---
license: apache-2.0
tags:
- code
pretty_name: fashion clothes segmentation
dataset_info:
features:
- name: images
dtype: image
- name: gender
dtype: string
- name: pose
dtype: string
- name: cloth_type
dtype: string
- name: pid
dtype: string
- name: caption
dtype: string
- name: mask
dtype: image
- name: mask_overlay
dtype: image
splits:
- name: train
num_bytes: 1821511821.448
num_examples: 40658
download_size: 1449380618
dataset_size: 1821511821.448
---
# Dataset
This is the DeepFashion2 dataset in raw form with annotations; for the original dataset repository, see `https://github.com/switchablenorms/DeepFashion2`.
This dataset is an extracted version of the original DeepFashion2 dataset and can be used for training a **ControlNet** model.
id: andersonbcdefg/math | last modified: 2023-07-21T01:39:49.000Z | tags: region:us | author: andersonbcdefg | likes: 5 | downloads: 126 | created: 2023-07-21T01:39:10

---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
splits:
- name: train
num_bytes: 75291197
num_examples: 50000
download_size: 35174383
dataset_size: 75291197
---
# Dataset Card for "math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
id: alfredplpl/simple-zundamon | last modified: 2023-10-21T16:10:17.000Z | tags: language:ja, license:other | author: alfredplpl | likes: 1 | downloads: 126 | created: 2023-10-21T15:16:58

---
license: other
license_name: view-read-more
license_link: https://zunko.jp/guideline.html
language:
- ja
---
# Simple Zundamon Dataset

## Introduction
This is a simple dataset packed with character settings for Zundamon.
The author compiled it from information researched on the internet and from data received from the character's operators.
Please use it for sanity checks when building a character LLM.
Even for sanity checks, however, read the license carefully whenever possible.
For any other use, read the license carefully.
## Formats
- LLM-jp: [zmnjp.jsonl](zmnjp.jsonl)
- ChatGPT: [zmn.jsonl](zmn.jsonl)
## License
- [(ず・ω・きょ)](https://zunko.jp/guideline.html)
id: consumer-finance-complaints | last modified: 2023-01-25T14:28:37.000Z | tags: task_categories:text-classification, task_ids:topic-classification, annotations_creators:crowdsourced, language_creators:crowdsourced, multilinguality:monolingual, size_categories:1M<n<10M, source_datasets:original, language:en, license:cc0-1.0, region:us | likes: 10 | downloads: 125 | created: 2022-03-02T23:29:22

---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: consumer-finance-complaints
dataset_info:
features:
- name: Date Received
dtype: timestamp[s]
- name: Product
dtype:
class_label:
names:
'0': Credit reporting, credit repair services, or other personal consumer
reports
'1': Debt collection
'2': Mortgage
'3': Credit card or prepaid card
'4': Checking or savings account
'5': Credit reporting
'6': Student loan
'7': Money transfer, virtual currency, or money service
'8': Credit card
'9': Vehicle loan or lease
'10': Bank account or service
'11': Payday loan, title loan, or personal loan
'12': Consumer Loan
'13': Payday loan
'14': Money transfers
'15': Prepaid card
'16': Other financial service
'17': Virtual currency
- name: Sub Product
dtype:
class_label:
names:
'0': Credit reporting
'1': General-purpose credit card or charge card
'2': Checking account
'3': Other debt
'4': Second mortgage
'5': Conventional home mortgage
'6': I do not know
'7': Credit card debt
'8': Medical debt
'9': Federal student loan servicing
'10': FHA mortgage
'11': Conventional fixed mortgage
'12': Loan
'13': Other (i.e. phone, health club, etc.)
'14': Store credit card
'15': Installment loan
'16': Credit card
'17': Medical
'18': Mobile or digital wallet
'19': Private student loan
'20': Non-federal student loan
'21': Domestic (US) money transfer
'22': VA mortgage
'23': Vehicle loan
'24': Auto debt
'25': Payday loan
'26': Conventional adjustable mortgage (ARM)
'27': Other personal consumer report
'28': Payday loan debt
'29': Savings account
'30': Virtual currency
'31': Other bank product/service
'32': Other type of mortgage
'33': Other banking product or service
'34': Other mortgage
'35': International money transfer
'36': Lease
'37': General-purpose prepaid card
'38': Home equity loan or line of credit (HELOC)
'39': Government benefit card
'40': Mortgage debt
'41': Personal line of credit
'42': Home equity loan or line of credit
'43': Federal student loan debt
'44': Private student loan debt
'45': Credit repair services
'46': Title loan
'47': Auto
'48': Vehicle lease
'49': Mortgage
'50': Reverse mortgage
'51': General purpose card
'52': CD (Certificate of Deposit)
'53': Federal student loan
'54': Payroll card
'55': Debt settlement
'56': Check cashing service
'57': Traveler's check or cashier's check
'58': Gift card
'59': (CD) Certificate of deposit
'60': Money order
'61': Foreign currency exchange
'62': Refund anticipation check
'63': Gift or merchant card
'64': Cashing a check without an account
'65': ID prepaid card
'66': Mobile wallet
'67': Government benefit payment card
'68': Pawn loan
'69': Other special purpose card
'70': Check cashing
'71': Credit repair
'72': Traveler’s/Cashier’s checks
'73': Transit card
'74': Student prepaid card
'75': Electronic Benefit Transfer / EBT card
'76': ''
- name: Issue
dtype: string
- name: Sub Issue
dtype: string
- name: Complaint Text
dtype: string
- name: Company Public Response
dtype: string
- name: Company
dtype: string
- name: State
dtype: string
- name: Zip Code
dtype: string
- name: Tags
dtype:
class_label:
names:
'0': Servicemember
'1': Older American
'2': Older American, Servicemember
'3': ''
- name: Consumer Consent Provided
dtype: string
- name: Submitted via
dtype: string
- name: Date Sent To Company
dtype: string
- name: Company Response To Consumer
dtype: string
- name: Timely Response
dtype: string
- name: Consumer Disputed
dtype: string
- name: Complaint ID
dtype: string
splits:
- name: train
num_bytes: 1605177353
num_examples: 2455765
download_size: 404187716
dataset_size: 1605177353
---
# Dataset Card for Consumer Finance Complaints
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.consumerfinance.gov/data-research/consumer-complaints/
- **Repository:**
https://github.com/cfpb/consumerfinance.gov
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Consumer Complaint Database is a collection of complaints about consumer financial products and services that we sent to companies for response. Complaints are published after the company responds, confirming a commercial relationship with the consumer, or after 15 days, whichever comes first. Complaints referred to other regulators, such as complaints about depository institutions with less than $10 billion in assets, are not published in the Consumer Complaint Database. The database generally updates daily.
Complaints can give us insights into problems people are experiencing in the marketplace and help us regulate consumer financial products and services under existing federal consumer financial laws, enforce those laws judiciously, and educate and empower consumers to make informed financial decisions. We also report on complaint trends annually in Consumer Response’s Annual Report to Congress.
### Supported Tasks and Leaderboards
Text Classification Tasks
| Task | Label Name | Description | SOTA |
| ----------- | ----------- |----------- | ----------- |
| Text Classification | Product | Predict the related product of a complaint | N/A |
| Text Classification | Sub-product | Predict the related sub-product of a complaint | N/A |
| Text Classification | Tags | Predict whether a complaint was made by an older American or a servicemember | N/A |
### Languages
English
## Dataset Structure
### Data Instances
This dataset is a point-in-time extract of the database; the source database grows in size every day.
An example of 'train' looks as follows.
```
{
"Complaint ID": "4511031",
"Product": "Credit reporting, credit repair services, or other personal consumer reports",
"Sub Issue": "Credit inquiries on your report that you don't recognize",
"Consumer Disputed": "N/A",
"Sub Product": "Credit reporting",
"State": "TX",
"Tags": "Older American, Servicemember",
"Company Public Response": "",
"Zip Code": "75202",
"Issue": "Improper use of your report",
"Submitted via": "Web",
"Company Response To Consumer": "Closed with explanation",
"Complaint Text": "I am XXXX XXXX and I am submitting this complaint myself and there is no third party involved. Despite the multiple previous written requests, the unverified inquiries listed below still remain on my credit report in violation of Federal Law. The Equifax Credit Bureau failed to comply with Fair Credit Reporting Act, XXXX XXXX sections XXXX within the time set forth by law and continued reporting of erroneous information which now, given all my attempts to address it directly with the creditor, as willful negligence and non-compliance with federal statutes. PLEASE REMOVE THE FOLLOWING INQUIRIES COMPLETELY FROM MY CREDIT REPORT : XXXX CARD-Date of inquiry XX/XX/XXXX XXXX CARD-Date of inquiry XX/XX/XXXX",
"Date Received": "07-02-2021",
"Company": "EQUIFAX, INC.",
"Consumer Consent Provided": "Consent not provided",
"Timely Response": "Yes",
"Date Sent To Company": "2021-07-02"
}
```
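Note that the example record above uses two different date formats: `Date Received` is `07-02-2021` (month-day-year) while `Date Sent To Company` is `2021-07-02` (ISO). A small illustrative helper — not part of any official CFPB tooling — can normalize both:

```python
from datetime import date, datetime

def parse_complaint_date(value: str) -> date:
    """Parse either of the two date formats seen in the example record."""
    for fmt in ("%m-%d-%Y", "%Y-%m-%d"):  # e.g. "07-02-2021" and "2021-07-02"
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {value!r}")

received = parse_complaint_date("07-02-2021")  # Date Received
sent = parse_complaint_date("2021-07-02")      # Date Sent To Company
```

Both values in the example parse to the same calendar day, 2021-07-02.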
### Data Fields
| Field name | Description | Data type | Notes |
| ----------- | ----------- |----------- | ----------- |
| Date received | The date the CFPB received the complaint | date & time | |
| Product | The type of product the consumer identified in the complaint | plain text | This field is a categorical variable. |
| Sub-product | The type of sub-product the consumer identified in the complaint | plain text | This field is a categorical variable. Not all Products have Sub-products. |
| Issue | The issue the consumer identified in the complaint | plain text | This field is a categorical variable. Possible values are dependent on Product. |
| Sub-issue | The sub-issue the consumer identified in the complaint | plain text | This field is a categorical variable. Possible values are dependent on product and issue. Not all Issues have corresponding Sub-issues. |
| Consumer complaint narrative | Consumer complaint narrative is the consumer-submitted description of "what happened" from the complaint. Consumers must opt-in to share their narrative. We will not publish the narrative unless the consumer consents, and consumers can opt-out at any time. The CFPB takes reasonable steps to scrub personal information from each complaint that could be used to identify the consumer. | plain text | Consumers' descriptions of what happened are included if consumers consent to publishing the description and after we take steps to remove personal information. |
| Company public response | The company's optional, public-facing response to a consumer's complaint. Companies can choose to select a response from a pre-set list of options that will be posted on the public database. For example, "Company believes complaint is the result of an isolated error." | plain text | Companies' public-facing responses to complaints are included if companies choose to publish one. Companies may select a public response from a set list of options as soon as they respond to the complaint, but no later than 180 days after the complaint was sent to the company for response. |
| Company | The complaint is about this company | plain text | This field is a categorical variable. |
| State | The state of the mailing address provided by the consumer | plain text | This field is a categorical variable. |
| ZIP code | The mailing ZIP code provided by the consumer | plain text | This field may: i) include the first five digits of a ZIP code; ii) include the first three digits of a ZIP code (if the consumer consented to publication of their complaint narrative); or iii) be blank (if ZIP codes have been submitted with non-numeric values, if there are fewer than 20,000 people in a given ZIP code, or if the complaint has an address outside of the United States). |
| Tags | Data that supports easier searching and sorting of complaints submitted by or on behalf of consumers. | plain text | For example, complaints where the submitter reports the age of the consumer as 62 years or older are tagged ‘Older American.’ Complaints submitted by or on behalf of a servicemember or the spouse or dependent of a servicemember are tagged ‘Servicemember.’ Servicemember includes anyone who is active duty, National Guard, or Reservist, as well as anyone who previously served and is a Veteran or retiree. |
| Consumer consent provided? | Identifies whether the consumer opted in to publish their complaint narrative. We do not publish the narrative unless the consumer consents and consumers can opt-out at any time. | plain text | This field shows whether a consumer provided consent to publish their complaint narrative |
| Submitted via | How the complaint was submitted to the CFPB | plain text | This field is a categorical variable. |
| Date sent to company | The date the CFPB sent the complaint to the company | date & time | |
| Company response to consumer | This is how the company responded. For example, "Closed with explanation." | plain text | This field is a categorical variable. |
| Timely response? | Whether the company gave a timely response | plain text | yes/no |
| Consumer disputed? | Whether the consumer disputed the company’s response | plain text | YES/ NO/ N/A: The Bureau discontinued the consumer dispute option on April 24, 2017. |
| Complaint ID | The unique identification number for a complaint | number | |
### Data Splits
This dataset contains only a train split; it can be further divided into train, validation, and test subsets with the `datasets` library.
## Dataset Creation
### Curation Rationale
Open sourcing customer complaints
### Source Data
https://cfpb.github.io/api/ccdb/
#### Initial Data Collection and Normalization
This database is maintained by the Consumer Financial Protection Bureau
#### Who are the source language producers?
Consumers in the United States who submit complaints to the CFPB, writing in English.
### Annotations
#### Annotation process
Complaints are submitted by consumers directly to the CFPB.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
All personally identifiable information (PII) has been anonymized.
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
This database is not a statistical sample of consumers’ experiences in the marketplace. Complaints are not necessarily representative of all consumers’ experiences and complaints do not constitute “information” for purposes of the Information Quality Act.
Complaint volume should be considered in the context of company size and/or market share. For example, companies with more customers may have more complaints than companies with fewer customers. We encourage you to pair complaint data with public and private data sets for additional context.
The Bureau publishes the consumer’s narrative description of his or her experience if the consumer opts to share it publicly and after the Bureau takes steps to remove personal information. We don’t verify all the allegations in complaint narratives. Unproven allegations in consumer narratives should be regarded as opinion, not fact. We do not adopt the views expressed and make no representation that consumers’ allegations are accurate, clear, complete, or unbiased in substance or presentation. Users should consider what conclusions may be fairly drawn from complaints alone.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
https://cfpb.github.io/api/ccdb/
### Licensing Information
Creative Commons Zero v1.0 Universal
### Citation Information
N/A
### Contributions
Thanks to [@kayvane1](https://github.com/kayvane1) for adding this dataset and to the [Consumer Financial Protection Bureau](https://cfpb.github.io/) for publishing it.
few_rel | lastModified: 2023-06-01T14:59:47.000Z | likes: 2 | downloads: 125 | created: 2022-03-02T23:29:22
tags: task_categories:other, annotations_creators:crowdsourced, annotations_creators:machine-generated, language_creators:found, multilinguality:monolingual, size_categories:10K<n<100K, size_categories:n<1K, source_datasets:original, language:en, license:mit, relation-extraction, ...
---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- n<1K
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: fewrel
pretty_name: Few-Shot Relation Classification Dataset
tags:
- relation-extraction
dataset_info:
- config_name: default
features:
- name: relation
dtype: string
- name: tokens
sequence: string
- name: head
struct:
- name: text
dtype: string
- name: type
dtype: string
- name: indices
sequence:
sequence: int64
- name: tail
struct:
- name: text
dtype: string
- name: type
dtype: string
- name: indices
sequence:
sequence: int64
- name: names
sequence: string
splits:
- name: train_wiki
num_bytes: 19923155
num_examples: 44800
- name: val_nyt
num_bytes: 1385642
num_examples: 2500
- name: val_pubmed
num_bytes: 488502
num_examples: 1000
- name: val_semeval
num_bytes: 2646249
num_examples: 8851
- name: val_wiki
num_bytes: 5147348
num_examples: 11200
- name: pubmed_unsupervised
num_bytes: 1117703
num_examples: 2500
download_size: 22674323
dataset_size: 30708599
- config_name: pid2name
features:
- name: relation
dtype: string
- name: names
sequence: string
splits:
- name: pid2name
num_bytes: 81607
num_examples: 744
download_size: 22674323
dataset_size: 81607
config_names:
- default
- pid2name
---
# Dataset Card for few_rel
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub Page](https://thunlp.github.io/)
- **Repository:** [GitHub](https://github.com/thunlp/FewRel)
- **Paper:** [FewRel](https://arxiv.org/abs/1810.10147), [FewRel 2.0](https://arxiv.org/abs/1910.07124)
- **Leaderboard:** [GitHub Leaderboard](https://thunlp.github.io/fewrel.html)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
FewRel is a large-scale few-shot relation extraction dataset, which contains more than one hundred relations and tens of thousands of annotated instances across different domains.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The dataset contains English text, as used by writers on Wikipedia, and crowdsourced English annotations.
## Dataset Structure
### Data Instances
An instance from `train_wiki` split:
```
{'head': {'indices': [[16]], 'text': 'tjq', 'type': 'Q1331049'}, 'names': ['place served by transport hub', 'territorial entity or entities served by this transport hub (airport, train station, etc.)'], 'relation': 'P931', 'tail': {'indices': [[13, 14]], 'text': 'tanjung pandan', 'type': 'Q3056359'}, 'tokens': ['Merpati', 'flight', '106', 'departed', 'Jakarta', '(', 'CGK', ')', 'on', 'a', 'domestic', 'flight', 'to', 'Tanjung', 'Pandan', '(', 'TJQ', ')', '.']}
```
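The `indices` field locates each entity mention in the token list. A minimal sketch — using the instance above as a literal rather than loading the dataset — shows how to recover the surface forms:

```python
instance = {
    "tokens": ["Merpati", "flight", "106", "departed", "Jakarta", "(", "CGK", ")",
               "on", "a", "domestic", "flight", "to", "Tanjung", "Pandan", "(",
               "TJQ", ")", "."],
    "head": {"text": "tjq", "indices": [[16]]},
    "tail": {"text": "tanjung pandan", "indices": [[13, 14]]},
    "relation": "P931",
}

def mention_texts(tokens, indices):
    # Each inner list is one mention, given as token positions.
    return [" ".join(tokens[i] for i in mention) for mention in indices]

head = mention_texts(instance["tokens"], instance["head"]["indices"])
tail = mention_texts(instance["tokens"], instance["tail"]["indices"])
```

Here `head` recovers `["TJQ"]` and `tail` recovers `["Tanjung Pandan"]`, matching the lower-cased `text` fields.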
### Data Fields
For `default`:
- `relation`: a `string` feature containing PID of the relation.
- `tokens`: a `list` of `string` features containing tokens for the text.
- `head`: a dictionary containing:
- `text`: a `string` feature representing the head entity.
- `type`: a `string` feature representing the type of the head entity.
- `indices`: a `list` containing `list` of token indices.
- `tail`: a dictionary containing:
- `text`: a `string` feature representing the tail entity.
- `type`: a `string` feature representing the type of the tail entity.
- `indices`: a `list` containing `list` of token indices.
- `names`: a `list` of `string` features containing relation names. For `pubmed_unsupervised` split, this is set to a `list` with an empty `string`. For `val_semeval` and `val_pubmed` split, this is set to a `list` with the `string` from the `relation` field.
### Data Splits
- `train_wiki`: 44,800
- `val_nyt`: 2,500
- `val_pubmed`: 1,000
- `val_semeval`: 8,851
- `val_wiki`: 11,200
- `pubmed_unsupervised`: 2,500
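FewRel is evaluated in N-way K-shot episodes sampled from these splits. A toy sampler — with placeholder relation IDs and sentences, not real FewRel data — sketches the standard 5-way 1-shot setting:

```python
import random

# Placeholder pool: 10 relations with 10 instances each.
pool = {f"P{i}": [f"sentence_{i}_{j}" for j in range(10)] for i in range(10)}

def sample_episode(pool, n_way=5, k_shot=1, n_query=1, seed=0):
    """Sample one N-way K-shot episode: support and query sets per relation."""
    rng = random.Random(seed)
    relations = rng.sample(sorted(pool), n_way)
    support, query = {}, {}
    for rel in relations:
        picks = rng.sample(pool[rel], k_shot + n_query)
        support[rel] = picks[:k_shot]
        query[rel] = picks[k_shot:]
    return support, query

support, query = sample_episode(pool)
```

With real data, `pool` would map each relation PID to its annotated instances.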
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
For FewRel:
Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong
For FewRel 2.0:
Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie
### Licensing Information
```
MIT License
Copyright (c) 2018 THUNLP
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@inproceedings{han-etal-2018-fewrel,
title = "{F}ew{R}el: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation",
author = "Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1514",
doi = "10.18653/v1/D18-1514",
pages = "4803--4809"
}
```
```
@inproceedings{gao-etal-2019-fewrel,
title = "{F}ew{R}el 2.0: Towards More Challenging Few-Shot Relation Classification",
author = "Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1649",
doi = "10.18653/v1/D19-1649",
pages = "6251--6256"
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
sofc_materials_articles | lastModified: 2023-03-09T10:44:46.000Z | likes: 6 | downloads: 125 | created: 2022-03-02T23:29:22
tags: task_categories:text-generation, task_categories:fill-mask, task_categories:token-classification, task_categories:text-classification, task_ids:named-entity-recognition, task_ids:slot-filling, task_ids:topic-classification, annotations_creators:expert-generated, language_creators:fo...
description: The SOFC-Exp corpus consists of 45 open-access scholarly articles annotated by domain experts. A corpus and an inter-annotator agreement study, presented in the accompanying paper, demonstrate the complexity of the suggested named entity recognition and slot-filling tasks as well as the high annotation quality.
---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- token-classification
- text-classification
task_ids:
- named-entity-recognition
- slot-filling
- topic-classification
pretty_name: SofcMaterialsArticles
dataset_info:
features:
- name: text
dtype: string
- name: sentence_offsets
sequence:
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: sentences
sequence: string
- name: sentence_labels
sequence: int64
- name: token_offsets
sequence:
- name: offsets
sequence:
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: tokens
sequence:
sequence: string
- name: entity_labels
sequence:
sequence:
class_label:
names:
'0': B-DEVICE
'1': B-EXPERIMENT
'2': B-MATERIAL
'3': B-VALUE
'4': I-DEVICE
'5': I-EXPERIMENT
'6': I-MATERIAL
'7': I-VALUE
'8': O
- name: slot_labels
sequence:
sequence:
class_label:
names:
'0': B-anode_material
'1': B-cathode_material
'2': B-conductivity
'3': B-current_density
'4': B-degradation_rate
'5': B-device
'6': B-electrolyte_material
'7': B-experiment_evoking_word
'8': B-fuel_used
'9': B-interlayer_material
'10': B-interconnect_material
'11': B-open_circuit_voltage
'12': B-power_density
'13': B-resistance
'14': B-support_material
'15': B-thickness
'16': B-time_of_operation
'17': B-voltage
'18': B-working_temperature
'19': I-anode_material
'20': I-cathode_material
'21': I-conductivity
'22': I-current_density
'23': I-degradation_rate
'24': I-device
'25': I-electrolyte_material
'26': I-experiment_evoking_word
'27': I-fuel_used
'28': I-interlayer_material
'29': I-interconnect_material
'30': I-open_circuit_voltage
'31': I-power_density
'32': I-resistance
'33': I-support_material
'34': I-thickness
'35': I-time_of_operation
'36': I-voltage
'37': I-working_temperature
'38': O
- name: links
sequence:
- name: relation_label
dtype:
class_label:
names:
'0': coreference
'1': experiment_variation
'2': same_experiment
'3': thickness
- name: start_span_id
dtype: int64
- name: end_span_id
dtype: int64
- name: slots
sequence:
- name: frame_participant_label
dtype:
class_label:
names:
'0': anode_material
'1': cathode_material
'2': current_density
'3': degradation_rate
'4': device
'5': electrolyte_material
'6': fuel_used
'7': interlayer_material
'8': open_circuit_voltage
'9': power_density
'10': resistance
'11': support_material
'12': time_of_operation
'13': voltage
'14': working_temperature
- name: slot_id
dtype: int64
- name: spans
sequence:
- name: span_id
dtype: int64
- name: entity_label
dtype:
class_label:
names:
'0': ''
'1': DEVICE
'2': MATERIAL
'3': VALUE
- name: sentence_id
dtype: int64
- name: experiment_mention_type
dtype:
class_label:
names:
'0': ''
'1': current_exp
'2': future_work
'3': general_info
'4': previous_work
- name: begin_char_offset
dtype: int64
- name: end_char_offset
dtype: int64
- name: experiments
sequence:
- name: experiment_id
dtype: int64
- name: span_id
dtype: int64
- name: slots
sequence:
- name: frame_participant_label
dtype:
class_label:
names:
'0': anode_material
'1': cathode_material
'2': current_density
'3': degradation_rate
'4': conductivity
'5': device
'6': electrolyte_material
'7': fuel_used
'8': interlayer_material
'9': open_circuit_voltage
'10': power_density
'11': resistance
'12': support_material
'13': time_of_operation
'14': voltage
'15': working_temperature
- name: slot_id
dtype: int64
splits:
- name: train
num_bytes: 7402373
num_examples: 26
- name: test
num_bytes: 2650700
num_examples: 11
- name: validation
num_bytes: 1993857
num_examples: 8
download_size: 3733137
dataset_size: 12046930
---
# Dataset Card for SofcMaterialsArticles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [boschresearch/sofc-exp_textmining_resources](https://github.com/boschresearch/sofc-exp_textmining_resources)
- **Repository:** [boschresearch/sofc-exp_textmining_resources](https://github.com/boschresearch/sofc-exp_textmining_resources)
- **Paper:** [The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain](https://arxiv.org/abs/2006.03039)
- **Leaderboard:**
- **Point of Contact:** [Annemarie Friedrich](annemarie.friedrich@de.bosch.com)
### Dataset Summary
> The SOFC-Exp corpus contains 45 scientific publications about solid oxide fuel cells (SOFCs), published between 2013 and 2019 as open-access articles all with a CC-BY license. The dataset was manually annotated by domain experts with the following information:
>
> * Mentions of relevant experiments have been marked using a graph structure corresponding to instances of an Experiment frame (similar to the ones used in FrameNet.) We assume that an Experiment frame is introduced to the discourse by mentions of words such as report, test or measure (also called the frame-evoking elements). The nodes corresponding to the respective tokens are the heads of the graphs representing the Experiment frame.
> * The Experiment frame related to SOFC-Experiments defines a set of 16 possible participant slots. Participants are annotated as dependents of links between the frame-evoking element and the participant node.
> * In addition, we provide coarse-grained entity/concept types for all frame participants, i.e, MATERIAL, VALUE or DEVICE. Note that this annotation has not been performed on the full texts but only on sentences containing information about relevant experiments, and a few sentences in addition. In the paper, we run experiments for both tasks only on the set of sentences marked as experiment-describing in the gold standard, which is admittedly a slightly simplified setting. Entity types are only partially annotated on other sentences. Slot filling could of course also be evaluated in a fully automatic setting with automatic experiment sentence detection as a first step.
### Supported Tasks and Leaderboards
- `topic-classification`: The dataset can be used to train a model for topic-classification, to identify sentences that mention SOFC-related experiments.
- `named-entity-recognition`: The dataset can be used to train a named entity recognition model to detect `MATERIAL`, `VALUE`, `DEVICE`, and `EXPERIMENT` entities.
- `slot-filling`: The slot-filling task is approached as fine-grained entity-typing-in-context, assuming that each sentence represents a single experiment frame. Sequence tagging architectures are utilized for tagging the tokens of each experiment-describing sentence with the set of slot types.
The paper experiments with BiLSTM architectures with `BERT`- and `SciBERT`- generated token embeddings, as well as with `BERT` and `SciBERT` directly for the modeling task. A simple CRF architecture is used as a baseline for sequence-tagging tasks. Implementations of the transformer-based architectures can be found in the `huggingface/transformers` library: [BERT](https://huggingface.co/bert-base-uncased), [SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased)
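Both the entity and slot labels use BIO encoding (see the label inventories in the metadata above). A small illustrative helper — not taken from the paper's code — collapses a BIO tag sequence into labeled spans:

```python
def bio_to_spans(tags):
    """Turn a BIO tag sequence into (label, start, end) spans; `end` is exclusive.

    Assumes well-formed BIO (no dangling I- tags without a preceding B-).
    """
    spans, label, start = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # trailing "O" flushes the last span
        if label is not None and (tag == "O" or tag.startswith("B-")):
            spans.append((label, start, i))
            label, start = None, None
        if tag.startswith("B-"):
            label, start = tag[2:], i
    return spans

tags = ["O", "B-MATERIAL", "I-MATERIAL", "O", "B-VALUE", "O"]
spans = bio_to_spans(tags)
```

For the example tags this yields one `MATERIAL` span covering tokens 1–2 and one `VALUE` span at token 4.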
### Languages
This corpus is in English.
## Dataset Structure
### Data Instances
As each example is a full text of an academic paper, plus annotations, a json formatted example is space-prohibitive for this README.
### Data Fields
- `text`: The full text of the paper
- `sentence_offsets`: Start and end character offsets for each sentence in the text.
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `sentences`: A sequence of the sentences in the text (using `sentence_offsets`)
- `sentence_labels`: Sequence of binary labels for whether a sentence contains information of interest.
- `token_offsets`: Sequence of sequences containing start and end character offsets for each token in each sentence in the text.
- `offsets`: a dictionary feature containing:
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `tokens`: Sequence of sequences containing the tokens for each sentence in the text.
- `feature`: a `string` feature.
- `entity_labels`: a dictionary feature containing:
- `feature`: a classification label, with possible values including `B-DEVICE`, `B-EXPERIMENT`, `B-MATERIAL`, `B-VALUE`, `I-DEVICE`.
- `slot_labels`: a dictionary feature containing:
- `feature`: a classification label, with possible values including `B-anode_material`, `B-cathode_material`, `B-conductivity`, `B-current_density`, `B-degradation_rate`.
- `links`: a dictionary feature containing:
- `relation_label`: a classification label, with possible values including `coreference`, `experiment_variation`, `same_experiment`, `thickness`.
- `start_span_id`: a `int64` feature.
- `end_span_id`: a `int64` feature.
- `slots`: a dictionary feature containing:
- `frame_participant_label`: a classification label, with possible values including `anode_material`, `cathode_material`, `current_density`, `degradation_rate`, `device`.
- `slot_id`: a `int64` feature.
- `spans`: a dictionary feature containing:
- `span_id`: a `int64` feature.
- `entity_label`: a classification label, with possible values including ``, `DEVICE`, `MATERIAL`, `VALUE`.
- `sentence_id`: a `int64` feature.
- `experiment_mention_type`: a classification label, with possible values including ``, `current_exp`, `future_work`, `general_info`, `previous_work`.
- `begin_char_offset`: a `int64` feature.
- `end_char_offset`: a `int64` feature.
- `experiments`: a dictionary feature containing:
- `experiment_id`: a `int64` feature.
- `span_id`: a `int64` feature.
- `slots`: a dictionary feature containing:
- `frame_participant_label`: a classification label, with possible values including `anode_material`, `cathode_material`, `current_density`, `degradation_rate`, `conductivity`.
- `slot_id`: a `int64` feature.
Very detailed information for each of the fields can be found in the [corpus file formats section](https://github.com/boschresearch/sofc-exp_textmining_resources#corpus-file-formats) of the associated dataset repo
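Since sentences and tokens are given as character offsets into `text`, slicing recovers the strings. The text and offsets below are illustrative, not taken from the corpus:

```python
text = "We measured the power density. The cell used a Ni-YSZ anode."

# Offsets in the corpus format: begin inclusive, end exclusive.
sentence_offsets = [
    {"begin_char_offset": 0, "end_char_offset": 30},
    {"begin_char_offset": 31, "end_char_offset": 60},
]

sentences = [
    text[o["begin_char_offset"]:o["end_char_offset"]] for o in sentence_offsets
]
```

The same slicing pattern applies to `token_offsets`, which nests one offset list per sentence.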
### Data Splits
This dataset consists of three splits:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Input Examples | 26 | 8 | 11 |
The authors propose using the training data in a 5-fold cross-validation setting for development and tuning, and finally applying the model(s) to the independent test set.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The corpus consists of 45 open-access scientific publications about SOFCs and related research, annotated by domain experts.
### Annotations
#### Annotation process
For manual annotation, the authors use the INCEpTION annotation tool (Klie et al., 2018).
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The manual annotations created for the SOFC-Exp corpus are licensed under a [Creative Commons Attribution 4.0 International License (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@misc{friedrich2020sofcexp,
title={The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain},
author={Annemarie Friedrich and Heike Adel and Federico Tomazic and Johannes Hingerl and Renou Benteau and Anika Maruscyk and Lukas Lange},
year={2020},
eprint={2006.03039},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset. | 14,815 | [
SocialGrep/reddit-crypto-aug-2021 | 2022-07-01T19:08:05.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | This corpus contains the complete data for the activity on seven major cryptocurrency subreddits for the entire month of August 2021. | null | 4 | 125 | 2022-03-02T23:29:22 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for reddit-crypto-aug-2021
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=crypto)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=crypto)
### Dataset Summary
This corpus contains the complete data for the activity on the following subreddits for the entire month of August 2021:
- /r/cryptocurrency
- /r/cryptocurrencyclassic
- /r/cryptocurrencyico
- /r/cryptomars
- /r/cryptomoon
- /r/cryptomoonshots
- /r/satoshistreetbets
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
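A minimal sketch of separating the two record types on the shared 'type' field once rows are loaded (the rows below are invented examples, not actual dataset records):

```python
# Toy rows mirroring the shared 'type' field; the real data ships posts and
# comments in separate files, but a merged stream splits the same way.
rows = [
    {"type": "post", "id": "abc123", "title": "BTC discussion", "score": 42},
    {"type": "comment", "id": "def456", "body": "to the moon", "sentiment": 0.8},
    {"type": "comment", "id": "ghi789", "body": "not convinced", "sentiment": -0.3},
]

posts = [r for r in rows if r["type"] == "post"]
comments = [r for r in rows if r["type"] == "comment"]

# Mean of the in-house sentiment scores over the comments.
avg_sentiment = round(sum(c["sentiment"] for c in comments) / len(comments), 2)
print(len(posts), len(comments), avg_sentiment)  # 1 2 0.25
```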
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | 3,946 | [
SocialGrep/reddit-wallstreetbets-aug-2021 | 2022-07-01T19:15:07.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | This corpus contains the complete data for the activity on /r/WallStreetBets for the entire month of August 2021. | null | 2 | 125 | 2022-03-02T23:29:22 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for reddit-wallstreetbets-aug-2021
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=wallstreetbets)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=dataset&utm_term=wallstreetbets)
### Dataset Summary
This corpus contains the complete data for the activity on subreddit /r/WallStreetBets for the entire month of August 2021.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
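As a sketch of exploratory analysis combining 'created_utc' and 'sentiment' (the timestamps and scores below are invented, and the scale of the in-house sentiment pipeline is not documented here):

```python
from collections import defaultdict
from datetime import datetime, timezone

# Invented comment rows with the fields documented above.
comments = [
    {"created_utc": 1627776000, "sentiment": 0.6},   # 2021-08-01 00:00 UTC
    {"created_utc": 1627790000, "sentiment": -0.2},  # 2021-08-01
    {"created_utc": 1627862400, "sentiment": 0.4},   # 2021-08-02
]

# Bucket sentiment scores by UTC calendar day, then average each bucket.
by_day = defaultdict(list)
for c in comments:
    day = datetime.fromtimestamp(c["created_utc"], tz=timezone.utc).date().isoformat()
    by_day[day].append(c["sentiment"])

daily_mean = {day: round(sum(v) / len(v), 2) for day, v in sorted(by_day.items())}
print(daily_mean)  # {'2021-08-01': 0.2, '2021-08-02': 0.4}
```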
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | 3,775 | [
biu-nlp/qa_srl2018 | 2022-10-19T06:16:06.000Z | [
"region:us"
] | biu-nlp | The dataset contains question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, What kind, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence.
This dataset, a.k.a "QASRL Bank", "QASRL-v2" or "QASRL-LS" (Large Scale), was constructed via crowdsourcing. | @inproceedings{fitzgerald2018large,
title={Large-Scale QA-SRL Parsing},
author={FitzGerald, Nicholas and Michael, Julian and He, Luheng and Zettlemoyer, Luke},
booktitle={Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={2051--2060},
year={2018}
} | 1 | 125 | 2022-03-02T23:29:22 | Entry not found | 15 | [
eugenesiow/Set14 | 2022-10-21T04:00:31.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:other",
"other-image-super-resolution",
"region:us"
] | eugenesiow | Set14 is an evaluation dataset with 14 RGB images for the image super resolution task. | @inproceedings{zeyde2010single,
title={On single image scale-up using sparse-representations},
author={Zeyde, Roman and Elad, Michael and Protter, Matan},
booktitle={International conference on curves and surfaces},
pages={711--730},
year={2010},
organization={Springer}
} | 0 | 125 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- other
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: Set14
tags:
- other-image-super-resolution
---
# Dataset Card for Set14
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://sites.google.com/site/romanzeyde/research-interests
- **Repository**: https://huggingface.co/datasets/eugenesiow/Set14
- **Paper**: http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
Set14 is an evaluation dataset with 14 RGB images for the image super resolution task. It was first used as the test set of the paper "On single image scale-up using sparse-representations" by [Zeyde et al. (2010)](http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf).
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Set14', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/Set14_HR/baboon.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/Set14_LR_x2/baboon.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` containing the path to the High Resolution (HR) `.png` image.
- `lr`: a `string` containing the path to the Low Resolution (LR) `.png` image.
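Super-resolution output on this dataset is typically scored with PSNR; a hedged sketch of the metric over raw pixel sequences (in practice the `hr` and `lr` paths would be opened with an image library and the LR image upscaled to HR size before comparison; the pixel values below are toy data):

```python
import math

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

hr_pixels = [10, 20, 30, 40]  # stand-in for flattened HR image data
sr_pixels = [12, 18, 33, 37]  # stand-in for a model's upscaled output
score = psnr(hr_pixels, sr_pixels)  # roughly 40 dB for these toy values
```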
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|14|
|bicubic_x3|14|
|bicubic_x4|14|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Zeyde et al.](http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/2010/CS/CS-2010-12.pdf)
### Licensing Information
Academic use only.
### Citation Information
```bibtex
@inproceedings{zeyde2010single,
title={On single image scale-up using sparse-representations},
author={Zeyde, Roman and Elad, Michael and Protter, Matan},
booktitle={International conference on curves and surfaces},
pages={711--730},
year={2010},
organization={Springer}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| 4,934 | [
iohadrubin/smcalflow | 2022-01-01T20:57:52.000Z | [
"region:us"
] | iohadrubin | 2 | 125 | 2022-03-02T23:29:22 | Entry not found | 15 | [
jamescalam/reddit-python | 2022-04-25T12:41:35.000Z | [
"region:us"
] | jamescalam | null | null | 2 | 125 | 2022-04-25T12:29:25 | # Python Subreddit
Dataset containing data scraped from the [Python subreddit](https://www.reddit.com/r/python). | 113 | [
BennoKrojer/ImageCoDe | 2022-05-13T21:26:08.000Z | [
"license:afl-3.0",
"arxiv:2203.15867",
"region:us"
] | BennoKrojer | null | null | 1 | 125 | 2022-05-05T21:50:13 | ---
license: afl-3.0
---
# Dataset Card for ImageCoDe
To get started quickly, load descriptions via:
```
from datasets import load_dataset
examples = load_dataset('BennoKrojer/ImageCoDe')
```
And download `image_sets.zip` for all images sets (each directory consisting of 10 images).
## Dataset Description
- **Homepage & Leaderboard:** https://mcgill-nlp.github.io/imagecode/
- **Repository:** https://github.com/McGill-NLP/imagecode
- **Paper:** https://arxiv.org/abs/2203.15867
- **Point of Contact:** benno DOT krojer ÄT gmail DOT com
### Dataset Summary
We introduce ImageCoDe, a vision-and-language benchmark that requires contextual language understanding in the form of pragmatics, temporality, long descriptions and visual nuances. The task: Given a detailed description, retrieve the target image among 10 minimally contrastive images. ImageCoDe contains 21K descriptions and 94K images. The images are primarily frames from video datasets.
## Dataset Structure
### Data Instances
An instance contains a description, the corresponding image set name, and the target index:
```
{"image_set": "video-storytelling-videowedding_de8dLXvgV-I-shot6_0",
"image_index": "8",
"description": "The flowers the woman in the teal strapless dress is carrying are completely obscured by the man in the black shirt's head. "}
```
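The benchmark is scored as top-1 retrieval accuracy over each set of 10 images; a sketch (the predictions below are made up, and note that `image_index` is stored as a string):

```python
# Invented (target, prediction) pairs; a real model would score the
# description against all 10 images in its set and pick the argmax.
examples = [
    {"image_index": "8", "prediction": 8},
    {"image_index": "3", "prediction": 3},
    {"image_index": "5", "prediction": 1},
]

correct = sum(int(ex["image_index"]) == ex["prediction"] for ex in examples)
accuracy = correct / len(examples)
print(round(accuracy, 3))  # 0.667
```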
### Data Splits
| Dataset Split | Number of Descriptions in Split |
| ------------- |----------------------------- |
| Train | 16,594 |
| Validation | 2,302 |
| Test | 2,306 |
## Dataset Creation
### Curation Rationale
The main goal of ImageCoDe is to highlight weaknesses of recent Vision-and-Language models regarding complex language and fine-grained visual representations. In addition, we found that the dataset offers plenty of pragmatic examples and is therefore suitable for studying pragmatics. | 1,931 | [
Francesco/road-traffic | 2023-03-30T09:12:18.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | 1 | 125 | 2023-03-30T09:11:50 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': road-traffic
'1': bicycles
'2': buses
'3': crosswalks
'4': fire hydrants
'5': motorcycles
'6': traffic lights
'7': vehicles
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: road-traffic
tags:
- rf100
---
# Dataset Card for road-traffic
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/road-traffic
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
road-traffic
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
    'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
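As a consistency check, each `area` in the example instance equals width × height of its COCO-format bbox `[x, y, width, height]`; the sketch below verifies this and converts the boxes to corner format (`coco_to_corners` is our helper name, not part of any dataset tooling):

```python
def coco_to_corners(bbox):
    """Convert COCO [x, y, width, height] to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Boxes and areas copied from the example instance above.
bboxes = [
    [302.0, 109.0, 73.0, 52.0],
    [810.0, 100.0, 57.0, 28.0],
    [160.0, 31.0, 248.0, 616.0],
    [741.0, 68.0, 202.0, 401.0],
]
areas = [3796, 1596, 152768, 81002]

for bbox, area in zip(bboxes, areas):
    assert bbox[2] * bbox[3] == area  # area == width * height

corners = [coco_to_corners(b) for b in bboxes]
print(corners[0])  # [302.0, 109.0, 375.0, 161.0]
```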
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/road-traffic
### Citation Information
```
@misc{ road-traffic,
title = { road traffic Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/road-traffic } },
url = { https://universe.roboflow.com/object-detection/road-traffic },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | 3,510 | [
open-llm-leaderboard/details | 2023-08-25T09:32:19.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 125 | 2023-06-28T09:31:04 | Entry not found | 15 | [
emotone_ar | 2023-01-25T14:29:56.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"license:unknown",
"region:us"
] | null | Dataset of 10065 tweets in Arabic for Emotion detection in Arabic text | @inbook{inbook,
author = {Al-Khatib, Amr and El-Beltagy, Samhaa},
year = {2018},
month = {01},
pages = {105-114},
title = {Emotional Tone Detection in Arabic Tweets: 18th International Conference, CICLing 2017, Budapest, Hungary, April 17–23, 2017, Revised Selected Papers, Part II},
isbn = {978-3-319-77115-1},
doi = {10.1007/978-3-319-77116-8_8}
} | 5 | 124 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Emotional Tone in Arabic
dataset_info:
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': anger
'2': joy
'3': sadness
'4': love
'5': sympathy
'6': surprise
'7': fear
splits:
- name: train
num_bytes: 1541746
num_examples: 10065
download_size: 1563138
dataset_size: 1541746
---
# Dataset Card for Emotional Tone in Arabic
## Table of Contents
- [Dataset Card for Emotional Tone in Arabic](#dataset-card-for-emotional-tone-in-arabic)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [|split|num examples|](#splitnum-examples)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Repository](https://github.com/AmrMehasseb/Emotional-Tone)
- **Paper:** [Emotional Tone Detection in Arabic Tweets](https://www.researchgate.net/publication/328164296_Emotional_Tone_Detection_in_Arabic_Tweets_18th_International_Conference_CICLing_2017_Budapest_Hungary_April_17-23_2017_Revised_Selected_Papers_Part_II)
- **Point of Contact:** [Amr Al-Khatib](https://github.com/AmrMehasseb)
### Dataset Summary
Dataset of 10065 tweets in Arabic for Emotion detection in Arabic text
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is based on Arabic.
## Dataset Structure
### Data Instances
example:
```
>>> {'label': 0, 'tweet': 'الاوليمبياد الجايه هكون لسه ف الكليه ..'}
```
### Data Fields
- "tweet": plain text tweet in Arabic
- "label": emotion class label
The dataset distribution and balance for each class looks like the following:
| Label | Label description | Count |
|-------|-------------------|-------|
|0 |none | 1550 |
|1 |anger | 1444 |
|2 |joy | 1281 |
|3 |sadness | 1256 |
|4 |love | 1220 |
|5 |sympathy | 1062 |
|6 |surprise | 1045 |
|7 |fear | 1207 |
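The balance above can be re-derived programmatically; this sketch uses the table's counts directly (with real data the `Counter` would be built from the `label` column after `load_dataset('emotone_ar')`):

```python
from collections import Counter

# Class counts copied from the table above.
counts = Counter({
    "none": 1550, "anger": 1444, "joy": 1281, "sadness": 1256,
    "love": 1220, "sympathy": 1062, "surprise": 1045, "fear": 1207,
})

total = sum(counts.values())
shares = {label: round(100 * n / total, 1) for label, n in counts.most_common()}

print(total)           # 10065, matching the dataset size
print(shares["none"])  # 15.4 (percent)
```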
### Data Splits
The dataset is not split.
| | train |
|----------|--------:|
| no split | 10,065 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inbook{inbook,
author = {Al-Khatib, Amr and El-Beltagy, Samhaa},
year = {2018},
month = {01},
pages = {105-114},
title = {Emotional Tone Detection in Arabic Tweets: 18th International Conference, CICLing 2017, Budapest, Hungary, April 17–23, 2017, Revised Selected Papers, Part II},
isbn = {978-3-319-77115-1},
doi = {10.1007/978-3-319-77116-8_8}
}
```
### Contributions
Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset. | 5,011 | [
GEM/OrangeSum | 2022-09-03T18:26:49.000Z | [
"task_categories:summarization",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:fr",
"license:other",
"region:us"
] | GEM | The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation, with 266M customers worldwide. Scraped pages cover almost a decade from Feb 2011 to Sep 2020. They belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unusual ("insolite" in French), and miscellaneous.
Each article featured a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, thus creating two summarization tasks: OrangeSum Title and OrangeSum Abstract. | @inproceedings{kamal-eddine-etal-2021-barthez,
title = "{BART}hez: a Skilled Pretrained {F}rench Sequence-to-Sequence Model",
author = "Kamal Eddine, Moussa and
Tixier, Antoine and
Vazirgiannis, Michalis",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.740",
pages = "9369--9390",
} | 0 | 124 | 2022-03-02T23:29:22 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- fr
license:
- other
multilinguality:
- unknown
pretty_name: OrangeSum
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids:
- unknown
---
# Dataset Card for GEM/OrangeSum
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/Tixierae/OrangeSum
- **Paper:** https://aclanthology.org/2021.emnlp-main.740
- **Leaderboard:** N/A
- **Point of Contact:** [Needs More Information]
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/OrangeSum).
### Dataset Summary
OrangeSum is a French summarization dataset inspired by XSum. It features two subtasks: abstract generation and title generation. The data was sourced from "Orange Actu" articles between 2011 and 2020.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/OrangeSum')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/OrangeSum).
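Because each article comes with both a professionally written title and abstract, one scraped record expands into two summarization pairs, one per subtask (OrangeSum Title and OrangeSum Abstract). A minimal sketch of that expansion; the field names `article`, `title`, and `abstract` are illustrative placeholders, not the actual GEM schema:

```python
def make_pairs(record):
    """Expand one scraped article into the two OrangeSum subtasks,
    returning a (document, target) pair for each."""
    return {
        "title_task": (record["article"], record["title"]),
        "abstract_task": (record["article"], record["abstract"]),
    }

# Hypothetical record with made-up French content.
record = {
    "article": "Le conseil municipal a voté le nouveau budget ce matin ...",
    "title": "Le budget municipal adopté",
    "abstract": "Le conseil municipal a adopté le budget après un long débat.",
}
pairs = make_pairs(record)
print(pairs["title_task"][1])  # the single-sentence title target
```

A model for the title subtask would then be trained on `title_task` pairs only, and likewise for abstracts.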
#### paper
[ACL Anthology](https://aclanthology.org/2021.emnlp-main.740)
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/Tixierae/OrangeSum)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2021.emnlp-main.740)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{kamal-eddine-etal-2021-barthez,
title = "{BART}hez: a Skilled Pretrained {F}rench Sequence-to-Sequence Model",
author = "Kamal Eddine, Moussa and
Tixier, Antoine and
Vazirgiannis, Michalis",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.740",
doi = "10.18653/v1/2021.emnlp-main.740",
pages = "9369--9390",
abstract = "Inductive transfer learning has taken the entire NLP field by storm, with models such as BERT and BART setting new state of the art on countless NLU tasks. However, most of the available models and research have been conducted for English. In this work, we introduce BARThez, the first large-scale pretrained seq2seq model for French. Being based on BART, BARThez is particularly well-suited for generative tasks. We evaluate BARThez on five discriminative tasks from the FLUE benchmark and two generative tasks from a novel summarization dataset, OrangeSum, that we created for this research. We show BARThez to be very competitive with state-of-the-art BERT-based French language models such as CamemBERT and FlauBERT. We also continue the pretraining of a multilingual BART on BARThez{'} corpus, and show our resulting model, mBARThez, to significantly boost BARThez{'} generative performance.",
}
```
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`French`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
other: Other license
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
### Credit
### Dataset Structure
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
Papers about abstractive summarization using seq2seq models:
- [Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond](https://aclanthology.org/K16-1028/)
- [Get To The Point: Summarization with Pointer-Generator Networks](https://aclanthology.org/P17-1099/)
- [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://aclanthology.org/2020.acl-main.703)
- [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://aclanthology.org/2021.emnlp-main.740/)
Papers about (pretrained) Transformers:
- [Attention is All you Need](https://papers.nips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)
- [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://aclanthology.org/N19-1423/)
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
No unique technical words in this data card.
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
The ability of a model to generate human-like titles and abstracts for given news articles.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`, `BERT-Score`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
Automatic evaluation: ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore were used.
Human evaluation: a study was conducted with 11 native French speakers. The evaluators were PhD students from the computer science department of the authors' university, working in NLP and other fields of AI, who volunteered after receiving an email announcement. Best-Worst Scaling (Louviere et al., 2015) was used: two summaries from two different systems, along with their input document, were presented to a human annotator who had to decide which one was better. The evaluators were asked to base their judgments on accuracy (does the summary contain accurate facts?), informativeness (is important information captured?), and fluency (is the summary written in well-formed French?).
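A common way to turn such best/worst judgments into a per-system score is the fraction of comparisons a system was chosen as best minus the fraction it was chosen as worst. A minimal sketch with made-up judgments (system names are hypothetical; this is not the authors' exact analysis code):

```python
from collections import Counter

def bws_scores(judgments):
    """Best-Worst Scaling scores: for each system,
    (#times chosen best - #times chosen worst) / #judgments.
    `judgments` is a list of (best_system, worst_system) decisions."""
    best = Counter(b for b, _ in judgments)
    worst = Counter(w for _, w in judgments)
    n = len(judgments)
    return {s: (best[s] - worst[s]) / n for s in set(best) | set(worst)}

# Hypothetical annotator decisions comparing two systems and a human reference.
judgments = [
    ("human", "sys_a"),
    ("human", "sys_b"),
    ("sys_a", "sys_b"),
    ("sys_b", "sys_a"),
]
scores = bws_scores(judgments)
print(scores)
```

Scores fall in [-1, 1], with the human reference typically near the top.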
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The dataset contains news articles written by professional authors.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
| 9,593 | [embedding vectors truncated] |
GEM/sportsett_basketball | 2022-10-24T15:30:28.000Z | [
"task_categories:table-to-text",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:mit",
"data-to-text",
"region:us"
] | GEM | SportSett:Basketball dataset for Data-to-Text Generation contains NBA games stats aligned with their human written summaries. | @inproceedings{thomson-etal-2020-sportsett,
title = "{S}port{S}ett:Basketball - A robust and maintainable data-set for Natural Language Generation",
author = "Thomson, Craig and
Reiter, Ehud and
Sripada, Somayajulu",
booktitle = "Proceedings of the Workshop on Intelligent Information Processing and Natural Language Generation",
month = sep,
year = "2020",
address = "Santiago de Compostela, Spain",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.intellang-1.4",
pages = "32--40",
} | 6 | 124 | 2022-03-02T23:29:22 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- mit
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: sportsett_basketball
tags:
- data-to-text
---
# Dataset Card for GEM/sportsett_basketball
## Dataset Description
- **Homepage:** https://github.com/nlgcat/sport_sett_basketball
- **Repository:** https://github.com/nlgcat/sport_sett_basketball
- **Paper:** https://aclanthology.org/2020.intellang-1.4/
- **Leaderboard:** N/A
- **Point of Contact:** Craig Thomson
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/sportsett_basketball).
### Dataset Summary
The SportSett dataset is an English data-to-text dataset in the basketball domain. The inputs are statistics summarizing an NBA game and the outputs are high-quality descriptions of the game in natural language.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/sportsett_basketball')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/sportsett_basketball).
#### website
[Github](https://github.com/nlgcat/sport_sett_basketball)
#### paper
[ACL Anthology](https://aclanthology.org/2020.intellang-1.4/)
#### authors
Craig Thomson, Ashish Upadhyay
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/nlgcat/sport_sett_basketball)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/nlgcat/sport_sett_basketball)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2020.intellang-1.4/)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{thomson-etal-2020-sportsett,
title = "{S}port{S}ett:Basketball - A robust and maintainable data-set for Natural Language Generation",
author = "Thomson, Craig and
Reiter, Ehud and
Sripada, Somayajulu",
booktitle = "Proceedings of the Workshop on Intelligent Information Processing and Natural Language Generation",
month = sep,
year = "2020",
address = "Santiago de Compostela, Spain",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.intellang-1.4",
pages = "32--40",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Craig Thomson
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
c.thomson@abdn.ac.uk
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
American English
One dialect, one language.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
American sports writers
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
mit: MIT License
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Maintain a robust and scalable Data-to-Text generation resource with structured data and textual summaries
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
A model trained on this dataset should summarise the statistical and other information from a basketball game. This will be focused on a single game, although facts from prior games, or aggregate statistics over many games, can and should be used for comparison where appropriate. There is no single common narrative, although summaries usually start with who played, when, where, and the score. They then provide high-level commentary on what the difference in the game was (why the winner won). Breakdowns of statistics for prominent players follow, winning team first. Finally, the upcoming schedule for both teams is usually included. There are, however, other types of fact that can be included, and other narrative structures.
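The conventional opening sentence (who played, when, where, and the score) can be illustrated with a trivial template over the dataset's documented `game` and `teams` fields. This is only a sketch of the narrative convention, not a baseline from the paper, and the instance below is stripped down to the few keys the template reads:

```python
def opening_sentence(instance):
    """Render the conventional lead sentence of an NBA game recap
    from a SportSett-style instance dict."""
    g = instance["game"]
    home, vis = instance["teams"]["home"], instance["teams"]["vis"]
    hp = int(home["line_score"]["game"]["PTS"])
    vp = int(vis["line_score"]["game"]["PTS"])
    winner, loser = (home, vis) if hp > vp else (vis, home)
    return (f"The {winner['place']} {winner['name']} defeated the "
            f"{loser['place']} {loser['name']} {max(hp, vp)}-{min(hp, vp)} "
            f"on {g['dayname']} at the {g['stadium']} in {g['city']}.")

# Minimal instance mirroring the example game in this card.
instance = {
    "game": {"dayname": "Saturday", "stadium": "Wells Fargo Center",
             "city": "Philadelphia"},
    "teams": {
        "home": {"name": "76ers", "place": "Philadelphia",
                 "line_score": {"game": {"PTS": "96"}}},
        "vis": {"name": "Heat", "place": "Miami",
                "line_score": {"game": {"PTS": "114"}}},
    },
}
sentence = opening_sentence(instance)
print(sentence)
```

Neural systems are expected to go well beyond such templates, but this is the structural skeleton most reference summaries share.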
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
University of Aberdeen, Robert Gordon University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Craig Thomson, Ashish Upadhyay
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
EPSRC
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Craig Thomson, Ashish Upadhyay
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
Each instance in the dataset has five fields.
1. "sportsett_id": This is a unique id as used in the original SportSett database. It starts at '1' for the first instance in the train set and ends at '6150' for the last instance in the test set.
2. "gem_id": This is a unique id created as per GEM's requirement which follows the `GEM-${DATASET_NAME}-${SPLIT-NAME}-${id}` pattern.
3. "game": This field contains a dictionary with information about the current game, such as the date on which the game was played along with the stadium, city, and state where it took place.
4. "teams": This field is a dictionary of multiple nested dictionaries. At the highest level, it has two keys, 'home' and 'vis', which provide the stats for the home and visiting teams of the game. Both are dictionaries with the same structure. Each contains general team information such as the team's name, their total wins/losses in the current season, their conference standing, and the SportSett ids for their current and previous games. Beyond this general information, they also hold the box and line scores for the team in the game. The box score gives the stats of the team's players at the end of the game, while the line score gives whole-game team stats, further broken down into quarters and halves as well as overtime (if it occurred). After these scores, a next-game field gives general information about the team's next game, such as its venue and the opponent's name.
5. "summaries": This is a list of summaries for each game. Some games have more than one summary, in which case the list contains multiple entries. Each summary in the list is a string that can be tokenised by whitespace, following the practices in the RotoWire-FG dataset ([Wang, 2019](https://www.aclweb.org/anthology/W19-8639)).
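Given this nesting, per-player stats live under `teams -> home/vis -> box_score`. A small sketch that pulls the game's top scorer from an instance; the dict below is stripped down to the name and `PTS` keys (real box-score entries carry the full stat set shown in the example instance):

```python
def top_scorer(instance):
    """Return (player name, points) for the highest scorer on either team."""
    players = (instance["teams"]["home"]["box_score"]
               + instance["teams"]["vis"]["box_score"])
    best = max(players, key=lambda p: int(p["PTS"]))  # PTS is stored as a string
    return best["name"], int(best["PTS"])

# Stripped-down instance using players from the example game in this card.
instance = {"teams": {
    "home": {"box_score": [{"name": "Tony Wroten", "PTS": "21"}]},
    "vis": {"box_score": [{"name": "Chris Bosh", "PTS": "30"},
                          {"name": "Mario Chalmers", "PTS": "20"}]},
}}
print(top_scorer(instance))  # prints ('Chris Bosh', 30)
```

Note that all numeric stats are stored as strings, so comparisons need an explicit `int(...)` cast.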
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure mostly follows that of the RotoWire dataset ([Wiseman et al., 2017](https://aclanthology.org/D17-1239/)), with some modifications (such as the game and next-game keys) to address the problem of the information gap between input and output data ([Thomson et al., 2020](https://aclanthology.org/2020.inlg-1.6/)).
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
Similar to the RotoWire dataset ([Wiseman et al., 2017](https://aclanthology.org/D17-1239/))
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"sportsett_id": "1",
"gem_id": "GEM-sportsett_basketball-train-0",
"game": {
"day": "1",
"month": "November",
"year": "2014",
"dayname": "Saturday",
"season": "2014",
"stadium": "Wells Fargo Center",
"city": "Philadelphia",
"state": "Pennsylvania",
"attendance": "19753",
"capacity": "20478",
"game_id": "1"
},
"teams": {
"home": {
"name": "76ers",
"place": "Philadelphia",
"conference": "Eastern Conference",
"division": "Atlantic",
"wins": "0",
"losses": "3",
"conference_standing": 15,
"game_number": "3",
"previous_game_id": "42",
"next_game_id": "2",
"line_score": {
"game": {
"FG3A": "23",
"FG3M": "7",
"FG3_PCT": "30",
"FGA": "67",
"FGM": "35",
"FG_PCT": "52",
"FTA": "26",
"FTM": "19",
"FT_PCT": "73",
"DREB": "33",
"OREB": "4",
"TREB": "37",
"BLK": "10",
"AST": "28",
"STL": "9",
"TOV": "24",
"PF": "21",
"PTS": "96",
"MIN": "4"
},
"H1": {
"FG3A": "82",
"FG3M": "30",
"FG3_PCT": "37",
"FGA": "2115",
"FGM": "138",
"FG_PCT": "7",
"FTA": "212",
"FTM": "18",
"FT_PCT": "8",
"DREB": "810",
"OREB": "21",
"TREB": "831",
"BLK": "51",
"AST": "107",
"STL": "21",
"TOV": "64",
"PTS": "3024",
"MIN": "6060"
},
"H2": {
"FG3A": "85",
"FG3M": "40",
"FG3_PCT": "47",
"FGA": "1615",
"FGM": "104",
"FG_PCT": "6",
"FTA": "66",
"FTM": "55",
"FT_PCT": "83",
"DREB": "96",
"OREB": "10",
"TREB": "106",
"BLK": "22",
"AST": "92",
"STL": "24",
"TOV": "68",
"PTS": "2913",
"MIN": "6060"
},
"Q1": {
"FG3A": "8",
"FG3M": "3",
"FG3_PCT": "38",
"FGA": "21",
"FGM": "13",
"FG_PCT": "62",
"FTA": "2",
"FTM": "1",
"FT_PCT": "50",
"DREB": "8",
"OREB": "2",
"TREB": "10",
"BLK": "5",
"AST": "10",
"STL": "2",
"TOV": "6",
"PTS": "30",
"MIN": "60"
},
"Q2": {
"FG3A": "2",
"FG3M": "0",
"FG3_PCT": "0",
"FGA": "15",
"FGM": "8",
"FG_PCT": "53",
"FTA": "12",
"FTM": "8",
"FT_PCT": "67",
"DREB": "10",
"OREB": "1",
"TREB": "11",
"BLK": "1",
"AST": "7",
"STL": "1",
"TOV": "4",
"PTS": "24",
"MIN": "60"
},
"Q3": {
"FG3A": "8",
"FG3M": "4",
"FG3_PCT": "50",
"FGA": "16",
"FGM": "10",
"FG_PCT": "62",
"FTA": "6",
"FTM": "5",
"FT_PCT": "83",
"DREB": "9",
"OREB": "1",
"TREB": "10",
"BLK": "2",
"AST": "9",
"STL": "2",
"TOV": "6",
"PTS": "29",
"MIN": "60"
},
"Q4": {
"FG3A": "5",
"FG3M": "0",
"FG3_PCT": "0",
"FGA": "15",
"FGM": "4",
"FG_PCT": "27",
"FTA": "6",
"FTM": "5",
"FT_PCT": "83",
"DREB": "6",
"OREB": "0",
"TREB": "6",
"BLK": "2",
"AST": "2",
"STL": "4",
"TOV": "8",
"PTS": "13",
"MIN": "60"
},
"OT": {
"FG3A": "0",
"FG3M": "0",
"FG3_PCT": "0",
"FGA": "0",
"FGM": "0",
"FG_PCT": "0",
"FTA": "0",
"FTM": "0",
"FT_PCT": "0",
"DREB": "0",
"OREB": "0",
"TREB": "0",
"BLK": "0",
"AST": "0",
"STL": "0",
"TOV": "0",
"PTS": "0",
"MIN": "0"
}
},
"box_score": [
{
"first_name": "Tony",
"last_name": "Wroten",
"name": "Tony Wroten",
"starter": "True",
"MIN": "33",
"FGM": "6",
"FGA": "11",
"FG_PCT": "55",
"FG3M": "1",
"FG3A": "4",
"FG3_PCT": "25",
"FTM": "8",
"FTA": "11",
"FT_PCT": "73",
"OREB": "0",
"DREB": "3",
"TREB": "3",
"AST": "10",
"STL": "1",
"BLK": "1",
"TOV": "4",
"PF": "1",
"PTS": "21",
"+/-": "-11",
"DOUBLE": "double"
},
{
"first_name": "Hollis",
"last_name": "Thompson",
"name": "Hollis Thompson",
"starter": "True",
"MIN": "32",
"FGM": "4",
"FGA": "8",
"FG_PCT": "50",
"FG3M": "2",
"FG3A": "5",
"FG3_PCT": "40",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "1",
"TREB": "1",
"AST": "2",
"STL": "0",
"BLK": "3",
"TOV": "2",
"PF": "2",
"PTS": "10",
"+/-": "-17",
"DOUBLE": "none"
},
{
"first_name": "Henry",
"last_name": "Sims",
"name": "Henry Sims",
"starter": "True",
"MIN": "27",
"FGM": "4",
"FGA": "9",
"FG_PCT": "44",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "1",
"FTA": "2",
"FT_PCT": "50",
"OREB": "1",
"DREB": "3",
"TREB": "4",
"AST": "2",
"STL": "0",
"BLK": "1",
"TOV": "0",
"PF": "1",
"PTS": "9",
"+/-": "-10",
"DOUBLE": "none"
},
{
"first_name": "Nerlens",
"last_name": "Noel",
"name": "Nerlens Noel",
"starter": "True",
"MIN": "25",
"FGM": "1",
"FGA": "4",
"FG_PCT": "25",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "5",
"TREB": "5",
"AST": "3",
"STL": "1",
"BLK": "1",
"TOV": "3",
"PF": "1",
"PTS": "2",
"+/-": "-19",
"DOUBLE": "none"
},
{
"first_name": "Luc",
"last_name": "Mbah a Moute",
"name": "Luc Mbah a Moute",
"starter": "True",
"MIN": "19",
"FGM": "4",
"FGA": "10",
"FG_PCT": "40",
"FG3M": "0",
"FG3A": "2",
"FG3_PCT": "0",
"FTM": "1",
"FTA": "2",
"FT_PCT": "50",
"OREB": "3",
"DREB": "4",
"TREB": "7",
"AST": "3",
"STL": "1",
"BLK": "0",
"TOV": "6",
"PF": "3",
"PTS": "9",
"+/-": "-12",
"DOUBLE": "none"
},
{
"first_name": "Brandon",
"last_name": "Davies",
"name": "Brandon Davies",
"starter": "False",
"MIN": "23",
"FGM": "7",
"FGA": "9",
"FG_PCT": "78",
"FG3M": "1",
"FG3A": "2",
"FG3_PCT": "50",
"FTM": "3",
"FTA": "4",
"FT_PCT": "75",
"OREB": "0",
"DREB": "3",
"TREB": "3",
"AST": "0",
"STL": "3",
"BLK": "0",
"TOV": "3",
"PF": "3",
"PTS": "18",
"+/-": "-1",
"DOUBLE": "none"
},
{
"first_name": "Chris",
"last_name": "Johnson",
"name": "Chris Johnson",
"starter": "False",
"MIN": "21",
"FGM": "2",
"FGA": "4",
"FG_PCT": "50",
"FG3M": "1",
"FG3A": "3",
"FG3_PCT": "33",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "2",
"TREB": "2",
"AST": "0",
"STL": "3",
"BLK": "0",
"TOV": "2",
"PF": "5",
"PTS": "5",
"+/-": "3",
"DOUBLE": "none"
},
{
"first_name": "K.J.",
"last_name": "McDaniels",
"name": "K.J. McDaniels",
"starter": "False",
"MIN": "20",
"FGM": "2",
"FGA": "4",
"FG_PCT": "50",
"FG3M": "1",
"FG3A": "3",
"FG3_PCT": "33",
"FTM": "3",
"FTA": "4",
"FT_PCT": "75",
"OREB": "0",
"DREB": "1",
"TREB": "1",
"AST": "2",
"STL": "0",
"BLK": "3",
"TOV": "2",
"PF": "3",
"PTS": "8",
"+/-": "-10",
"DOUBLE": "none"
},
{
"first_name": "Malcolm",
"last_name": "Thomas",
"name": "Malcolm Thomas",
"starter": "False",
"MIN": "19",
"FGM": "4",
"FGA": "4",
"FG_PCT": "100",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "9",
"TREB": "9",
"AST": "0",
"STL": "0",
"BLK": "0",
"TOV": "0",
"PF": "2",
"PTS": "8",
"+/-": "-6",
"DOUBLE": "none"
},
{
"first_name": "Alexey",
"last_name": "Shved",
"name": "Alexey Shved",
"starter": "False",
"MIN": "14",
"FGM": "1",
"FGA": "4",
"FG_PCT": "25",
"FG3M": "1",
"FG3A": "4",
"FG3_PCT": "25",
"FTM": "3",
"FTA": "3",
"FT_PCT": "100",
"OREB": "0",
"DREB": "1",
"TREB": "1",
"AST": "6",
"STL": "0",
"BLK": "0",
"TOV": "2",
"PF": "0",
"PTS": "6",
"+/-": "-7",
"DOUBLE": "none"
},
{
"first_name": "JaKarr",
"last_name": "Sampson",
"name": "JaKarr Sampson",
"starter": "False",
"MIN": "2",
"FGM": "0",
"FGA": "0",
"FG_PCT": "0",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "1",
"TREB": "1",
"AST": "0",
"STL": "0",
"BLK": "1",
"TOV": "0",
"PF": "0",
"PTS": "0",
"+/-": "0",
"DOUBLE": "none"
},
{
"first_name": "Michael",
"last_name": "Carter-Williams",
"name": "Michael Carter-Williams",
"starter": "False",
"MIN": "0",
"FGM": "0",
"FGA": "0",
"FG_PCT": "0",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "0",
"TREB": "0",
"AST": "0",
"STL": "0",
"BLK": "0",
"TOV": "0",
"PF": "0",
"PTS": "0",
"+/-": "0",
"DOUBLE": "none"
}
],
"next_game": {
"day": "3",
"month": "November",
"year": "2014",
"dayname": "Monday",
"stadium": "Wells Fargo Center",
"city": "Philadelphia",
"opponent_name": "Rockets",
"opponent_place": "Houston",
"is_home": "True"
}
},
"vis": {
"name": "Heat",
"place": "Miami",
"conference": "Eastern Conference",
"division": "Southeast",
"wins": "2",
"losses": "0",
"conference_standing": 1,
"game_number": "2",
"previous_game_id": "329",
"next_game_id": "330",
"line_score": {
"game": {
"FG3A": "24",
"FG3M": "12",
"FG3_PCT": "50",
"FGA": "83",
"FGM": "41",
"FG_PCT": "49",
"FTA": "29",
"FTM": "20",
"FT_PCT": "69",
"DREB": "26",
"OREB": "9",
"TREB": "35",
"BLK": "0",
"AST": "33",
"STL": "16",
"TOV": "16",
"PF": "20",
"PTS": "114",
"MIN": "4"
},
"H1": {
"FG3A": "69",
"FG3M": "44",
"FG3_PCT": "64",
"FGA": "2321",
"FGM": "1110",
"FG_PCT": "48",
"FTA": "106",
"FTM": "64",
"FT_PCT": "60",
"DREB": "35",
"OREB": "23",
"TREB": "58",
"BLK": "00",
"AST": "88",
"STL": "53",
"TOV": "34",
"PTS": "3228",
"MIN": "6060"
},
"H2": {
"FG3A": "45",
"FG3M": "22",
"FG3_PCT": "49",
"FGA": "1920",
"FGM": "1010",
"FG_PCT": "53",
"FTA": "85",
"FTM": "55",
"FT_PCT": "65",
"DREB": "612",
"OREB": "22",
"TREB": "634",
"BLK": "00",
"AST": "98",
"STL": "35",
"TOV": "36",
"PTS": "2727",
"MIN": "6060"
},
"Q1": {
"FG3A": "6",
"FG3M": "4",
"FG3_PCT": "67",
"FGA": "23",
"FGM": "11",
"FG_PCT": "48",
"FTA": "10",
"FTM": "6",
"FT_PCT": "60",
"DREB": "3",
"OREB": "2",
"TREB": "5",
"BLK": "0",
"AST": "8",
"STL": "5",
"TOV": "3",
"PTS": "32",
"MIN": "60"
},
"Q2": {
"FG3A": "9",
"FG3M": "4",
"FG3_PCT": "44",
"FGA": "21",
"FGM": "10",
"FG_PCT": "48",
"FTA": "6",
"FTM": "4",
"FT_PCT": "67",
"DREB": "5",
"OREB": "3",
"TREB": "8",
"BLK": "0",
"AST": "8",
"STL": "3",
"TOV": "4",
"PTS": "28",
"MIN": "60"
},
"Q3": {
"FG3A": "4",
"FG3M": "2",
"FG3_PCT": "50",
"FGA": "19",
"FGM": "10",
"FG_PCT": "53",
"FTA": "8",
"FTM": "5",
"FT_PCT": "62",
"DREB": "6",
"OREB": "2",
"TREB": "8",
"BLK": "0",
"AST": "9",
"STL": "3",
"TOV": "3",
"PTS": "27",
"MIN": "60"
},
"Q4": {
"FG3A": "5",
"FG3M": "2",
"FG3_PCT": "40",
"FGA": "20",
"FGM": "10",
"FG_PCT": "50",
"FTA": "5",
"FTM": "5",
"FT_PCT": "100",
"DREB": "12",
"OREB": "2",
"TREB": "14",
"BLK": "0",
"AST": "8",
"STL": "5",
"TOV": "6",
"PTS": "27",
"MIN": "60"
},
"OT": {
"FG3A": "0",
"FG3M": "0",
"FG3_PCT": "0",
"FGA": "0",
"FGM": "0",
"FG_PCT": "0",
"FTA": "0",
"FTM": "0",
"FT_PCT": "0",
"DREB": "0",
"OREB": "0",
"TREB": "0",
"BLK": "0",
"AST": "0",
"STL": "0",
"TOV": "0",
"PTS": "0",
"MIN": "0"
}
},
"box_score": [
{
"first_name": "Chris",
"last_name": "Bosh",
"name": "Chris Bosh",
"starter": "True",
"MIN": "33",
"FGM": "9",
"FGA": "17",
"FG_PCT": "53",
"FG3M": "2",
"FG3A": "5",
"FG3_PCT": "40",
"FTM": "10",
"FTA": "11",
"FT_PCT": "91",
"OREB": "3",
"DREB": "5",
"TREB": "8",
"AST": "4",
"STL": "2",
"BLK": "0",
"TOV": "3",
"PF": "2",
"PTS": "30",
"+/-": "10",
"DOUBLE": "none"
},
{
"first_name": "Dwyane",
"last_name": "Wade",
"name": "Dwyane Wade",
"starter": "True",
"MIN": "32",
"FGM": "4",
"FGA": "18",
"FG_PCT": "22",
"FG3M": "0",
"FG3A": "1",
"FG3_PCT": "0",
"FTM": "1",
"FTA": "3",
"FT_PCT": "33",
"OREB": "1",
"DREB": "2",
"TREB": "3",
"AST": "10",
"STL": "3",
"BLK": "0",
"TOV": "6",
"PF": "1",
"PTS": "9",
"+/-": "13",
"DOUBLE": "none"
},
{
"first_name": "Luol",
"last_name": "Deng",
"name": "Luol Deng",
"starter": "True",
"MIN": "29",
"FGM": "7",
"FGA": "11",
"FG_PCT": "64",
"FG3M": "1",
"FG3A": "3",
"FG3_PCT": "33",
"FTM": "0",
"FTA": "1",
"FT_PCT": "0",
"OREB": "2",
"DREB": "2",
"TREB": "4",
"AST": "2",
"STL": "2",
"BLK": "0",
"TOV": "1",
"PF": "0",
"PTS": "15",
"+/-": "4",
"DOUBLE": "none"
},
{
"first_name": "Shawne",
"last_name": "Williams",
"name": "Shawne Williams",
"starter": "True",
"MIN": "29",
"FGM": "5",
"FGA": "9",
"FG_PCT": "56",
"FG3M": "3",
"FG3A": "5",
"FG3_PCT": "60",
"FTM": "2",
"FTA": "2",
"FT_PCT": "100",
"OREB": "0",
"DREB": "4",
"TREB": "4",
"AST": "4",
"STL": "1",
"BLK": "0",
"TOV": "1",
"PF": "4",
"PTS": "15",
"+/-": "16",
"DOUBLE": "none"
},
{
"first_name": "Norris",
"last_name": "Cole",
"name": "Norris Cole",
"starter": "True",
"MIN": "27",
"FGM": "4",
"FGA": "7",
"FG_PCT": "57",
"FG3M": "2",
"FG3A": "4",
"FG3_PCT": "50",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "1",
"TREB": "1",
"AST": "4",
"STL": "2",
"BLK": "0",
"TOV": "0",
"PF": "1",
"PTS": "10",
"+/-": "6",
"DOUBLE": "none"
},
{
"first_name": "Mario",
"last_name": "Chalmers",
"name": "Mario Chalmers",
"starter": "False",
"MIN": "25",
"FGM": "6",
"FGA": "9",
"FG_PCT": "67",
"FG3M": "2",
"FG3A": "2",
"FG3_PCT": "100",
"FTM": "6",
"FTA": "10",
"FT_PCT": "60",
"OREB": "0",
"DREB": "2",
"TREB": "2",
"AST": "4",
"STL": "4",
"BLK": "0",
"TOV": "0",
"PF": "1",
"PTS": "20",
"+/-": "18",
"DOUBLE": "none"
},
{
"first_name": "Shabazz",
"last_name": "Napier",
"name": "Shabazz Napier",
"starter": "False",
"MIN": "20",
"FGM": "2",
"FGA": "3",
"FG_PCT": "67",
"FG3M": "1",
"FG3A": "2",
"FG3_PCT": "50",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "3",
"TREB": "3",
"AST": "4",
"STL": "2",
"BLK": "0",
"TOV": "1",
"PF": "4",
"PTS": "5",
"+/-": "11",
"DOUBLE": "none"
},
{
"first_name": "Chris",
"last_name": "Andersen",
"name": "Chris Andersen",
"starter": "False",
"MIN": "17",
"FGM": "0",
"FGA": "2",
"FG_PCT": "0",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "1",
"DREB": "2",
"TREB": "3",
"AST": "0",
"STL": "0",
"BLK": "0",
"TOV": "0",
"PF": "2",
"PTS": "0",
"+/-": "6",
"DOUBLE": "none"
},
{
"first_name": "Josh",
"last_name": "McRoberts",
"name": "Josh McRoberts",
"starter": "False",
"MIN": "11",
"FGM": "1",
"FGA": "3",
"FG_PCT": "33",
"FG3M": "0",
"FG3A": "1",
"FG3_PCT": "0",
"FTM": "1",
"FTA": "2",
"FT_PCT": "50",
"OREB": "0",
"DREB": "3",
"TREB": "3",
"AST": "0",
"STL": "0",
"BLK": "0",
"TOV": "2",
"PF": "3",
"PTS": "3",
"+/-": "1",
"DOUBLE": "none"
},
{
"first_name": "James",
"last_name": "Ennis",
"name": "James Ennis",
"starter": "False",
"MIN": "7",
"FGM": "2",
"FGA": "3",
"FG_PCT": "67",
"FG3M": "1",
"FG3A": "1",
"FG3_PCT": "100",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "1",
"DREB": "1",
"TREB": "2",
"AST": "1",
"STL": "0",
"BLK": "0",
"TOV": "0",
"PF": "1",
"PTS": "5",
"+/-": "2",
"DOUBLE": "none"
},
{
"first_name": "Justin",
"last_name": "Hamilton",
"name": "Justin Hamilton",
"starter": "False",
"MIN": "5",
"FGM": "1",
"FGA": "1",
"FG_PCT": "100",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "1",
"DREB": "1",
"TREB": "2",
"AST": "0",
"STL": "0",
"BLK": "0",
"TOV": "1",
"PF": "0",
"PTS": "2",
"+/-": "3",
"DOUBLE": "none"
},
{
"first_name": "Andre",
"last_name": "Dawkins",
"name": "Andre Dawkins",
"starter": "False",
"MIN": "1",
"FGM": "0",
"FGA": "0",
"FG_PCT": "0",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "0",
"TREB": "0",
"AST": "0",
"STL": "0",
"BLK": "0",
"TOV": "1",
"PF": "1",
"PTS": "0",
"+/-": "0",
"DOUBLE": "none"
},
{
"first_name": "Shannon",
"last_name": "Brown",
"name": "Shannon Brown",
"starter": "False",
"MIN": "0",
"FGM": "0",
"FGA": "0",
"FG_PCT": "0",
"FG3M": "0",
"FG3A": "0",
"FG3_PCT": "0",
"FTM": "0",
"FTA": "0",
"FT_PCT": "0",
"OREB": "0",
"DREB": "0",
"TREB": "0",
"AST": "0",
"STL": "0",
"BLK": "0",
"TOV": "0",
"PF": "0",
"PTS": "0",
"+/-": "0",
"DOUBLE": "none"
}
],
"next_game": {
"day": "2",
"month": "November",
"year": "2014",
"dayname": "Sunday",
"stadium": "American Airlines Arena",
"city": "Miami",
"opponent_name": "Raptors",
"opponent_place": "Toronto",
"is_home": "True"
}
}
},
"summaries": [
"The Miami Heat ( 20 ) defeated the Philadelphia 76ers ( 0 - 3 ) 114 - 96 on Saturday . Chris Bosh scored a game - high 30 points to go with eight rebounds in 33 minutes . Josh McRoberts made his Heat debut after missing the entire preseason recovering from toe surgery . McRoberts came off the bench and played 11 minutes . Shawne Williams was once again the starter at power forward in McRoberts ' stead . Williams finished with 15 points and three three - pointers in 29 minutes . Mario Chalmers scored 18 points in 25 minutes off the bench . Luc Richard Mbah a Moute replaced Chris Johnson in the starting lineup for the Sixers on Saturday . Hollis Thompson shifted down to the starting shooting guard job to make room for Mbah a Moute . Mbah a Moute finished with nine points and seven rebounds in 19 minutes . K.J . McDaniels , who suffered a minor hip flexor injury in Friday 's game , was available and played 21 minutes off the bench , finishing with eight points and three blocks . Michael Carter-Williams is expected to be out until Nov. 13 , but Tony Wroten continues to put up impressive numbers in Carter-Williams ' absence . Wroten finished with a double - double of 21 points and 10 assists in 33 minutes . The Heat will complete a back - to - back set at home Sunday against the Tornoto Raptors . The Sixers ' next game is at home Monday against the Houston Rockets ."
]
}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- Train: NBA seasons - 2014, 2015, & 2016; total instances - 3690
- Validation: NBA seasons - 2017; total instances - 1230
- Test: NBA seasons - 2018; total instances - 1230
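These sizes follow directly from the NBA calendar: a regular season has 30 teams playing 82 games each, i.e. 30 × 82 / 2 = 1,230 games. A quick sketch of the season-to-split arithmetic:

```python
# Season-based split sizes. A full NBA regular season has 30 teams
# playing 82 games each, counted once per game: 30 * 82 / 2 = 1230.
GAMES_PER_SEASON = 30 * 82 // 2

SPLITS = {
    "train": [2014, 2015, 2016],
    "validation": [2017],
    "test": [2018],
}

split_sizes = {name: GAMES_PER_SEASON * len(seasons)
               for name, seasons in SPLITS.items()}
print(split_sizes)  # {'train': 3690, 'validation': 1230, 'test': 1230}
```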
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The splits were created by NBA season. All games from the regular season (no play-offs) are included in the dataset.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset contains a data analytics problem in the classic sense ([Reiter, 2007](https://aclanthology.org/W07-2315)). That is, there is a large amount of data from which insights need to be selected. Further, the insights should come both from simple shallow queries (such as direct transcriptions of the properties of a subject, i.e., a player and their statistics) and from aggregation (how a player has done over time). There is far more on the data side than is required to be realised, or indeed could practically be realised. This depth of data analytics problem does not exist in other datasets.
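The contrast between shallow and aggregated insights can be illustrated with a small sketch (all player numbers below are invented):

```python
# Shallow query vs aggregated insight, over toy box-score rows
# (all numbers invented).
games = [
    {"player": "Luol Deng", "PTS": 15},
    {"player": "Luol Deng", "PTS": 22},
    {"player": "Luol Deng", "PTS": 11},
]

shallow = games[-1]["PTS"]                             # direct transcription
aggregate = sum(g["PTS"] for g in games) / len(games)  # "over time" insight
print(shallow, aggregate)  # 11 16.0
```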
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Many, if not all, aspects of data-to-text systems can be measured with this dataset. It has complex data analytics, meaningful document planning (10-15 sentence documents with a narrative structure), as well as microplanning and realisation requirements. Finding models that can handle this volume of data, as well as methods for meaningfully evaluating generations, remains a very open question.
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
For dataset discussion see [Thomson et al, 2020](https://aclanthology.org/2020.intellang-1.4/)
For evaluation see:
- Thomson & Reiter (2020); [Thomson & Reiter (2021)](https://aclanthology.org/2021.inlg-1.23)
- [Kasner et al (2021)](https://aclanthology.org/2021.inlg-1.25)
For a system using the relational database form of SportSett, see:
- [Thomson et al (2020)](https://aclanthology.org/2020.inlg-1.6/)
For recent systems using the Rotowire dataset, see:
- [Puduppully & Lapata (2021)](https://github.com/ratishsp/data2text-macro-plan-py)
- [Rebuffel et al. (2020)](https://github.com/KaijuML/data-to-text-hierarchical)
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Many, if not all, aspects of data-to-text systems can be measured with this dataset. It has complex data analytics, meaningful document planning (10-15 sentence documents with a narrative structure), as well as microplanning and realisation requirements. Finding models that can handle this volume of data, as well as methods for meaningfully evaluating generations, remains a very open question.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
BLEU is the only off-the-shelf metric commonly used. Works have also used custom metrics like RG ([Wiseman et al, 2017](https://aclanthology.org/D17-1239)), and a recent shared task explored other metrics and their correlation with human evaluation ([Thomson & Reiter, 2021](https://aclanthology.org/2021.inlg-1.23)).
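As an illustration of the idea behind RG, the sketch below extracts (player, stat, value) mentions with a simple regex and checks them against box-score records; the real metric uses a trained information-extraction model, and the records, summary, and pattern here are purely illustrative:

```python
# Toy RG-style precision check: pull "<Name> ... <N> points" mentions out
# of a summary with a regex and verify them against box-score records.
# Illustrative only; the real RG metric (Wiseman et al., 2017) relies on a
# trained information-extraction model, not a regex.
import re

records = {("Chris Bosh", "PTS"): 30, ("Mario Chalmers", "PTS"): 20}
summary = "Chris Bosh scored 30 points while Mario Chalmers added 18 points."

pattern = re.compile(r"([A-Z][a-z]+ [A-Z][a-z]+)\D+?(\d+) points")
extracted = [(name, "PTS", int(val)) for name, val in pattern.findall(summary)]

supported = [f for f in extracted if records.get((f[0], f[1])) == f[2]]
rg_precision = len(supported) / len(extracted)
print(rg_precision)  # 0.5 -- the 18-point claim contradicts the records
```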
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
Most results from prior works use the original Rotowire dataset, which has train/validation/test contamination. For results of BLEU and RG on the relational database format of SportSett, as a guide, see [Thomson et al, 2020](https://aclanthology.org/2020.inlg-1.6).
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
The results on this dataset are largely unexplored, as is the selection of suitable metrics that correlate with human judgment. See [Thomson & Reiter (2021)](https://aclanthology.org/2021.inlg-1.23) for an overview, and [Kasner et al (2021)](https://aclanthology.org/2021.inlg-1.25) for the best performing metric at the time of writing.
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The reference texts were taken from the existing dataset RotoWire-FG ([Wang, 2019](https://www.aclweb.org/anthology/W19-8639)), which is in turn based on Rotowire ([Wiseman et al, 2017](https://aclanthology.org/D17-1239)). The rationale behind this dataset was to re-structure the data such that aggregate statistics over multiple games, as well as upcoming game schedules, could be included, moving the dataset from snapshots of single games to a format where almost everything that could be present in the reference texts could be found in the data.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Create a summary of a basketball game, with insightful facts about the game, teams, and players, both within the game, within periods during the game, and over the course of seasons/careers where appropriate. This is a data-to-text problem in the classic sense ([Reiter, 2007](https://aclanthology.org/W07-2315)) in that it has a difficult data analytics stage, in addition to the ordering and transcription of selected facts.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
RotoWire-FG (https://www.rotowire.com).
Wikipedia (https://en.wikipedia.org/wiki/Main_Page)
Basketball Reference (https://www.basketball-reference.com)
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Multiple websites`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
None
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
Summaries of basketball games (in the NBA).
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
It retains the original tokenization scheme employed by Wang (2019).
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
manually
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
Games from the 2014 through 2018 seasons were selected. Within these seasons games are not filtered (all are present), but the season range itself was an arbitrary decision inherited from the original RotoWire-FG dataset.
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
The dataset consists of a pre-existing dataset, as well as publicly available facts.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
unlikely
#### Categories of PII
<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
#### Links and Summaries of Analysis Work
<!-- info: Provide links to and summaries of works analyzing these biases. -->
<!-- scope: microscope -->
We are unaware of any such work; however, this is a dataset consisting solely of summaries of men's professional basketball games. It does not cover different levels of the sport, or different genders, and all pronouns are likely to be male unless a specific player is referred to by other pronouns in the training text. This makes it difficult to train systems where gender can be specified as an attribute, although it is an interesting, open problem that could be investigated using the dataset.
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
No, it is very specifically American English from the sports journalism domain.
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
All information relating to persons is of public record.
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`public domain`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
SportSett resolved the major overlap problems of RotoWire, although some overlap is unavoidable. For example, whilst it is not possible to find career totals and other historic information for all players (the data only goes back to 2014), it is possible to do so for some players. It is unavoidable that some data which is aggregated, exists in its base form in previous partitions. The season-based partition scheme heavily constrains this however.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
Factual accuracy continues to be a problem; systems may incorrectly represent the facts of the game.
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
Using the RG metric to maximise the number of true facts in a generated summary is not necessarily desirable.
| 44,941 | [embeddings truncated] |
SocialGrep/top-american-universities-on-reddit | 2022-07-25T18:57:00.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | This NLP dataset contains all the posts and comments in the subreddits of top 10 universities in the United States, chosen according to the 2019 Forbes ranking. | null | 2 | 124 | 2022-03-02T23:29:22 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for top-american-universities-on-reddit
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/top-american-universities-on-reddit?utm_source=huggingface&utm_medium=link&utm_campaign=topamericanuniversitiesonreddit)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=topamericanuniversitiesonreddit)
### Dataset Summary
This corpus contains the complete data for the activity of the subreddits of the top 10 US colleges, according to the [2019 Forbes listing](https://www.forbes.com/top-colleges/#1208425d1987).
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
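A minimal sketch of how these fields might be modelled when processing the two files (dotted names like `subreddit.name` adapted to valid Python identifiers; all sample values are invented):

```python
# Illustrative model of the shared / post-only / comment-only fields above
# (dotted names like 'subreddit.name' adapted to valid identifiers; the
# sample values are invented).
from dataclasses import dataclass
from typing import Optional

@dataclass
class RedditItem:
    type: str                      # 'post' or 'comment'
    id: str                        # base-36 Reddit ID; unique with `type`
    subreddit_name: str
    created_utc: int
    score: int
    title: Optional[str] = None    # post only
    body: Optional[str] = None     # comment only
    sentiment: Optional[float] = None  # comment only

items = [
    RedditItem("post", "abc123", "mit", 1600000000, 42, title="Exam schedule"),
    RedditItem("comment", "def456", "mit", 1600000100, 7,
               body="Good luck everyone", sentiment=0.8),
]

comments = [i for i in items if i.type == "comment"]
print(len(comments))  # 1
```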
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | 3,937 | [
[
-0.048095703125,
-0.06439208984375,
0.024566650390625,
0.01513671875,
-0.0253448486328125,
0.01727294921875,
-0.022308349609375,
-0.0164031982421875,
0.047882080078125,
0.0220947265625,
-0.06414794921875,
-0.081787109375,
-0.04815673828125,
0.024200439453125... |
classla/FRENK-hate-sl | 2022-10-21T07:46:11.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:sl",
"license:other",
"hate-speech-detection",
"offensive-language",
"arxiv:1906.02045",
"region:us"
] | classla | The FRENK Datasets of Socially Unacceptable Discourse in Slovene. | @misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
} | 0 | 124 | 2022-03-02T23:29:22 | ---
language:
- sl
license:
- other
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids: []
tags:
- hate-speech-detection
- offensive-language
---
Slovenian subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [English subset](https://huggingface.co/datasets/5roop/FRENK-hate-en), [Croatian subset](https://huggingface.co/datasets/5roop/FRENK-hate-hr).
## Dataset Description
- **Homepage:** http://hdl.handle.net/11356/1433
- **Repository:** http://hdl.handle.net/11356/1433
- **Paper:** https://arxiv.org/abs/1906.02045
- **Project page** https://nl.ijs.si/frenk/
## Description of the original dataset
>The original FRENK dataset consists of comments to Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments.
>
>The data in each language (Croatian (hr), English (en), Slovenian (sl), and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian LGBT: 4494 training comments, 1142 comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments.
For this dataset only the Slovenian data was used. The training segment has been split into the first 90% (published here as the train split) and the final 10% (published here as the dev split).
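The 90/10 re-split described above can be sketched as follows (illustrative; integers stand in for the ordered training comments):

```python
# Sketch of the re-split: the first 90% of the original training portion
# becomes the train split, the final 10% the dev split.
def resplit(examples, train_frac=0.9):
    cut = int(len(examples) * train_frac)
    return examples[:cut], examples[cut:]

original_train = list(range(100))  # stand-in for the ordered comments
train, dev = resplit(original_train)
print(len(train), len(dev))  # 90 10
```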
## Usage in `Transformers`
```python
import datasets
ds = datasets.load_dataset("classla/FRENK-hate-sl","binary")
```
For binary classification the following encoding is used:
```python
_CLASS_MAP_BINARY = {
'Acceptable': 0,
'Offensive': 1,
}
```
The original labels are available if the dataset is loaded with the `multiclass` option:
```python
import datasets
ds = datasets.load_dataset("classla/FRENK-hate-sl","multiclass")
```
In this case the encoding used is:
```python
_CLASS_MAP_MULTICLASS = {
'Acceptable speech': 0,
'Inappropriate': 1,
'Background offensive': 2,
'Other offensive': 3,
'Background violence': 4,
'Other violence': 5,
}
```
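The binary configuration collapses the multiclass scheme; assuming every non-acceptable class maps to `Offensive`, the correspondence can be sketched as:

```python
# Assumed collapse from the multiclass labels to the binary scheme:
# 'Acceptable speech' (0) stays Acceptable (0); every other class counts
# as Offensive (1).
_CLASS_MAP_MULTICLASS = {
    'Acceptable speech': 0,
    'Inappropriate': 1,
    'Background offensive': 2,
    'Other offensive': 3,
    'Background violence': 4,
    'Other violence': 5,
}

def to_binary(multiclass_label: int) -> int:
    return 0 if multiclass_label == 0 else 1

binary = {name: to_binary(i) for name, i in _CLASS_MAP_MULTICLASS.items()}
print(binary['Acceptable speech'], binary['Other violence'])  # 0 1
```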
## Data structure
* `text`: text
* `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic))
* `topic`: whether the text relates to lgbt or migrants hate-speech domains
* `label`: label of the text instance, see above.
## Data instance
```
{'text': 'Otroci so odprti in brez predsodkov.Predsodke jim vcepimo starejši,starši,družba,družina...Če otroku lepo razložimo,razume.Nikoli ni dobro,da omejujemo otroka,njegovo inteligenco in duhovnost z lastnim ne razumevanjem nečesa ali nekoga.Predsodek je miselni zapor,prepreka,da bi bili svobodni.Ljubezen je svoboda.Sem ZA spremembo zakona!Srečno :D',
'target': 'No target',
'topic': 'lgbt',
'label': 0}
```
## Licensing information
CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0
## Citation information
When using this dataset please cite the following paper:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
The original dataset can be cited as
```
@misc{11356/1433,
title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0},
author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}},
url = {http://hdl.handle.net/11356/1433},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0},
year = {2021} }
```
| 4,603 | [
[
-0.04022216796875,
-0.047210693359375,
-0.0015954971313476562,
0.0255584716796875,
-0.0125732421875,
-0.0236663818359375,
-0.03143310546875,
-0.02716064453125,
0.018951416015625,
0.0244598388671875,
-0.045806884765625,
-0.0628662109375,
-0.0548095703125,
0.0... |
jimregan/clarinpl_sejmsenat | 2023-01-22T13:37:24.000Z | [
"task_categories:other",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:other",
"region:us"
] | jimregan | A collection of 97 hours of parliamentary speeches published on the ClarinPL website
Note that in order to limit the required storage for preparing this dataset, the audio
is stored in the .wav format and is not converted to a float32 array. To convert the audio
file to a float32 array, please make use of the `.map()` function as follows:
```python
import soundfile as sf
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | @article{marasek2014system,
title={System for automatic transcription of sessions of the {P}olish {S}enate},
author={Marasek, Krzysztof and Kor{\v{z}}inek, Danijel and Brocki, {\L}ukasz},
journal={Archives of Acoustics},
volume={39},
number={4},
pages={501--509},
year={2014}
} | 1 | 124 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language:
- pl
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for ClarinPL Sejm/Senat Speech Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CLARIN-PL mowa](https://mowa.clarin-pl.eu/)
- **Repository:** [Needs More Information]
- **Paper:** [System for Automatic Transcription of Sessions of the Polish Senate](https://acoustics.ippt.pan.pl/index.php/aa/article/view/327/pdf_32)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A collection of 97 hours of parliamentary speeches published on the ClarinPL website.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Polish.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`.
An example from the dataset is:
```
{'file': '/root/.cache/huggingface/datasets/downloads/extracted/4143b1d75559b10028c1c7e8800c9ccc05934ca5a8ea15f8f9a92770576a1ee3/SejmSenat/audio/AdamAbramowicz-20130410/file000.wav',
'id': 'AdamAbramowicz-20130410-file000',
'speaker_id': 'AdamAbramowicz',
'text': 'panie marszałku wysoka izbo panie ministrze próbuje się przedstawiać polskę jako zieloną wyspę kraj który się szybko rozwija tymczasem rzeczywistość jest zupełnie inna a widać ją także dzisiaj przed polskim parlamentem próbuje się rząd próbuje zagonić polaków do pracy aż do śmierci przedłużać wiek emerytalny czyliczyli sytuacja gospodarcza polski w tym wypadku jest przedstawiana już zupełnie inaczej pakiet klimatyczny i protokół z kioto jak się zgadzają fachowcy od gospodarki jest szkodliwy dla krajów które są na dorobku a polska właśnie jest takim krajem'}
```
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- text: The transcription of the audio file.
- speaker_id: The ID of the speaker of the audio.
### Data Splits
| | Train | Test |
| ----- | ----- | ---- |
| dataset | 6622 | 130 |
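The `id` field in the example above appears to follow a `<speaker>-<session date>-<file>` pattern. A minimal sketch of splitting it apart — note this layout is inferred from the single example shown, not from a documented schema:

```python
# Sketch: split an utterance id like "AdamAbramowicz-20130410-file000"
# into its apparent parts. The "<speaker>-<date>-<file>" layout is an
# assumption inferred from the example instance, not a documented schema.
def parse_utterance_id(utt_id: str) -> dict:
    # rsplit keeps any hyphens inside the speaker name intact
    speaker, session_date, file_part = utt_id.rsplit("-", 2)
    return {"speaker_id": speaker, "session_date": session_date, "file": file_part}

print(parse_utterance_id("AdamAbramowicz-20130410-file000"))
```

Splitting from the right means a hyphenated speaker name such as `Jan-Kowalski` would still parse correctly.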
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
### Contributions
[Needs More Information] | 4,101 | [
[
-0.03948974609375,
-0.04833984375,
0.00984954833984375,
0.0218048095703125,
-0.0240325927734375,
-0.01027679443359375,
-0.047210693359375,
-0.0214996337890625,
0.042022705078125,
0.04248046875,
-0.060577392578125,
-0.07073974609375,
-0.046051025390625,
0.017... |
bigbio/bionlp_st_2019_bb | 2022-12-22T15:44:04.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | The task focuses on the extraction of the locations and phenotypes of
microorganisms from PubMed abstracts and full-text excerpts, and the
characterization of these entities with respect to reference knowledge
sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by
the importance of the knowledge on biodiversity for fundamental research
and applications in microbiology. | @inproceedings{bossy-etal-2019-bacteria,
title = "Bacteria Biotope at {B}io{NLP} Open Shared Tasks 2019",
author = "Bossy, Robert and
Del{\'e}ger, Louise and
Chaix, Estelle and
Ba, Mouhamadou and
N{\'e}dellec, Claire",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5719",
doi = "10.18653/v1/D19-5719",
pages = "121--131",
abstract = "This paper presents the fourth edition of the Bacteria
Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on
the extraction of the locations and phenotypes of microorganisms
from PubMed abstracts and full-text excerpts, and the characterization
of these entities with respect to reference knowledge sources (NCBI
taxonomy, OntoBiotope ontology). The task is motivated by the importance
of the knowledge on biodiversity for fundamental research and applications
in microbiology. The paper describes the different proposed subtasks, the
corpus characteristics, and the challenge organization. We also provide an
analysis of the results obtained by participants, and inspect the evolution
of the results since the last edition in 2016.",
} | 1 | 124 | 2022-11-13T22:07:17 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: BioNLP 2019 BB
homepage: https://sites.google.com/view/bb-2019/dataset
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for BioNLP 2019 BB
## Dataset Description
- **Homepage:** https://sites.google.com/view/bb-2019/dataset
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE
The task focuses on the extraction of the locations and phenotypes of
microorganisms from PubMed abstracts and full-text excerpts, and the
characterization of these entities with respect to reference knowledge
sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by
the importance of the knowledge on biodiversity for fundamental research
and applications in microbiology.
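As with other BigBio datasets, this corpus can presumably be loaded through the `datasets` library. The `_source`/`_bigbio_kb` config naming below follows the usual BigBio convention but is an assumption here, as is the `load_bb` helper — check the dataset page if it fails:

```python
# Hedged sketch of loading this corpus via the Hugging Face `datasets`
# library. The config-name convention ("<dataset>_source" /
# "<dataset>_bigbio_kb") is assumed from other BigBio datasets.
DATASET_ID = "bigbio/bionlp_st_2019_bb"

def config_name(schema: str = "source") -> str:
    # e.g. "bionlp_st_2019_bb_source" or "bionlp_st_2019_bb_bigbio_kb"
    return f"{DATASET_ID.split('/')[-1]}_{schema}"

def load_bb(schema: str = "bigbio_kb"):
    # Requires network access and the `datasets` package.
    from datasets import load_dataset
    return load_dataset(DATASET_ID, name=config_name(schema))

print(config_name("bigbio_kb"))
```

Calling `load_bb()` would download the corpus; the harmonized `bigbio_kb` schema is where NER, NED, and RE annotations would live under BigBio's conventions.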
## Citation Information
```
@inproceedings{bossy-etal-2019-bacteria,
title = "Bacteria Biotope at {B}io{NLP} Open Shared Tasks 2019",
author = "Bossy, Robert and
Del{\'e}ger, Louise and
Chaix, Estelle and
Ba, Mouhamadou and
N{'e}dellec, Claire",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5719",
doi = "10.18653/v1/D19-5719",
pages = "121--131",
abstract = "This paper presents the fourth edition of the Bacteria
Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on
the extraction of the locations and phenotypes of microorganisms
from PubMed abstracts and full-text excerpts, and the characterization
of these entities with respect to reference knowledge sources (NCBI
taxonomy, OntoBiotope ontology). The task is motivated by the importance
of the knowledge on biodiversity for fundamental research and applications
in microbiology. The paper describes the different proposed subtasks, the
corpus characteristics, and the challenge organization. We also provide an
analysis of the results obtained by participants, and inspect the evolution
of the results since the last edition in 2016.",
}
```
| 2,339 | [
[
-0.026214599609375,
-0.0304107666015625,
0.0382080078125,
-0.00751495361328125,
-0.03619384765625,
0.001209259033203125,
-0.017974853515625,
-0.0355224609375,
0.05517578125,
0.036865234375,
-0.0279388427734375,
-0.05218505859375,
-0.033721923828125,
0.037597... |
bigbio/biorelex | 2022-12-22T15:44:10.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | BioRelEx is a biological relation extraction dataset. Version 1.0 contains 2010
annotated sentences that describe binding interactions between various
biological entities (proteins, chemicals, etc.). 1405 sentences are for
training, another 201 sentences are for validation. They are publicly available
at https://github.com/YerevaNN/BioRelEx/releases. Another 404 sentences are for
testing, which are kept private at this Codalab competition
https://competitions.codalab.org/competitions/20468. All sentences contain words
"bind", "bound" or "binding". For every sentence we provide: 1) Complete
annotations of all biological entities that appear in the sentence 2) Entity
types (32 types) and grounding information for most of the proteins and families
(links to uniprot, interpro and other databases) 3) Coreference between entities
in the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions
between the annotated entities 5) Binding interaction types: positive, negative
(A does not bind B) and neutral (A may bind to B) | @inproceedings{khachatrian2019biorelex,
title = "{B}io{R}el{E}x 1.0: Biological Relation Extraction Benchmark",
author = "Khachatrian, Hrant and
Nersisyan, Lilit and
Hambardzumyan, Karen and
Galstyan, Tigran and
Hakobyan, Anna and
Arakelyan, Arsen and
Rzhetsky, Andrey and
Galstyan, Aram",
booktitle = "Proceedings of the 18th BioNLP Workshop and Shared Task",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-5019",
doi = "10.18653/v1/W19-5019",
pages = "176--190"
} | 2 | 124 | 2022-11-13T22:07:24 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: BioRelEx
homepage: https://github.com/YerevaNN/BioRelEx
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioRelEx
## Dataset Description
- **Homepage:** https://github.com/YerevaNN/BioRelEx
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE,COREF
BioRelEx is a biological relation extraction dataset. Version 1.0 contains 2010
annotated sentences that describe binding interactions between various
biological entities (proteins, chemicals, etc.). 1405 sentences are for
training, another 201 sentences are for validation. They are publicly available
at https://github.com/YerevaNN/BioRelEx/releases. Another 404 sentences are for
testing, which are kept private at this Codalab competition
https://competitions.codalab.org/competitions/20468. All sentences contain words
"bind", "bound" or "binding". For every sentence we provide: 1) Complete
annotations of all biological entities that appear in the sentence 2) Entity
types (32 types) and grounding information for most of the proteins and families
(links to uniprot, interpro and other databases) 3) Coreference between entities
in the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions
between the annotated entities 5) Binding interaction types: positive, negative
(A does not bind B) and neutral (A may bind to B)
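The selection constraint above ("All sentences contain words 'bind', 'bound' or 'binding'") can be expressed as a quick sanity check; the regex below is an illustrative approximation, not part of the dataset's tooling:

```python
import re

# Sketch of the sentence-selection constraint described above: every
# BioRelEx sentence contains "bind", "bound" or "binding". The pattern
# is an approximation for illustration only.
BINDING_RE = re.compile(r"\b(bind\w*|bound)\b", re.IGNORECASE)

def mentions_binding(sentence: str) -> bool:
    return BINDING_RE.search(sentence) is not None

print(mentions_binding("RelA bound to DNA."))        # True
print(mentions_binding("RelA interacts with DNA."))  # False
```

A check like this is handy when filtering external sentences down to the same distribution before comparing against the benchmark.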
## Citation Information
```
@inproceedings{khachatrian2019biorelex,
title = "{B}io{R}el{E}x 1.0: Biological Relation Extraction Benchmark",
author = "Khachatrian, Hrant and
Nersisyan, Lilit and
Hambardzumyan, Karen and
Galstyan, Tigran and
Hakobyan, Anna and
Arakelyan, Arsen and
Rzhetsky, Andrey and
Galstyan, Aram",
booktitle = "Proceedings of the 18th BioNLP Workshop and Shared Task",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-5019",
doi = "10.18653/v1/W19-5019",
pages = "176--190"
}
```
| 2,281 | [
[
-0.040985107421875,
-0.0306396484375,
0.0198822021484375,
0.01258087158203125,
-0.030426025390625,
-0.01407623291015625,
-0.01409912109375,
-0.0574951171875,
0.0260162353515625,
0.0263519287109375,
-0.0455322265625,
-0.059844970703125,
-0.04388427734375,
0.0... |
blastwind/github-code-haskell-function | 2023-05-16T05:05:40.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"code",
"haskell",
"region:us"
] | blastwind | null | null | 0 | 124 | 2023-05-14T05:17:31 | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: path
dtype: string
- name: license
dtype: string
- name: full_code
dtype: string
- name: full_size
dtype: int64
- name: uncommented_code
dtype: string
- name: uncommented_size
dtype: int64
- name: function_only_code
dtype: string
- name: function_only_size
dtype: int64
- name: is_commented
dtype: bool
- name: is_signatured
dtype: bool
- name: n_ast_errors
dtype: int64
- name: ast_max_depth
dtype: int64
- name: n_whitespaces
dtype: int64
- name: n_ast_nodes
dtype: int64
- name: n_ast_terminals
dtype: int64
- name: n_ast_nonterminals
dtype: int64
- name: loc
dtype: int64
- name: cycloplexity
dtype: int64
splits:
- name: train
num_bytes: 3094608763
num_examples: 3263408
download_size: 1168831903
dataset_size: 3094608763
task_categories:
- text-generation
tags:
- code
- haskell
size_categories:
- 1M<n<10M
---
# Dataset Card for "github-code-haskell-function"
Rows: 3.26M
Download Size: 1.17GB
This dataset is extracted from [github-code-haskell-file](https://huggingface.co/datasets/blastwind/github-code-haskell-file).
Each row has 3 flavors of the same function:
`uncommented_code`: Includes the function and its closest signature.
`function_only_code`: Includes the function only.
`full_code`: Includes the function and its closest [signature](https://wiki.haskell.org/Type_signature) and comment.
The heuristic for finding the closest signature and comment is as follows: if the function's immediate previous neighbor
is neither a signature nor a comment, `full_code` is just the function. If the previous neighbor is a signature or a comment,
include it appropriately, then apply the same logic to that neighbor's own previous neighbor.
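The neighbor walk above can be sketched in Python; the `(kind, text)` node list is an illustrative stand-in for the dataset's actual parse structure:

```python
# Sketch of the neighbor-walk heuristic described above. Each file is a
# list of (kind, text) nodes in source order; any kind other than
# "signature" or "comment" stops the walk. The representation is
# illustrative, not the dataset's actual AST.
def attach_context(nodes, func_index):
    """Return the `full_code` for the function at `func_index`: the
    function plus any contiguous run of signature and comment nodes
    immediately above it."""
    parts = [nodes[func_index][1]]
    i = func_index - 1
    while i >= 0 and nodes[i][0] in ("signature", "comment"):
        parts.append(nodes[i][1])
        i -= 1
    return "\n".join(reversed(parts))

nodes = [
    ("comment", "-- | Add two numbers."),
    ("signature", "add :: Int -> Int -> Int"),
    ("function", "add x y = x + y"),
]
print(attach_context(nodes, 2))
```

With no signature or comment directly above the function, the walk stops immediately and `full_code` is just the function, matching the first case of the heuristic.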
Further, each row also contains attribute values for my personal analysis project. The attributes are calculated from the code in column `uncommented_code`.
7% (225k) of the rows have cyclomatic complexity and LOC valued at `-1` because [`homplexity`](https://github.com/BlastWind/homplexity) failed to parse the row's `uncommented_code`.
| 2,193 | [
[
-0.0265045166015625,
-0.0280914306640625,
0.04168701171875,
0.00327301025390625,
-0.0282440185546875,
0.021759033203125,
-0.0175018310546875,
-0.01439666748046875,
0.0333251953125,
0.052032470703125,
-0.03192138671875,
-0.056640625,
-0.038543701171875,
0.013... |
brando/debug1_af | 2023-10-20T19:03:38.000Z | [
"license:apache-2.0",
"region:us"
] | brando | null | null | 1 | 124 | 2023-08-09T22:53:07 | ---
license: apache-2.0
---
If you find this please cite it:
```
@software{brando2021ultimateutils,
author={Brando Miranda},
title={Ultimate Utils - the Ultimate Utils library for Machine Learning and Artificial Intelligence},
url={https://github.com/brando90/ultimate-utils},
year={2021}
}
```
It's not supposed to be used by people yet.
It's under the **Apache License 2.0**, too.
Files are
```
Topic # of theorems # Statements Selected (floor)
Polynomial 515 0
Polynomial_Factorial 47 11
``` | 511 | [
[
-0.006168365478515625,
0.0038127899169921875,
0.0347900390625,
0.030120849609375,
-0.020263671875,
0.002574920654296875,
0.00882720947265625,
-0.0240631103515625,
0.0096282958984375,
0.04376220703125,
-0.032318115234375,
-0.0438232421875,
-0.03765869140625,
... |
metooma | 2023-01-25T14:40:24.000Z | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:origi... | null | The dataset consists of tweets belonging to #MeToo movement on Twitter, labelled into different categories.
Due to Twitter's development policies, we only provide the tweet ID's and corresponding labels,
other data can be fetched via Twitter API.
The data has been labelled by experts, with the majority taken into the account for deciding the final label.
We provide these labels for each of the tweets. The labels provided for each data point
includes -- Relevance, Directed Hate, Generalized Hate,
Sarcasm, Allegation, Justification, Refutation, Support, Oppose | @inproceedings{gautam2020metooma,
title={# MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement},
author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn},
booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
volume={14},
pages={209--216},
year={2020} } | 0 | 123 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
paperswithcode_id: metooma
pretty_name: '#MeTooMA dataset'
dataset_info:
features:
- name: TweetId
dtype: string
- name: Text_Only_Informative
dtype:
class_label:
names:
'0': Text Non Informative
'1': Text Informative
- name: Image_Only_Informative
dtype:
class_label:
names:
'0': Image Non Informative
'1': Image Informative
- name: Directed_Hate
dtype:
class_label:
names:
'0': Directed Hate Absent
'1': Directed Hate Present
- name: Generalized_Hate
dtype:
class_label:
names:
'0': Generalized Hate Absent
'1': Generalized Hate Present
- name: Sarcasm
dtype:
class_label:
names:
'0': Sarcasm Absent
'1': Sarcasm Present
- name: Allegation
dtype:
class_label:
names:
'0': Allegation Absent
'1': Allegation Present
- name: Justification
dtype:
class_label:
names:
'0': Justification Absent
'1': Justification Present
- name: Refutation
dtype:
class_label:
names:
'0': Refutation Absent
'1': Refutation Present
- name: Support
dtype:
class_label:
names:
'0': Support Absent
'1': Support Present
- name: Oppose
dtype:
class_label:
names:
'0': Oppose Absent
'1': Oppose Present
splits:
- name: train
num_bytes: 821738
num_examples: 7978
- name: test
num_bytes: 205489
num_examples: 1995
download_size: 408889
dataset_size: 1027227
---
# Dataset Card for #MeTooMA dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
- **Repository:** https://github.com/midas-research/MeTooMA
- **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
- The dataset consists of tweets belonging to #MeToo movement on Twitter, labelled into different categories.
- This dataset includes more data points and has more labels than any of the previous datasets that contain social media
posts about sexual abuse disclosures. Please refer to the Related Datasets section of the publication for detailed information about this.
- Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels,
other data can be fetched via Twitter API.
- The data has been labelled by experts, with the majority taken into the account for deciding the final label.
- The authors provide these labels for each of the tweets.
- Relevance
- Directed Hate
- Generalized Hate
- Sarcasm
- Allegation
- Justification
- Refutation
- Support
- Oppose
- The definitions for each task/label is in the main publication.
- Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data
extracted from this dataset.
- The language of all the tweets in this dataset is English
- Time period: October 2018 - December 2018
- Suggested Use Cases of this dataset:
  - Evaluating the usage of linguistic acts such as hate speech and sarcasm in the context of public sexual abuse disclosures.
- Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.
  - Identifying how influential people were portrayed on public platforms in the
events of mass social movements.
- Polarization analysis based on graph simulations of social nodes of users involved
in the #MeToo movement.
### Supported Tasks and Leaderboards
Multi Label and Multi-Class Classification
### Languages
English
## Dataset Structure
- The dataset is structured into CSV format with TweetID and accompanying labels.
- Train and Test sets are split into respective files.
### Data Instances
Tweet ID and the appropriate labels
### Data Fields
The Tweet ID and the appropriate labels: each label is binary for a given data point, and multiple labels may apply to the same Tweet ID
### Data Splits
- Train: 7979
- Test: 1996
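A minimal sketch of decoding the integer labels back to the names declared in the `class_label` YAML above; the plain-dict `LABEL_NAMES` mapping (a subset of the ten labels) and the `decode` helper stand in for the `datasets` library's `ClassLabel.int2str`, so this runs without downloading anything:

```python
# Decode integer labels using the class_label names from the YAML
# header above. Plain dicts stand in here for the `datasets`
# ClassLabel feature; only three of the ten labels are shown.
LABEL_NAMES = {
    "Sarcasm": ["Sarcasm Absent", "Sarcasm Present"],
    "Allegation": ["Allegation Absent", "Allegation Present"],
    "Support": ["Support Absent", "Support Present"],
}

def decode(example: dict) -> dict:
    return {
        field: LABEL_NAMES[field][value]
        for field, value in example.items()
        if field in LABEL_NAMES
    }

row = {"TweetId": "123", "Sarcasm": 1, "Allegation": 0, "Support": 1}
print(decode(row))
```

Fields outside the mapping, such as `TweetId`, pass through untouched by being skipped, which mirrors how the labels sit alongside the ID column in the CSVs.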
## Dataset Creation
### Curation Rationale
- Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement.
- People expressed their opinions over issues which were previously missing from the social media space.
- This provides an option to study the linguistic behaviours of social media users in an informal setting,
therefore the authors decided to curate this annotated dataset.
- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.
- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.
### Source Data
- Source of all the data points in this dataset is Twitter social media platform.
#### Initial Data Collection and Normalization
- All the tweets were mined from Twitter, with initial search parameters identified using keywords from the #MeToo movement.
- Redundant keywords were removed based on manual inspection.
- Public streaming APIs of Twitter were used for querying with the selected keywords.
- Based on text de-duplication and cosine similarity scores, the set of tweets was pruned.
- Non-English tweets were removed.
- The final set was labelled by experts with the majority label taken into the account for deciding the final label.
- Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
#### Who are the source language producers?
Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
### Annotations
#### Annotation process
- The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature.
- The annotators are domain experts having degrees in advanced clinical psychology and gender studies.
- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.
- They studied the document and worked through a few examples to get used to the annotation task.
- They also provided feedback for improving the class definitions.
- The annotation process is not mutually exclusive, implying that presence of one label does not mean the
absence of the other one.
#### Who are the annotators?
- The annotators are domain experts having a degree in clinical psychology and gender studies.
- Please refer to the accompanying paper for a detailed description of the annotation process.
### Personal and Sensitive Information
- Considering Twitter's policy for the distribution of data, only the Tweet ID and applicable labels are shared for public use.
- It is highly encouraged to use this dataset for scientific purposes only.
- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.
## Considerations for Using the Data
### Social Impact of Dataset
- The authors of this dataset do not intend to conduct a population centric analysis of #MeToo movement on Twitter.
- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention; they
should be used to assist already existing human intervention tools and therapies.
- Enough care has been taken to ensure that this work does not come off as trying to target a specific person for their
personal stance on issues pertaining to the #MeToo movement.
- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.
- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset
and social impact of this work.
### Discussion of Biases
- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of
the community affected by sexual abuse.
- Any work undertaken on this dataset should aim to minimize the bias against minority groups which
might be amplified in cases of sudden outbursts of public reaction over sensitive social media discussions.
### Other Known Limitations
- Considering privacy concerns, social media practitioners should be cautious about making automated interventions
to aid the victims of sexual abuse, as some people might prefer not to disclose such information.
- Concerned social media users might also retract their social information if they find out that their
information is being used for computational purposes; hence it is important to seek individual consent
before trying to profile authors involved in online discussions, to uphold personal privacy.
## Additional Information
Please refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
### Dataset Curators
- If you use the corpus in a product or application, then please credit the authors
and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India
disclaims any responsibility for the use of the corpus and does not provide technical support.
However, the contact listed above will be happy to respond to queries and clarifications
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your social media data.
- if interested in a collaborative research project.
### Licensing Information
[More Information Needed]
### Citation Information
Please cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292
```
@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={<p>In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.</p>}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }
```
### Contributions
Thanks to [@akash418](https://github.com/akash418) for adding this dataset. | 13,208 | [
[
-0.0024700164794921875,
-0.058868408203125,
0.02105712890625,
0.0250091552734375,
-0.01401519775390625,
0.0161285400390625,
-0.005340576171875,
-0.019439697265625,
0.0193939208984375,
0.0293426513671875,
-0.062469482421875,
-0.061737060546875,
-0.051483154296875... |
time_dial | 2022-11-03T16:07:53.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"dialog-ac... | null | TimeDial presents a crowdsourced English challenge set, for temporal commonsense reasoning, formulated
as a multiple choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from
the DailyDialog (Li et al., 2017), which is a multi-turn dialog corpus.
In order to establish strong baselines and provide information on future model development, we
conducted extensive experiments with state-of-the-art LMs. While humans can easily answer these
questions (97.8%), the best T5 model variant struggles on this challenge set (73%). Moreover, our
qualitative error analyses show that the models often rely on shallow, spurious features (particularly text
matching), instead of truly doing reasoning over the context. | @inproceedings{qin-etal-2021-timedial,
title = "{TimeDial: Temporal Commonsense Reasoning in Dialog}",
author = "Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal",
booktitle = "Proc. of ACL",
year = "2021"
} | 4 | 123 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: 'TimeDial: Temporal Commonsense Reasoning in Dialog'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: timedial
tags:
- dialog-act-classification
dataset_info:
features:
- name: id
dtype: int32
- name: conversation
sequence: string
- name: correct1
dtype: string
- name: correct2
dtype: string
- name: incorrect1
dtype: string
- name: incorrect1_rule
dtype: string
- name: incorrect2
dtype: string
- name: incorrect2_rule
dtype: string
splits:
- name: test
num_bytes: 1449879
num_examples: 1446
download_size: 1613806
dataset_size: 1449879
---
# Dataset Card for TimeDial: Temporal Commonsense Reasoning in Dialog
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TimeDial](https://github.com/google-research-datasets/timedial)
- **Paper:** [TimeDial: Temporal Commonsense Reasoning in Dialog](https://arxiv.org/abs/2106.04571)
- **Point of Contact:** [Please create an issue in the official repository](https://github.com/google-research-datasets/timedial)
### Dataset Summary
TimeDial is a crowdsourced English challenge set for temporal commonsense reasoning, formulated as a multiple-choice cloze task over around 1.5k carefully curated dialogs. The dataset is derived from DailyDialog ([Li et al., 2017](https://www.aclweb.org/anthology/I17-1099/)), a multi-turn dialog corpus.
In order to establish strong baselines and inform future model development, the authors conducted extensive experiments with state-of-the-art LMs. While humans can easily answer these questions (97.8%), the best T5 variant struggles on this challenge set (73%). Moreover, the authors' qualitative error analyses show that the models often rely on shallow, spurious features (particularly text matching) instead of truly reasoning over the context.
Detailed experiments and analyses can be found in their [paper](https://arxiv.org/pdf/2106.04571.pdf).
### Supported Tasks and Leaderboards
To be updated soon.
### Languages
The dataset is in English only.
## Dataset Structure
### Data Instances
```
{
"id": 1,
"conversation": [
"A: We need to take the accounts system offline to carry out the upgrade . But don't worry , it won't cause too much inconvenience . We're going to do it over the weekend .",
"B: How long will the system be down for ?",
"A: We'll be taking everything offline in about two hours ' time . It'll be down for a minimum of twelve hours . If everything goes according to plan , it should be up again by 6 pm on Saturday .",
"B: That's fine . We've allowed <MASK> to be on the safe side ."
],
"correct1": "forty-eight hours",
"correct2": "50 hours ",
"incorrect1": "two hours ",
"incorrect1_rule": "Rule 1",
"incorrect2": "12 days ",
"incorrect2_rule": "Rule 2"
}
```
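The instance above can be flattened into a standard four-way multiple-choice format; a minimal sketch (the option ordering and the stripping of trailing spaces are illustrative conventions here, not part of the dataset):

```python
def to_multiple_choice(instance):
    """Flatten a TimeDial instance into (context, options, correctness mask)."""
    context = "\n".join(instance["conversation"])
    options = [
        instance["correct1"].strip(),
        instance["correct2"].strip(),
        instance["incorrect1"].strip(),
        instance["incorrect2"].strip(),
    ]
    # The first two options are acceptable fillers for the <MASK> span.
    is_correct = [True, True, False, False]
    return context, options, is_correct

sample = {
    "conversation": ["B: That's fine . We've allowed <MASK> to be on the safe side ."],
    "correct1": "forty-eight hours",
    "correct2": "50 hours ",
    "incorrect1": "two hours ",
    "incorrect2": "12 days ",
}
context, options, is_correct = to_multiple_choice(sample)
```

One common scoring convention (an assumption here, not necessarily the paper's exact metric) is to count a model as correct only if it ranks both acceptable options above both incorrect ones.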
### Data Fields
- "id": Unique identifier, as an integer
- "conversation": Dialog context with a `<MASK>` span, as a string
- "correct1": Original `<MASK>` span, as a string
- "correct2": Additional correct option provided by annotators, as a string
- "incorrect1": Incorrect option #1 provided by annotators, as a string
- "incorrect1_rule": One of phrase matching ("Rule 1"), numeral matching ("Rule 2"), or open ended ("Rule 3"), as a string
- "incorrect2": Incorrect option #2 provided by annotators, as a string
- "incorrect2_rule": One of phrase matching ("Rule 1"), numeral matching ("Rule 2"), or open ended ("Rule 3"), as a string
### Data Splits
The TimeDial dataset consists of a single test split of 1,446 dialog instances, each with 2 correct and 2 incorrect options, with the following statistics:
| | Avg. |
|-----|-----|
|Turns per Dialog | 11.7 |
|Words per Turn | 16.5 |
|Time Spans per Dialog | 3 |
## Dataset Creation
### Curation Rationale
Although previous work has studied temporal reasoning in natural language, it has either focused on specific time-related concepts in isolation, such as temporal ordering and relation extraction, or dealt with limited context, such as single-sentence question answering and natural language inference.
In this work, they make the first systematic study of temporal commonsense reasoning in a multi-turn dialog setting. The task involves complex reasoning that requires operations like comparison and arithmetic reasoning over temporal expressions and the need for commonsense and world knowledge.
### Source Data
#### Initial Data Collection and Normalization
The TIMEDIAL dataset is derived from DailyDialog data (Li et al., 2017), which is a multi-turn dialog corpus containing over 13K English dialogs. Dialogs in this dataset consist of turn-taking between two people on topics over 10 broad categories, ranging from daily lives to financial topics.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The data collection process involves two steps: (1) identifying dialogs that are rich in temporal expressions, and (2) asking human annotators to provide correct and incorrect options for cloze instances derived from these dialogs. More details about the two steps:
1) Temporal expression identification: Here, they select dialogs that are rich in temporal information, in order to focus on the complex temporal reasoning that arises in natural dialogs. Temporal expressions are automatically identified with SUTime, an off-the-shelf temporal expression detector. They keep only the dialogs with more than 3 temporal expressions and at least one expression that contains numerals like "two weeks" (as opposed to non-numeric spans, like "summer", "right now", and "later"). In an initial experiment, they observed that language models can often correctly predict these non-numerical temporal phrases.
2) Human-annotated options: Next, they mask spans in the dialogs. For each dialog, they mask out every temporal expression that contains numerals, each mask yielding a cloze question that is then sent for human annotation.
This resulted in 1,526 instances for annotation. For each masked span in each dialog, they obtain human annotation to derive a fixed set of correct and incorrect options given the context. Concretely, given a masked dialog and a seed correct answer (i.e., the original text) for the masked span, the annotators were asked to (1) come up with an alternative correct answer that makes sense in the dialog adhering to commonsense, and (2) formulate two incorrect answers that have no possibility of making sense in the dialog context. They highlight all time expressions in the context to make it easier for annotators to select reasonable time expressions.
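The numeric-span masking step can be sketched as follows (the real pipeline uses SUTime; the regex below is a much simpler stand-in that only catches patterns like "two weeks" or "12 hours", and is purely illustrative):

```python
import re

# Simplified stand-in for SUTime: a number (digit or small number word)
# followed by a time unit.
NUMERIC_TEMPORAL = re.compile(
    r"\b(?:\d+|one|two|three|four|five|six|seven|eight|nine|ten|eleven|twelve)"
    r"\s+(?:seconds?|minutes?|hours?|days?|weeks?|months?|years?)\b",
    re.IGNORECASE,
)

def make_cloze_questions(turn: str):
    """Return one (masked_turn, answer_span) pair per numeric temporal span."""
    return [
        (turn[:m.start()] + "<MASK>" + turn[m.end():], m.group(0))
        for m in NUMERIC_TEMPORAL.finditer(turn)
    ]

pairs = make_cloze_questions("It'll be down for a minimum of twelve hours, maybe 2 days.")
```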
#### Who are the annotators?
They are English linguists.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
TimeDial dataset is licensed under CC BY-NC-SA 4.0.
### Citation Information
```
@inproceedings{qin-etal-2021-timedial,
title = "{TimeDial: Temporal Commonsense Reasoning in Dialog}",
author = "Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal",
booktitle = "Proc. of ACL",
year = "2021"
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. | 9,019 | [
…(truncated embedding floats omitted)…] |
DDSC/lcc | 2023-07-20T19:43:29.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"region:us"
] | DDSC | null | null | 3 | 123 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: TwitterSent
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for LCC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository**: https://github.com/fnielsen/lcc-sentiment
- **Direct Download part 1**: https://raw.githubusercontent.com/fnielsen/lcc-sentiment/master/dan_mixed_2014_10K-sentences.csv
- **Direct Download part 2**: https://raw.githubusercontent.com/fnielsen/lcc-sentiment/master/dan_newscrawl_2011_10K-sentences.csv
### Dataset Summary
This dataset consists of Danish data from [the Leipzig Collection](https://www.aclweb.org/anthology/L06-1396/) that has been annotated for sentiment analysis by Finn Årup Nielsen.
### Supported Tasks and Leaderboards
This dataset is suitable for sentiment analysis.
### Languages
This dataset is in Danish.
## Dataset Structure
### Data Instances
Every entry in the dataset has a document and an associated label.
### Data Fields
An entry in the dataset consists of the following fields:
- `text` (`str`): The text content.
- `label` (`str`): The label of the `text`. Can be "positiv", "neutral" or "negativ" for positive, neutral and negative sentiment, respectively.
### Data Splits
Train and test splits are available, with the test split comprising 30% of the dataset, randomly sampled in a stratified fashion. There are 349 documents in the training split and 150 in the test split.
## Additional Information
### Dataset Curators
The collection and annotation of the dataset is solely the work of Finn Årup Nielsen. It was originally annotated with a score between -5 and +5, but the labels in this version have been converted to negative, neutral and positive labels.
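The score-to-label conversion can be sketched as below (the exact thresholds used by the curator are not documented here; mapping zero to neutral is an assumption for illustration):

```python
def score_to_label(score: int) -> str:
    """Map an original [-5, +5] sentiment score to the released Danish labels."""
    if score > 0:
        return "positiv"
    if score < 0:
        return "negativ"
    return "neutral"  # zero-as-neutral is an assumed convention

labels = [score_to_label(s) for s in (-3, 0, 4)]
```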
### Licensing Information
The dataset is released under the CC BY 4.0 license.
### Citation Information
```
@misc{lcc,
title={LCC},
author={Finn Årup Nielsen},
year={2016},
note={\url{https://github.com/fnielsen/lcc-sentiment}}
}
```
### Contributions
Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub. | 2,795 | [
…(truncated embedding floats omitted)…] |
classla/FRENK-hate-hr | 2022-10-21T07:46:28.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:hr",
"license:other",
"hate-speech-detection",
"offensive-language",
"arxiv:1906.02045",
"region:us"
] | classla | The FRENK Datasets of Socially Unacceptable Discourse in Croatian. | @misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
} | 0 | 123 | 2022-03-02T23:29:22 | ---
language:
- hr
license:
- other
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids: []
tags:
- hate-speech-detection
- offensive-language
---
# Offensive language dataset of Croatian comments FRENK 1.0
Croatian subset of the [FRENK dataset](http://hdl.handle.net/11356/1433). Also available on HuggingFace dataset hub: [English subset](https://huggingface.co/datasets/5roop/FRENK-hate-en), [Slovenian subset](https://huggingface.co/datasets/5roop/FRENK-hate-sl).
## Dataset Description
- **Homepage:** http://hdl.handle.net/11356/1433
- **Repository:** http://hdl.handle.net/11356/1433
- **Paper:** https://arxiv.org/abs/1906.02045
- **Project page** https://nl.ijs.si/frenk/
## Description of the original dataset
>The original FRENK dataset consists of comments to Facebook posts (news articles) of mainstream media outlets from Croatia, Great Britain, and Slovenia, on the topics of migrants and LGBT. The dataset contains whole discussion threads. Each comment is annotated by the type of socially unacceptable discourse (e.g., inappropriate, offensive, violent speech) and its target (e.g., migrants/LGBT, commenters, media). The annotation schema is described in detail in [https://arxiv.org/pdf/1906.02045.pdf]. Usernames in the metadata are pseudo-anonymised and removed from the comments.
>
>The data in each language (Croatian (hr), English (en), Slovenian (sl)) and topic (migrants, LGBT) is divided into a training and a testing portion. The training and testing data consist of separate discussion threads, i.e., there is no cross-discussion-thread contamination between training and testing data. The sizes of the splits are the following: Croatian, migrants: 4356 training comments, 978 testing comments; Croatian, LGBT: 4494 training comments, 1142 testing comments; English, migrants: 4540 training comments, 1285 testing comments; English, LGBT: 4819 training comments, 1017 testing comments; Slovenian, migrants: 5145 training comments, 1277 testing comments; Slovenian, LGBT: 2842 training comments, 900 testing comments.
For this dataset only the Croatian data was used. The training segment has been split into its first 90% (published here as the training split) and its final 10% (published here as the dev split). The test segment has been preserved in its original form.
## Usage in `Transformers`
```python
import datasets
ds = datasets.load_dataset("classla/FRENK-hate-hr","binary")
```
For binary classification the following encoding is used:
```python
_CLASS_MAP_BINARY = {
'Acceptable': 0,
'Offensive': 1,
}
```
The original labels are available if the dataset is loaded with the `multiclass` option:
```python
import datasets
ds = datasets.load_dataset("classla/FRENK-hate-hr","multiclass")
```
In this case the encoding used is:
```python
_CLASS_MAP_MULTICLASS = {
'Acceptable speech': 0,
'Inappropriate': 1,
'Background offensive': 2,
'Other offensive': 3,
'Background violence': 4,
'Other violence': 5,
}
```
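The relationship between the two label schemes can be made explicit: the binary "Offensive" class collapses every multiclass label except "Acceptable speech". A minimal sketch (the collapsing rule is inferred from the two maps above):

```python
_CLASS_MAP_MULTICLASS = {
    'Acceptable speech': 0,
    'Inappropriate': 1,
    'Background offensive': 2,
    'Other offensive': 3,
    'Background violence': 4,
    'Other violence': 5,
}

def to_binary(multiclass_id: int) -> int:
    """0 = Acceptable, 1 = Offensive (any unacceptable multiclass label)."""
    return 0 if multiclass_id == _CLASS_MAP_MULTICLASS['Acceptable speech'] else 1

binary_ids = [to_binary(v) for v in _CLASS_MAP_MULTICLASS.values()]
```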
## Data structure
* `text`: text
* `target`: who is the target of the hate-speech text ("no target", "commenter", "target" (migrants or LGBT, depending on the topic), or "related to" (again, the topic))
* `topic`: whether the text relates to lgbt or migrants hate-speech domains
* `label`: label of the text instance, see above.
## Data instance
```
{'text': 'Potpisujem komentar g ankice pavicic',
'target': 'No target',
'topic': 'lgbt',
'label': 0}
```
## Licensing information
CLARIN.SI Licence ACA ID-BY-NC-INF-NORED 1.0
## Citation information
When using this dataset please cite the following paper:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
The original dataset can be cited as
```
@misc{11356/1433,
title = {Offensive language dataset of Croatian, English and Slovenian comments {FRENK} 1.0},
author = {Ljube{\v s}i{\'c}, Nikola and Fi{\v s}er, Darja and Erjavec, Toma{\v z}},
url = {http://hdl.handle.net/11356/1433},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {{CLARIN}.{SI} Licence {ACA} {ID}-{BY}-{NC}-{INF}-{NORED} 1.0},
year = {2021} }
``` | 4,407 | [
…(truncated embedding floats omitted)…] |
flax-community/swahili-safi | 2021-07-18T12:48:55.000Z | [
"region:us"
] | flax-community | Cleaned dataset for Swahili Language Modeling | @InProceedings{huggingface:flax-community,
title = Cleaned dataset for Swahili Language Modeling,
authors={Fitsum, Alok, Patrick},
year={2021},
link = https://huggingface.co/datasets/flax-community/swahili-safi
} | 3 | 123 | 2022-03-02T23:29:22 | # Swahili-Safi Dataset
A relatively clean dataset for Swahili language modeling, built by combining and cleaning several existing datasets.
Sources include:
```
mc4-sw
oscar-sw
swahili_news
IWSLT
XNLI
flores 101
swahili-lm
gamayun-swahili-minikit
broadcastnews-sw
subset of wikipedia-en translated (using m2m100) to sw
```
In total this dataset is ~3.5 GB in size with over 21 million lines of text.
## Usage
This dataset can be downloaded and used as follows:
```python
from datasets import load_dataset
ds = load_dataset("flax-community/swahili-safi")
```
| 564 | [
[
-0.038787841796875,
-0.044830322265625,
-0.0158233642578125,
0.00691986083984375,
-0.01107025146484375,
-0.027130126953125,
-0.0276336669921875,
-0.03851318359375,
0.01010894775390625,
0.0567626953125,
-0.048126220703125,
-0.009429931640625,
-0.0302886962890625,... |
gsarti/change_it | 2022-10-27T08:37:09.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:it",
"license:cc-by-nc-sa-4.0",
"conditional-text-generation",
"sty... | gsarti | The CHANGE-IT dataset contains approximately 152,000 article-headline pairs, collected from two Italian
newspapers situated at opposite ends of the political spectrum, namely la Repubblica (left) and
Il Giornale (right), with the two newspapers equally represented. The dataset has been used in the context
of the CHANGE-IT task (https://sites.google.com/view/change-it) during the Evalita 2020 evaluation campaign
(http://www.evalita.it/2020). CHANGE-IT is a generation task for Italian – more specifically, a style transfer
task for headlines of Italian newspapers. Given a (collection of) headlines from one newspaper, namely
Il Giornale (G) or La Repubblica (R), it challenges automatic systems to change all G-headlines to headlines in
style R, and all R-headlines to headlines in style G. Although the task only concerns headline change, the dataset
comprehends both the headlines as well as their respective full articles. | @inproceedings{demattei-etal-2020-changeit,
author = {De Mattei, Lorenzo and Cafagna, Michele and Dell'Orletta, Felice and Nissim, Malvina and Gatt, Albert},
title = {{CHANGE-IT @ EVALITA 2020}: Change Headlines, Adapt News, GEnerate},
booktitle = {Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020)},
editor = {Basile, Valerio and Croce, Danilo and Di Maro, Maria, and Passaro, Lucia C.},
publisher = {CEUR.org},
year = {2020},
address = {Online}
} | 1 | 123 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- it
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
- text-generation
task_ids: []
pretty_name: change-it
tags:
- conditional-text-generation
- style-transfer
---
# Dataset Card for CHANGE-IT
## Table of Contents
- [Dataset Card for CHANGE-IT](#dataset-card-for-change-it)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Style Transfer](#style-transfer)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://live.european-language-grid.eu/catalogue/corpus/7373](https://live.european-language-grid.eu/catalogue/corpus/7373)
- **Repository:** [Github](https://github.com/michelecafagna26/CHANGE-IT)
- **Paper:** [CEUR-ws.org](http://ceur-ws.org/Vol-2765/paper169.pdf)
- **Video** [Vimeo](https://vimeo.com/484098874)
- **Point of Contact:** [Lorenzo De Mattei](lorenzo.demattei@gmail.com)
- **Size of downloaded dataset files:** 168.7 MB
- **Size of the generated dataset:** 411 MB
- **Total amount of disk used:** 579.7 MB
### Dataset Summary
The CHANGE-IT dataset contains approximately 152,000 article-headline pairs, collected from two Italian newspapers situated at opposite ends of the political spectrum, namely la Repubblica (left) and Il Giornale (right), with the two newspapers equally represented. The dataset has been used in the context
of the [CHANGE-IT task](https://sites.google.com/view/change-it) during the [Evalita 2020 evaluation campaign](http://www.evalita.it/2020). CHANGE-IT is a generation task for Italian – more specifically, a style transfer task for headlines of Italian newspapers. Given a (collection of) headlines from one newspaper, namely Il Giornale (G) or La Repubblica (R), it challenges automatic systems to change all G-headlines to headlines in style R, and all R-headlines to headlines in style G. Although the task only concerns headline change, the dataset includes both the headlines and their respective full articles.
**Disclaimer**: *The CHANGE-IT dataset is hosted by the [European Language Grid](https://live.european-language-grid.eu/) and licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). To use the dataset using* 🤗 *Datasets, download and unzip the folder from its [ELG page](https://live.european-language-grid.eu/catalogue/corpus/7373) and pass it to the* `load_dataset` *method as:* `datasets.load_dataset('gsarti/change_it', data_dir='path/to/unzipped/folder')`
### Supported Tasks and Leaderboards
#### Style Transfer
The following table is taken from Table 4 of the original paper, where a *pointer-network* architecture is used as a baseline to perform style transfer in two settings. In the **rep2gio** variant the system is trained to summarize Repubblica headlines from full texts (vice versa for **gio2rep**), and the style transfer is performed by summarizing full texts of the other newspaper in the source newspaper's headline style. **avg** is the average of the two settings.
| | HH| AH|Main|Compliancy|
|--------:|---:|---:|---:|---------:|
|`rep2gio`|.649|.876|.799| .449|
|`gio2rep`|.639|.871|.435| .240|
| `avg`|.644|.874|.616| .345|
Here **Main**, **HH** and **AH** are all BERT-base models trained to evaluate the quality of style transfer as follows:
- **Main**: the model is trained to classify a generated headline either as `ilgiornale` or `repubblica`, achieving ~80% F1 score on gold data. Tests whether the transfer has been successful.
- **Headline-Headline (HH)**: the model is trained to check the compatibility between original and generated headlines. Tests whether the generation is coherent with the reference.
- **Article-Headline (AH)**: the model is trained to check the compatibility between original fulltext article and generated headlines. Tests whether the generation is coherent with the source article.
The final metric, **Overall compliancy**, is a binary metric that is positive if the other three metrics match (**Main** decision is reversed, **HH** and **AH** predict match), and negative otherwise. Refer to Section 3 of the original paper for more details.
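The compliancy decision can be sketched as a pure function of the three classifier outcomes (reading "Main decision is reversed" as: the Main classifier labels the generated headline with the *target* newspaper's style; this interpretation is an assumption — see Section 3 of the paper for the authoritative definition):

```python
def is_compliant(main_predicts_target_style: bool,
                 hh_match: bool,
                 ah_match: bool) -> bool:
    """A transferred headline is compliant only when all three checks pass."""
    return main_predicts_target_style and hh_match and ah_match

# Hypothetical outcomes for three generated headlines.
outcomes = [(True, True, True), (True, False, True), (False, True, True)]
compliancy_rate = sum(is_compliant(*o) for o in outcomes) / len(outcomes)
```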
### Languages
The language data in CHANGE-IT is in Italian (BCP-47 `it`)
## Dataset Structure
### Data Instances
A sample from the `test` split of the `ilgiornale` config is provided below. The other configuration, `repubblica`, has the same structure.
```json
{
"id": 0,
"headline": "Ucraina, coalizione della Timoshenko denuncia irruzione nella sede",
"full_text": "Rimane alta la tensione in Ucraina , dove da giorni i manifestanti scendono in piazza per protestare contro la decisione del presidente Viktor Yanukovich, che ha deciso di congelare l'accordo di associazione con l'Unione Europea. Il momento è molto delicato. L'opposizione teme una repressione violenza della protesta, con le forze speciali che hanno costretto i manifestanti a Kiev ad allontanarsi dalla sede del governo, per ripiegare su piazza Indipendenza. Il leader d'opposizione Vitaly Klitschko ha invitato il presidente a non utilizzare la forza, se non vuole avere il sangue dei manifestanti sulle sue mani. Nel frattempo il presidente Yanukovich ha aperto alla possibilità di un dialogo, annunciando per domani un incontro con i suoi due predecessori, Leonid Kuchma e Viktor Yushchenko. Ieri un milioni di persone sono scese in piazza, scaduti i due giorni di ultimatum dati al governo per indire nuove elezioni, I manifestanti hanno rovesciato la grande statua di Lenin posta sul boulevard Shevchenko. Piazza Indipendenza (Maidan Nezalezhnosti) resta il punto più caldo della capitale. Qui sono state erette barricate davanti agli ingressi della metropolitana, nel tentativo di preparsi a un'azione della polizia, che al momento non ha però preso iniziative contro i dimostranti. In serata Batkivshcyna, la coalizione dell'ex premier Yulia Timoshenko , ha denunciato l'irruzione di almeno venti agenti della polizia antisommossa nel proprio quartier generale. Il portavoce della polizia, Olga Bilyk, ha smentito: \"Né la polizia di Kiev, né la Berkut - ha dichiarato - hanno condotto operazioni nella sede\".",
"alignment": "A2"
}
```
The text is provided as-is, without further preprocessing or tokenization.
### Data Fields
- `headline`: The original headline for the newspaper.
- `full_text`: The article full text associated to the respective headline.
- `alignment`: The alignment value used for the style transfer experiments. Values:
- `A1`: Top 5K pairs, highly aligned.
- `A2`: Test set, highly aligned.
- `A3`: 10K to 20K pairs, fairly aligned.
- `R`: Bottom ~50K pairs, weakly/not aligned.
### Data Splits
| config| train| test|
|---------:|-------------------------------------:|-----------:|
|`ilgiornale`|5'000 (A1) + 10'000 (A3) + 48'701 (R) | 5'000 (A2) |
|`repubblica`|5'000 (A1) + 10'000 (A3) + 48'701 (R) | 5'000 (A2) |
### Dataset Creation
Please refer to the original article [CHANGE-IT @ EVALITA 2020: Change Headlines, Adapt News, GEnerate](http://ceur-ws.org/Vol-2765/paper169.pdf) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The organizers of the CHANGE-IT shared tasks are the curators of the original dataset. For problems or updates on the 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
Licensed with Creative Commons Attribution Non Commercial Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```
@inproceedings{demattei-etal-2020-changeit,
author = {De Mattei, Lorenzo and Cafagna, Michele and Dell'Orletta, Felice and Nissim, Malvina and Gatt, Albert},
title = {{CHANGE-IT @ EVALITA 2020}: Change Headlines, Adapt News, GEnerate},
booktitle = {Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020)},
editor = {Basile, Valerio and Croce, Danilo and Di Maro, Maria, and Passaro, Lucia C.},
publisher = {CEUR.org},
year = {2020},
address = {Online}
}
| 9,006 | [
…(truncated embedding floats omitted)…] |
teticio/audio-diffusion-256 | 2022-11-09T10:49:48.000Z | [
"task_categories:image-to-image",
"size_categories:10K<n<100K",
"audio",
"spectrograms",
"region:us"
] | teticio | null | null | 3 | 123 | 2022-08-25T17:32:42 | ---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: Mel spectrograms of music
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- audio
- spectrograms
task_categories:
- image-to-image
task_ids: []
---
Over 20,000 256x256 mel spectrograms of 5 second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
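As a quick sanity check on the geometry implied by the parameters listed below (a back-of-the-envelope sketch; the exact framing and padding in the repository may differ slightly):

```python
# Parameters as listed in this card.
x_res = 256          # spectrogram width in STFT frames
sample_rate = 22050  # Hz
hop_length = 512     # samples between successive frames

samples_per_image = hop_length * x_res             # 131072 samples
seconds_per_image = samples_per_image / sample_rate
# seconds_per_image is roughly 5.94 s: each 256x256 image spans slightly
# more than the nominal five-second clips, framing details aside.
```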
```
x_res = 256
y_res = 256
sample_rate = 22050
n_fft = 2048
hop_length = 512
``` | 660 | [
…(truncated embedding floats omitted)…] |
qanastek/HoC | 2022-11-01T15:03:11.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | qanastek | The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed
publication abstracts manually annotated by experts according
to a taxonomy. The taxonomy consists of 37 classes in a
hierarchy. Zero or more class labels are assigned to each
sentence in the corpus. The labels are found under the "labels"
directory, while the tokenized text can be found under "text"
directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the
[Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/)
which classifies all of PubMed according to the HoC taxonomy. | @article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
@article{baker2017cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
} | 1 | 123 | 2022-11-01T10:49:52 | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: HoC
language_bcp47:
- en-US
---
# HoC : Hallmarks of Cancer Corpus
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://s-baker.net/resource/hoc/
- **Repository:** https://github.com/sb895/Hallmarks-of-Cancer
- **Paper:** https://academic.oup.com/bioinformatics/article/32/3/432/1743783
- **Leaderboard:** https://paperswithcode.com/dataset/hoc-1
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the "labels" directory, while the tokenized text can be found under the "text" directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the [Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/) which classifies all of PubMed according to the HoC taxonomy.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `multi-class-classification`.
### Languages
The corpus consists of PubMed articles in English only:
- `English - United States (en-US)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/HoC")
validation = dataset["validation"]
print("First element of the validation set : ", validation[0])
```
## Dataset Structure
### Data Instances
```json
{
"document_id": "12634122_5",
"text": "Genes that were overexpressed in OM3 included oncogenes , cell cycle regulators , and those involved in signal transduction , whereas genes for DNA repair enzymes and inhibitors of transformation and metastasis were suppressed .",
"label": [9, 5, 0, 6]
}
```
### Data Fields
`document_id`: Unique identifier of the document.
`text`: Raw text of the PubMed abstracts.
`label`: A list of integer labels, each identifying one of the 10 hallmarks of cancer listed below (a sentence can carry zero or more labels).
| Hallmark | Search term |
|:-------------------------------------------:|:-------------------------------------------:|
| 1. Sustaining proliferative signaling (PS) | Proliferation Receptor Cancer |
| | 'Growth factor' Cancer |
| | 'Cell cycle' Cancer |
| 2. Evading growth suppressors (GS) | 'Cell cycle' Cancer |
| | 'Contact inhibition' |
| 3. Resisting cell death (CD) | Apoptosis Cancer |
| | Necrosis Cancer |
| | Autophagy Cancer |
| 4. Enabling replicative immortality (RI) | Senescence Cancer |
| | Immortalization Cancer |
| 5. Inducing angiogenesis (A) | Angiogenesis Cancer |
| | 'Angiogenic factor' |
| 6. Activating invasion & metastasis (IM) | Metastasis Invasion Cancer |
| 7. Genome instability & mutation (GI) | Mutation Cancer |
| | 'DNA repair' Cancer |
| | Adducts Cancer |
| | 'Strand breaks' Cancer |
| | 'DNA damage' Cancer |
| 8. Tumor-promoting inflammation (TPI) | Inflammation Cancer |
| | 'Oxidative stress' Cancer |
| | Inflammation 'Immune response' Cancer |
| 9. Deregulating cellular energetics (CE) | Glycolysis Cancer; 'Warburg effect' Cancer |
| 10. Avoiding immune destruction (ID) | 'Immune system' Cancer |
| | Immunosuppression Cancer |
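The integer labels in each instance can be decoded against the hallmark list above. A minimal sketch — note that the 0-indexed ordering below is an assumption based on the table; the authoritative integer-to-name mapping should be read from the dataset's own features (e.g. `dataset.features`):

```python
# Illustrative decoding of HoC integer labels to hallmark abbreviations.
# ASSUMPTION: labels follow the 0-indexed order of the table above;
# verify against the dataset's features before relying on this mapping.
HALLMARKS = ["PS", "GS", "CD", "RI", "A", "IM", "GI", "TPI", "CE", "ID"]

def decode_labels(label_ids):
    """Map a list of integer label ids to hallmark abbreviations."""
    return [HALLMARKS[i] for i in label_ids]

# The data instance shown earlier carries labels [9, 5, 0, 6]:
print(decode_labels([9, 5, 0, 6]))  # ['ID', 'IM', 'PS', 'GI']
```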
### Data Splits
Distribution of data for the 10 hallmarks:
| **Hallmark** | **No. abstracts** | **No. sentences** |
|:------------:|:-----------------:|:-----------------:|
| 1. PS | 462 | 993 |
| 2. GS | 242 | 468 |
| 3. CD | 430 | 883 |
| 4. RI | 115 | 295 |
| 5. A | 143 | 357 |
| 6. IM | 291 | 667 |
| 7. GI | 333 | 771 |
| 8. TPI | 194 | 437 |
| 9. CE | 105 | 213 |
| 10. ID | 108 | 226 |
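Because an abstract (and a sentence) can carry several hallmark labels, the columns above sum to more than the 1852 abstracts in the corpus — a quick check:

```python
# Per-hallmark counts copied from the table above, hallmarks 1-10 in order.
abstracts = [462, 242, 430, 115, 143, 291, 333, 194, 105, 108]
sentences = [993, 468, 883, 295, 357, 667, 771, 437, 213, 226]

# 2423 (abstract, hallmark) pairs across only 1852 abstracts: the corpus
# is multi-label, so the per-hallmark columns are not disjoint.
print(sum(abstracts))  # 2423
print(sum(sentences))  # 5310
```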
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Simon Baker, Ilona Silins, Yufan Guo, Imran Ali, Johan Högberg, Ulla Stenius, and Anna Korhonen.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__HoC__: Simon Baker, Ilona Silins, Yufan Guo, Imran Ali, Johan Högberg, Ulla Stenius, and Anna Korhonen
__Hugging Face__: Yanis Labrak (not affiliated with the original corpus)
### Licensing Information
```plain
GNU General Public License v3.0
```
```plain
Permissions
- Commercial use
- Modification
- Distribution
- Patent use
- Private use
Limitations
- Liability
- Warranty
Conditions
- License and copyright notice
- State changes
- Disclose source
- Same license
```
### Citation Information
We would very much appreciate it if you cite our publications:
[Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783)
```bibtex
@article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
```
[Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer](https://www.repository.cam.ac.uk/bitstream/handle/1810/265268/btx454.pdf?sequence=8&isAllowed=y)
```bibtex
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
```
[Cancer hallmark text classification using convolutional neural networks](https://www.repository.cam.ac.uk/bitstream/handle/1810/270037/BIOTXTM2016.pdf?sequence=1&isAllowed=y)
```bibtex
@article{baker2016cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
```
[Initializing neural networks for hierarchical multi-label text classification](http://www.aclweb.org/anthology/W17-2339)
```bibtex
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
}
```
| 9,765 | [
[
-0.01776123046875,
-0.0279388427734375,
0.02362060546875,
0.006282806396484375,
-0.01554107666015625,
0.007762908935546875,
-0.0144195556640625,
-0.0213623046875,
0.041046142578125,
0.045806884765625,
-0.039581298828125,
-0.08258056640625,
-0.05267333984375,
... |
nbtpj/DUC2004 | 2023-01-09T10:56:59.000Z | [
"region:us"
] | nbtpj | null | null | 0 | 123 | 2023-01-09T10:47:36 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
alzoubi36/opp_115 | 2023-06-24T07:08:08.000Z | [
"region:us"
] | alzoubi36 | null | null | 0 | 123 | 2023-06-24T06:55:43 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
sequence: int64
splits:
- name: train
num_bytes: 1047118
num_examples: 2185
- name: validation
num_bytes: 270827
num_examples: 550
- name: test
num_bytes: 316635
num_examples: 697
download_size: 811600
dataset_size: 1634580
---
# Dataset for the OPP-115 task in the [PrivacyGLUE](https://github.com/infsys-lab/privacy-glue) dataset
| 451 | [
[
-0.01464080810546875,
-0.0276641845703125,
0.00836181640625,
-0.014373779296875,
0.0142822265625,
0.006622314453125,
0.0097503662109375,
-0.00598907470703125,
0.03033447265625,
0.04974365234375,
-0.05218505859375,
-0.057586669921875,
-0.0099334716796875,
-0.... |
euclaise/mqa | 2023-10-20T17:13:22.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"region:us"
] | euclaise | null | null | 0 | 123 | 2023-08-31T17:15:10 | ---
size_categories:
- 10K<n<100K
task_categories:
- question-answering
pretty_name: MultiQA
dataset_info:
features:
- name: msg
dtype: string
- name: resp_correct
dtype: string
- name: resp_incorrect
sequence: string
splits:
- name: train
num_bytes: 20624051.02310231
num_examples: 23408
download_size: 18672769
dataset_size: 20624051.02310231
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# MQA
Aggregation of datasets as per [here](https://huggingface.co/collections/euclaise/mqa-650f41afae507a2c7ca18b55)
I reserve no rights to the dataset, but the original datasets were made available under various public licenses. Hence, consider each subset of this dataset to be licensed under the terms of the original dataset it comes from.
[
-0.03363037109375,
-0.0149383544921875,
0.0343017578125,
0.02020263671875,
-0.016937255859375,
-0.00341033935546875,
0.027313232421875,
-0.016204833984375,
0.038726806640625,
0.0694580078125,
-0.040863037109375,
-0.021240234375,
-0.04852294921875,
0.00678634... |
vlsp-2023-vllm/arithmetic_vi | 2023-09-19T03:54:17.000Z | [
"arxiv:2005.14165",
"region:us"
] | vlsp-2023-vllm | null | null | 0 | 123 | 2023-09-10T17:55:16 | ---
dataset_info:
features:
- name: context
dtype: string
- name: completion
dtype: string
- name: meta
dtype: string
splits:
- name: test
num_bytes: 1729595
num_examples: 26000
download_size: 515170
dataset_size: 1729595
---
# Arithmetic (OpenAI)
Source: https://github.com/openai/gpt-3
Vietnamese version of Arithmetic.
## Citation Information
```
@article{brown2020language,
title={Language Models are Few-Shot Learners},
author={Tom B. Brown and Benjamin Mann and Nick Ryder and Melanie Subbiah and Jared Kaplan and Prafulla Dhariwal and Arvind Neelakantan and Pranav Shyam and Girish Sastry and Amanda Askell and Sandhini Agarwal and Ariel Herbert-Voss and Gretchen Krueger and Tom Henighan and Rewon Child and Aditya Ramesh and Daniel M. Ziegler and Jeffrey Wu and Clemens Winter and Christopher Hesse and Mark Chen and Eric Sigler and Mateusz Litwin and Scott Gray and Benjamin Chess and Jack Clark and Christopher Berner and Sam McCandlish and Alec Radford and Ilya Sutskever and Dario Amodei},
year={2020},
eprint={2005.14165},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 1,148 | [
[
-0.0018024444580078125,
-0.058624267578125,
0.04876708984375,
0.01103973388671875,
-0.016693115234375,
-0.038360595703125,
-0.0035037994384765625,
-0.01519775390625,
-0.00702667236328125,
0.01432037353515625,
-0.01032257080078125,
-0.03448486328125,
-0.041046142... |
liyucheng/ceval_all | 2023-09-29T10:07:50.000Z | [
"region:us"
] | liyucheng | null | null | 0 | 123 | 2023-09-29T10:04:27 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer
dtype: string
- name: explanation
dtype: string
splits:
- name: val
num_bytes: 406528
num_examples: 1346
- name: test
num_bytes: 3720917
num_examples: 12342
- name: dev
num_bytes: 172688
num_examples: 260
download_size: 2792076
dataset_size: 4300133
---
# Dataset Card for "ceval_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 698 | [
[
-0.037689208984375,
-0.0215301513671875,
0.029083251953125,
0.011688232421875,
-0.0174560546875,
-0.01326751708984375,
0.0117950439453125,
-0.00998687744140625,
0.06475830078125,
0.041656494140625,
-0.042266845703125,
-0.07159423828125,
-0.042755126953125,
-... |
JasiekKaczmarczyk/giant-midi-sustain-masked | 2023-10-02T10:49:22.000Z | [
"region:us"
] | JasiekKaczmarczyk | null | null | 0 | 123 | 2023-10-02T09:46:21 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: source
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart
sequence: float32
length: 128
- name: duration
sequence: float32
length: 128
- name: velocity
sequence: int16
length: 128
- name: masking_spaces
struct:
- name: <Random Mask>
sequence: bool
length: 128
- name: <LH Mask>
sequence: bool
length: 128
- name: <RH Mask>
sequence: bool
length: 128
- name: <Harmonic Root Mask>
sequence: bool
length: 128
- name: <Harmonic Outliers Mask>
sequence: bool
length: 128
splits:
- name: train
num_bytes: 453725935
num_examples: 239612
- name: validation
num_bytes: 55936260
num_examples: 29544
- name: test
num_bytes: 52710054
num_examples: 27844
download_size: 211201981
dataset_size: 562372249
---
# Dataset Card for "giant-midi-sustain-masked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,142 | [
[
-0.04925537109375,
-0.02142333984375,
0.0169677734375,
0.0283050537109375,
-0.01503753662109375,
0.020660400390625,
0.003997802734375,
-0.017486572265625,
0.08050537109375,
0.042938232421875,
-0.067138671875,
-0.04669189453125,
-0.036773681640625,
-0.0238800... |
emi429/humansleepproject-rr-small | 2023-10-11T20:00:33.000Z | [
"region:us"
] | emi429 | null | null | 0 | 123 | 2023-10-11T19:09:28 | ---
dataset_info:
features:
- name: rr_intervals
sequence: float64
- name: sleep_stage
dtype: string
splits:
- name: train
num_bytes: 131445053
num_examples: 56208
download_size: 21938826
dataset_size: 131445053
---
# Dataset Card for "humansleepproject-rr-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 426 | [
[
-0.034759521484375,
-0.0065155029296875,
0.01229095458984375,
0.016387939453125,
-0.01204681396484375,
0.0012197494506835938,
0.00847625732421875,
-0.019439697265625,
0.06951904296875,
0.026519775390625,
-0.06390380859375,
-0.038421630859375,
-0.0263824462890625... |
cryptonite | 2023-06-01T14:59:47.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",... | null | Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language
Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite,
a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each
example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving
requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a
challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite
is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on
par with the accuracy of a rule-based clue solver (8.6%). | @misc{efrat2021cryptonite,
title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language},
author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},
year={2021},
eprint={2103.01242},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 2 | 122 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: null
pretty_name: Cryptonite
dataset_info:
- config_name: default
features:
- name: agent_info
sequence:
- name: Bottomline
dtype: string
- name: Role
dtype: string
- name: Target
dtype: float32
- name: agent_turn
sequence: int32
- name: dialogue_acts
sequence:
- name: intent
dtype: string
- name: price
dtype: float32
- name: utterance
sequence: string
- name: items
sequence:
- name: Category
dtype: string
- name: Images
dtype: string
- name: Price
dtype: float32
- name: Description
dtype: string
- name: Title
dtype: string
splits:
- name: train
num_bytes: 8538836
num_examples: 5247
- name: test
num_bytes: 1353933
num_examples: 838
- name: validation
num_bytes: 966032
num_examples: 597
download_size: 25373618
dataset_size: 10858801
- config_name: cryptonite
features:
- name: clue
dtype: string
- name: answer
dtype: string
- name: enumeration
dtype: string
- name: publisher
dtype: string
- name: date
dtype: int64
- name: quick
dtype: bool
- name: id
dtype: string
splits:
- name: train
num_bytes: 52228597
num_examples: 470804
- name: validation
num_bytes: 2901768
num_examples: 26156
- name: test
num_bytes: 2908275
num_examples: 26157
download_size: 21615952
dataset_size: 58038640
config_names:
- cryptonite
- default
---
# Dataset Card for Cryptonite
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/aviaefrat/cryptonite)
- **Repository:** [Github](https://github.com/aviaefrat/cryptonite)
- **Paper:** [Arxiv](https://arxiv.org/pdf/2103.01242.pdf)
- **Leaderboard:**
- **Point of Contact:** [Twitter](https://twitter.com/AviaEfrat)
### Dataset Summary
Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).
### Languages
English
## Dataset Structure
### Data Instances
This is one example from the train set.
```python
{
'clue': 'make progress socially in stated region (5)',
'answer': 'climb',
'date': 971654400000,
'enumeration': '(5)',
'id': 'Times-31523-6across',
'publisher': 'Times',
'quick': False
}
```
### Data Fields
- `clue`: a string representing the clue provided for the crossword
- `answer`: a string representing the answer to the clue
- `enumeration`: a string giving the letter pattern of the answer, e.g. `(5)` for a single five-letter word
- `publisher`: a string representing the publisher of the crossword
- `date`: an int64 representing the publication date of the crossword as a UNIX timestamp in milliseconds
- `quick`: a bool representing whether the crossword is quick (a crossword aimed at beginners, easier to solve)
- `id`: a string to uniquely identify a given example in the dataset
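The `enumeration` field can be checked against the `answer`, as in the instance above where `climb` fits `(5)`. A minimal sketch (not an official helper; it assumes multi-word enumerations list per-word letter counts):

```python
import re

def matches_enumeration(answer: str, enumeration: str) -> bool:
    """Check that an answer fits its enumeration, e.g. 'climb' fits '(5)'.

    Multi-word answers are split on whitespace/hyphens and each word's
    length is compared to the numbers in the enumeration string.
    """
    lengths = [int(n) for n in re.findall(r"\d+", enumeration)]
    words = re.split(r"[\s-]+", answer.strip())
    return [len(w) for w in words] == lengths

print(matches_enumeration("climb", "(5)"))  # True
```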
### Data Splits
Train (470,804 examples), validation (26,156 examples), test (26,157 examples).
## Dataset Creation
### Curation Rationale
Crosswords from the Times and the Telegraph.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy
### Licensing Information
`cc-by-nc-4.0`
### Citation Information
```
@misc{efrat2021cryptonite,
title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language},
author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},
year={2021},
eprint={2103.01242},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@theo-m](https://github.com/theo-m) for adding this dataset. | 6,123 | [
[
-0.01363372802734375,
-0.038299560546875,
0.0190582275390625,
0.020172119140625,
-0.040374755859375,
0.012664794921875,
-0.032684326171875,
-0.047576904296875,
0.043426513671875,
0.022796630859375,
-0.056304931640625,
-0.07867431640625,
-0.059783935546875,
0... |
re_dial | 2022-11-18T21:41:23.000Z | [
"task_categories:other",
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
... | null | ReDial (Recommendation Dialogues) is an annotated dataset of dialogues, where users
recommend movies to each other. The dataset was collected by a team of researchers working at
Polytechnique Montréal, MILA – Quebec AI Institute, Microsoft Research Montréal, HEC Montreal, and Element AI.
The dataset allows research at the intersection of goal-directed dialogue systems
(such as restaurant recommendation) and free-form (also called “chit-chat”) dialogue systems. | @inproceedings{li2018conversational,
title={Towards Deep Conversational Recommendations},
author={Li, Raymond and Kahou, Samira Ebrahimi and Schulz, Hannes and Michalski, Vincent and Charlin, Laurent and Pal, Chris},
booktitle={Advances in Neural Information Processing Systems 31 (NIPS 2018)},
year={2018}
} | 0 | 122 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: redial
pretty_name: ReDial (Recommendation Dialogues)
tags:
- dialogue-sentiment-classification
dataset_info:
features:
- name: movieMentions
list:
- name: movieId
dtype: string
- name: movieName
dtype: string
- name: respondentQuestions
list:
- name: movieId
dtype: string
- name: suggested
dtype: int32
- name: seen
dtype: int32
- name: liked
dtype: int32
- name: messages
list:
- name: timeOffset
dtype: int32
- name: text
dtype: string
- name: senderWorkerId
dtype: int32
- name: messageId
dtype: int32
- name: conversationId
dtype: int32
- name: respondentWorkerId
dtype: int32
- name: initiatorWorkerId
dtype: int32
- name: initiatorQuestions
list:
- name: movieId
dtype: string
- name: suggested
dtype: int32
- name: seen
dtype: int32
- name: liked
dtype: int32
splits:
- name: train
num_bytes: 13496125
num_examples: 10006
- name: test
num_bytes: 1731449
num_examples: 1342
download_size: 5765261
dataset_size: 15227574
---
# Dataset Card for ReDial (Recommendation Dialogues)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ReDial Dataset](https://redialdata.github.io/website/)
- **Repository:** [ReDialData](https://github.com/ReDialData/website/tree/data)
- **Paper:** [Towards Deep Conversational Recommendations](https://proceedings.neurips.cc/paper/2018/file/800de15c79c8d840f4e78d3af937d4d4-Paper.pdf)
- **Point of Contact:** [ReDial Google Group](https://groups.google.com/forum/embed/?place=forum/redial-dataset&showpopout=true#!forum/redial-dataset)
### Dataset Summary
ReDial (Recommendation Dialogues) is an annotated dataset of dialogues, where users
recommend movies to each other. The dataset was collected by a team of researchers working at
Polytechnique Montréal, MILA – Quebec AI Institute, Microsoft Research Montréal, HEC Montreal, and Element AI.
The dataset allows research at the intersection of goal-directed dialogue systems
(such as restaurant recommendation) and free-form (also called “chit-chat”) dialogue systems.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
JSON-formatted example of a typical instance in the dataset.
```
{
"movieMentions":{
"203371":"Final Fantasy: The Spirits Within (2001)",
"84779":"The Triplets of Belleville (2003)",
"122159":"Mary and Max (2009)",
"151313":"A Scanner Darkly (2006)",
"191602":"Waking Life (2001)",
"165710":"The Boss Baby (2017)"
},
"respondentQuestions":{
"203371":{
"suggested":1,
"seen":0,
"liked":1
},
"84779":{
"suggested":0,
"seen":1,
"liked":1
},
"122159":{
"suggested":0,
"seen":1,
"liked":1
},
"151313":{
"suggested":0,
"seen":1,
"liked":1
},
"191602":{
"suggested":0,
"seen":1,
"liked":1
},
"165710":{
"suggested":1,
"seen":0,
"liked":1
}
},
"messages":[
{
"timeOffset":0,
"text":"Hi there, how are you? I'm looking for movie recommendations",
"senderWorkerId":0,
"messageId":1021
},
{
"timeOffset":15,
"text":"I am doing okay. What kind of movies do you like?",
"senderWorkerId":1,
"messageId":1022
},
{
"timeOffset":66,
"text":"I like animations like @84779 and @191602",
"senderWorkerId":0,
"messageId":1023
},
{
"timeOffset":86,
"text":"I also enjoy @122159",
"senderWorkerId":0,
"messageId":1024
},
{
"timeOffset":95,
"text":"Anything artistic",
"senderWorkerId":0,
"messageId":1025
},
{
"timeOffset":135,
"text":"You might like @165710 that was a good movie.",
"senderWorkerId":1,
"messageId":1026
},
{
"timeOffset":151,
"text":"What's it about?",
"senderWorkerId":0,
"messageId":1027
},
{
"timeOffset":207,
"text":"It has Alec Baldwin it is about a baby that works for a company and gets adopted it is very funny",
"senderWorkerId":1,
"messageId":1028
},
{
"timeOffset":238,
"text":"That seems like a nice comedy",
"senderWorkerId":0,
"messageId":1029
},
{
"timeOffset":272,
"text":"Do you have any animated recommendations that are a bit more dramatic? Like @151313 for example",
"senderWorkerId":0,
"messageId":1030
},
{
"timeOffset":327,
"text":"I like comedies but I prefer films with a little more depth",
"senderWorkerId":0,
"messageId":1031
},
{
"timeOffset":467,
"text":"That is a tough one but I will remember something",
"senderWorkerId":1,
"messageId":1032
},
{
"timeOffset":509,
"text":"@203371 was a good one",
"senderWorkerId":1,
"messageId":1033
},
{
"timeOffset":564,
"text":"Ooh that seems cool! Thanks for the input. I'm ready to submit if you are.",
"senderWorkerId":0,
"messageId":1034
},
{
"timeOffset":571,
"text":"It is animated, sci fi, and has action",
"senderWorkerId":1,
"messageId":1035
},
{
"timeOffset":579,
"text":"Glad I could help",
"senderWorkerId":1,
"messageId":1036
},
{
"timeOffset":581,
"text":"Nice",
"senderWorkerId":0,
"messageId":1037
},
{
"timeOffset":591,
"text":"Take care, cheers!",
"senderWorkerId":0,
"messageId":1038
},
{
"timeOffset":608,
"text":"bye",
"senderWorkerId":1,
"messageId":1039
}
],
"conversationId":"391",
"respondentWorkerId":1,
"initiatorWorkerId":0,
"initiatorQuestions":{
"203371":{
"suggested":1,
"seen":0,
"liked":1
},
"84779":{
"suggested":0,
"seen":1,
"liked":1
},
"122159":{
"suggested":0,
"seen":1,
"liked":1
},
"151313":{
"suggested":0,
"seen":1,
"liked":1
},
"191602":{
"suggested":0,
"seen":1,
"liked":1
},
"165710":{
"suggested":1,
"seen":0,
"liked":1
}
}
}
```
### Data Fields
The dataset is published in the “jsonl” format, i.e., as a text file where each line corresponds to a Dialogue given as a valid JSON document.
A Dialogue contains these fields:
**conversationId:** an integer
**initiatorWorkerId:** an integer identifying to the worker initiating the conversation (the recommendation seeker)
**respondentWorkerId:** an integer identifying the worker responding to the initiator (the recommender)
**messages:** a list of Message objects
**movieMentions:** a dict mapping movie IDs mentioned in this dialogue to movie names
**initiatorQuestions:** a dictionary mapping movie IDs to the labels supplied by the initiator. Each label is an integer indicating whether the movie was suggested, whether the initiator has seen it, and whether they liked it (see the label meanings below).
**respondentQuestions:** a dictionary mapping movie IDs to the labels supplied by the respondent, with the same meaning as the initiator's labels.
Each Message contains these fields:
**messageId:** a unique ID for this message
**text:** a string with the actual message. The string may contain a token starting with @ followed by an integer. This is a movie ID which can be looked up in the movieMentions field of the Dialogue object.
**timeOffset:** time since start of dialogue in seconds
**senderWorkerId:** the ID of the worker sending the message, either initiatorWorkerId or respondentWorkerId.
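The `@<movieId>` tokens in a message's `text` can be resolved against the dialogue's `movieMentions` dict. A minimal sketch based on the fields described above (ids missing from `movieMentions` are left untouched):

```python
import re

def resolve_movie_mentions(text: str, movie_mentions: dict) -> str:
    """Replace '@<movieId>' tokens in a message with movie names."""
    def repl(match):
        # Fall back to the raw '@id' token if the id is not in the dict.
        return movie_mentions.get(match.group(1), match.group(0))
    return re.sub(r"@(\d+)", repl, text)

mentions = {"165710": "The Boss Baby (2017)"}
print(resolve_movie_mentions("You might like @165710 that was a good movie.", mentions))
# You might like The Boss Baby (2017) that was a good movie.
```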
The labels in initiatorQuestions and respondentQuestions have the following meaning:
*suggested:* 0 if it was mentioned by the seeker, 1 if it was a suggestion from the recommender
*seen:* 0 if the seeker has not seen the movie, 1 if they have seen it, 2 if they did not say
*liked:* 0 if the seeker did not like the movie, 1 if they liked it, 2 if they did not say
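Putting the label meanings together, one can, for example, collect the movies the seeker said they liked. A sketch assuming the Dialogue structure shown in the data instance above (where `initiatorQuestions` maps movie IDs to label dicts):

```python
def liked_movies(dialogue: dict) -> list:
    """Return names of movies the seeker labelled as liked (liked == 1)."""
    names = dialogue.get("movieMentions", {})
    return [
        names.get(movie_id, movie_id)
        for movie_id, labels in dialogue.get("initiatorQuestions", {}).items()
        if labels.get("liked") == 1
    ]

example = {
    "movieMentions": {"84779": "The Triplets of Belleville (2003)"},
    "initiatorQuestions": {"84779": {"suggested": 0, "seen": 1, "liked": 1}},
}
print(liked_movies(example))  # ['The Triplets of Belleville (2003)']
```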
### Data Splits
The dataset contains a total of 11348 dialogues, 10006 for training and model selection, and 1342 for testing.
## Dataset Creation
### Curation Rationale
The dataset allows research at the intersection of goal-directed dialogue systems (such as restaurant recommendation) and free-form (also called “chit-chat”) dialogue systems.
In the dataset, users talk about which movies they like and which ones they do not like, which ones they have seen or not, etc., with labels which we ensured agree between the two participants. This makes it possible to study how sentiment is expressed in dialogues, which differs considerably from, e.g., review websites.
The dialogues and the movies they mention form an interesting bipartite graph structure, which is related to how users talk about the movies (e.g. genre information).
Ignoring label information, this dataset can also be viewed as a limited domain chit-chat dialogue dataset.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Here we formalize the setup of a conversation involving recommendations for the purposes of data collection. To provide some additional structure to our data (and models) we define one person in the dialogue as the recommendation seeker and the other as the recommender.
To obtain data in this form, we developed an interface and pairing mechanism mediated by Amazon Mechanical Turk (AMT).
We pair up AMT workers and give each of them a role. The movie seeker has to explain what kind of movies they like and ask for movie suggestions. The recommender tries to understand the seeker’s movie tastes and recommends movies. All exchanges of information and recommendations are made using natural language.
We add additional instructions to improve the data quality and guide the workers to converse the way we expect them to. We ask them to use formal language, and require that conversations contain roughly ten messages at minimum and that at least four different movies are mentioned in every conversation. Finally, we ask workers to converse only about movies, and notably not to mention Mechanical Turk or the task itself.
In addition, we ask that every movie mention is tagged using the ‘@’ symbol. When workers type ‘@’, the following characters are used to find matching movie names, and workers can choose a movie from that list. This allows us to detect exactly which movies are mentioned and when. We gathered entities from DBpedia that were of type http://dbpedia.org/ontology/Film to obtain a list of movies, but also allow workers to add their own movies to the list if they are not present already. We obtained the release dates from the movie titles (e.g. http://dbpedia.org/page/American_Beauty_(1999_film)), or, if the movie title does not contain that information, from an additional SPARQL request. Note that the year or release date of a movie can be essential to differentiate movies with the same name but released at different dates.
We will refer to these additional labels as movie dialogue forms. Both workers have to answer these forms even though the forms really concern the seeker’s movie tastes. Ideally, the two participants would give the same answer to every form, but it is possible that their answers do not coincide (because of carelessness, or dialogue ambiguity). The movie dialogue forms therefore allow us to evaluate sub-components of an overall neural dialogue system more systematically; for example, one can train and evaluate a sentiment analysis model directly using these labels.
In each conversation, the number of movies mentioned varies, so we have different numbers of movie dialogue form answers for each conversation. The distribution of the different classes of the movie dialogue form is shown in Table 1a. The liked/disliked/did not say label is highly imbalanced. This is standard for recommendation data, since people are naturally more likely to talk about movies that they like, and the recommender’s objective is to recommend movies that the seeker is likely to like.
### Annotations
#### Annotation process
The annotation process is described in the sub-section above.
#### Who are the annotators?
For the AMT HITs we collect data in English and chose to restrict the data collection to countries where English is the main language. The fact that we pair workers together slows down the data collection, since at least two people must be online at the same time to do the task, so a sufficiently large pool of workers is required to make the collection possible. Meanwhile, the task is quite demanding, and we have to select qualified workers. The HIT reward and qualification requirements were decisive in getting good conversation quality while still ensuring that people could get paired together. We launched preliminary HITs to find a compromise and finally set the reward to $0.50 per person for each completed conversation (so each conversation costs us $1, plus taxes), and require that workers meet the following requirements: (1) an approval percentage greater than 95, (2) more than 1,000 approved HITs, and (3) a location in the United States, Canada, United Kingdom, Australia, or New Zealand.
### Personal and Sensitive Information
Workers had to confirm a consent form before every task that explains what the data is being collected for and how it is going to be used.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset collection was funded by Google, IBM, and NSERC, with editorial support from Microsoft Research.
### Licensing Information
The data is published under the CC BY 4.0 License.
### Citation Information
```
@inproceedings{li2018conversational,
title={Towards Deep Conversational Recommendations},
author={Li, Raymond and Kahou, Samira Ebrahimi and Schulz, Hannes and Michalski, Vincent and Charlin, Laurent and Pal, Chris},
booktitle={Advances in Neural Information Processing Systems 31 (NIPS 2018)},
year={2018}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. | 16,905 | [
turkish_product_reviews | 2023-01-25T14:54:42.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:tr",
"license:unknown",
"region:us"
] | null | Turkish Product Reviews.
This repository contains 235,165 product reviews collected online. There are 220,284 positive and 14,881 negative reviews. | null | 3 | 122 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Turkish Product Reviews
dataset_info:
features:
- name: sentence
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 43369710
num_examples: 235165
download_size: 13184332
dataset_size: 43369710
---
# Dataset Card for Turkish Product Reviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data)
- **Point of Contact:** [Fatih Barmanbay](https://github.com/fthbrmnby)
### Dataset Summary
This Turkish Product Reviews Dataset contains 235,165 product reviews collected online. There are 220,284 positive and 14,881 negative reviews.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset text is in Turkish.
## Dataset Structure
### Data Instances
**Example 1:**
**sentence:** beklentimin altında bir ürün kaliteli değil
**sentiment:** 0 (negative)
**Example 2:**
**sentence:** fiyat ve performans olarak gayet iyi
**sentiment:** 1 (positive)
### Data Fields
- **sentence** (string): contains a Turkish product review
- **sentiment** (int): 0 (negative) or 1 (positive)
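The class balance implied by the counts quoted in the summary can be checked in a couple of lines (the numbers below are taken from this card):

```python
# Review counts quoted in the dataset summary above.
POSITIVE, NEGATIVE = 220_284, 14_881

total = POSITIVE + NEGATIVE
positive_share = POSITIVE / total

print(total, f"{positive_share:.1%}")  # 235165 93.7%
```

The heavy skew toward positive reviews is worth keeping in mind when evaluating classifiers trained on this data.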
### Data Splits
The dataset is not divided into train and test splits; all 235,165 reviews are provided in a single train split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by [Fatih Barmanbay](https://github.com/fthbrmnby).
### Licensing Information
The data is under the [CC-BY-SA-4.0 License](https://github.com/fthbrmnby/turkish-text-data/blob/master/LICENCE)
### Citation Information
No citation available for this dataset.
### Contributions
Thanks to [@basakbuluz](https://github.com/basakbuluz) for adding this dataset. | 3,747 | [
uitnlp/vietnamese_students_feedback | 2022-10-13T15:39:37.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:topic-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:vi",
"license:unknown",
... | uitnlp | Students’ feedback is a vital resource for the interdisciplinary research involving the combining of two different
research fields between sentiment analysis and education.
Vietnamese Students’ Feedback Corpus (UIT-VSFC) is a resource consisting of over 16,000 sentences that are human-annotated for two different tasks: sentiment-based and topic-based classification.
To assess the quality of our corpus, we measured annotator agreement and classification performance on the UIT-VSFC corpus. As a result, we obtained inter-annotator agreements of over 91% for sentiments and over 71% for topics. In addition, we built a baseline model with a Maximum Entropy classifier and achieved a sentiment F1-score of approximately 88% and a topic F1-score of over 84%. | @InProceedings{8573337,
author={Nguyen, Kiet Van and Nguyen, Vu Duc and Nguyen, Phu X. V. and Truong, Tham T. H. and Nguyen, Ngan Luu-Thuy},
booktitle={2018 10th International Conference on Knowledge and Systems Engineering (KSE)},
title={UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis},
year={2018},
volume={},
number={},
pages={19-24},
doi={10.1109/KSE.2018.8573337}
} | 8 | 122 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- vi
license:
- unknown
multilinguality:
- monolingual
pretty_name: "Vietnamese Students\u2019 Feedback Corpus"
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- topic-classification
---
# Dataset Card for Vietnamese Students’ Feedback Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/uit.edu.vn/uit-nlp/datasets-projects#h.p_4Brw8L-cbfTe
- **Repository:**
- **Paper:** [UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis](https://www.researchgate.net/publication/329645066_UIT-VSFC_Vietnamese_Students'_Feedback_Corpus_for_Sentiment_Analysis)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Students’ feedback is a vital resource for interdisciplinary research combining two different research fields: sentiment analysis and education.
Vietnamese Students’ Feedback Corpus (UIT-VSFC) is a resource consisting of over 16,000 sentences that are human-annotated for two different tasks: sentiment-based and topic-based classification.
To assess the quality of our corpus, we measured annotator agreement and classification performance on the UIT-VSFC corpus. As a result, we obtained inter-annotator agreements of over 91% for sentiments and over 71% for topics. In addition, we built a baseline model with a Maximum Entropy classifier and achieved a sentiment F1-score of approximately 88% and a topic F1-score of over 84%.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language of the dataset text is Vietnamese (`vi`).
## Dataset Structure
### Data Instances
An instance example:
```
{
'sentence': 'slide giáo trình đầy đủ .',
'sentiment': 2,
'topic': 1
}
```
### Data Fields
- `sentence` (str): Text sentence.
- `sentiment`: Sentiment class, with values 0 (negative), 1 (neutral) and 2 (positive).
- `topic`: Topic class, with values 0 (lecturer), 1 (training_program), 2 (facility) and 3 (others).
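The integer labels above can be mapped back to their names with two small dictionaries. This is a minimal sketch based only on the field descriptions in this card:

```python
# Label id -> name mappings for the two annotation tasks, as listed above.
SENTIMENTS = {0: "negative", 1: "neutral", 2: "positive"}
TOPICS = {0: "lecturer", 1: "training_program", 2: "facility", 3: "others"}

# The data instance shown earlier in this card:
example = {"sentence": "slide giáo trình đầy đủ .", "sentiment": 2, "topic": 1}
print(SENTIMENTS[example["sentiment"]], TOPICS[example["topic"]])
# -> positive training_program
```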
### Data Splits
The dataset is split in train, validation and test.
|                    | Train | Validation | Test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 11426 | 1583 | 3166 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```
@InProceedings{8573337,
author={Nguyen, Kiet Van and Nguyen, Vu Duc and Nguyen, Phu X. V. and Truong, Tham T. H. and Nguyen, Ngan Luu-Thuy},
booktitle={2018 10th International Conference on Knowledge and Systems Engineering (KSE)},
title={UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis},
year={2018},
volume={},
number={},
pages={19-24},
doi={10.1109/KSE.2018.8573337}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
| 4,774 | [
ywchoi/pubmed_abstract_4 | 2022-09-13T01:04:18.000Z | [
"region:us"
] | ywchoi | null | null | 0 | 122 | 2022-09-13T01:02:33 | Entry not found | 15 | [
tarudesu/ViCTSD | 2023-03-12T14:19:06.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:vi",
"arxiv:2103.10069",
"region:us"
] | tarudesu | null | null | 0 | 122 | 2023-03-12T14:16:24 | ---
task_categories:
- text-classification
language:
- vi
size_categories:
- 10K<n<100K
---
# Constructive and Toxic Speech Detection for Open-domain Social Media Comments in Vietnamese
This is the official repository for the UIT-ViCTSD dataset from the paper [Constructive and Toxic Speech Detection for Open-domain Social Media Comments in Vietnamese](https://arxiv.org/pdf/2103.10069.pdf), which was accepted at the [IEA/AIE 2021](https://ieaaie2021.wordpress.com/list-of-accepted-papers/).
```
@InProceedings{nguyen2021victsd,
author="Nguyen, Luan Thanh and Van Nguyen, Kiet and Nguyen, Ngan Luu-Thuy",
title="Constructive and Toxic Speech Detection for Open-Domain Social Media Comments in Vietnamese",
booktitle="Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices",
year="2021",
publisher="Springer International Publishing",
address="Cham",
pages="572--583"
}
```
## Introduction
The rise of social media has led to an increasing number of comments on online forums. However, there still exist invalid comments which are not informative for users. Moreover, such comments can be quite toxic and harmful to people. In this paper, we create a dataset for constructive and toxic speech detection, named UIT-ViCTSD (Vietnamese Constructive and Toxic Speech Detection dataset), with 10,000 human-annotated comments. For these tasks, we propose a system for constructive and toxic speech detection using PhoBERT, a state-of-the-art transfer-learning model for Vietnamese NLP. With this system, we obtain F1-scores of 78.59% and 59.40% for classifying constructive and toxic comments, respectively. Besides, we implement various baseline models, from traditional machine learning to deep neural network-based models, to evaluate the dataset. With these results, we can address several tasks concerning online discussions and develop a framework for automatically identifying the constructiveness and toxicity of Vietnamese social media comments.
## Dataset
The ViCTSD dataset consists of 10,000 human-annotated comments spanning 10 domains, collected from Vietnamese users' comments on social media.
The dataset is divided into three parts as below:
1. Train set: 7,000 comments
2. Valid set: 2,000 comments
3. Test set: 1,000 comments
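A quick sanity check on the split sizes listed above (they should sum to the 10,000 human-annotated comments mentioned in the introduction):

```python
# Split sizes as quoted in this card.
splits = {"train": 7_000, "valid": 2_000, "test": 1_000}

total = sum(splits.values())
print(total, {name: n / total for name, n in splits.items()})
# A 70/20/10 train/valid/test split.
```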
Please feel free to contact us by email at luannt@uit.edu.vn if you need any further information! | 2,339 | [
bbaaaa/iwslt14-de-en-preprocess | 2023-03-28T16:19:35.000Z | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"source_datasets:original",
"language:de",
"language:en",
"license:cc-by-nc-nd-4.0",
"region:us"
] | bbaaaa | The IWSLT 2017 Multilingual Task addresses text translation, including zero-shot translation, with a single MT system across all directions including English, German, Dutch, Italian and Romanian. As unofficial task, conventional bilingual text translation is offered between English and Arabic, French, Japanese, Chinese, German and Korean. | @inproceedings{cettolo-etal-2017-overview,
title = "Overview of the {IWSLT} 2017 Evaluation Campaign",
author = {Cettolo, Mauro and
Federico, Marcello and
Bentivogli, Luisa and
Niehues, Jan and
St{\\"u}ker, Sebastian and
Sudoh, Katsuhito and
Yoshino, Koichiro and
Federmann, Christian},
booktitle = "Proceedings of the 14th International Conference on Spoken Language Translation",
month = dec # " 14-15",
year = "2017",
address = "Tokyo, Japan",
publisher = "International Workshop on Spoken Language Translation",
url = "https://aclanthology.org/2017.iwslt-1.1",
pages = "2--14",
} | 0 | 122 | 2023-03-27T03:34:37 | ---
annotations_creators:
- crowdsourced
language:
- de
- en
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2014 with fairseq preprocess
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2014 with fairseq preprocess
---
# Dataset Card for IWSLT 2014 with fairseq preprocess
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2014](https://sites.google.com/site/iwsltevaluation2014)
dataset_info:
- config_name: de-en
features:
- name: translation
languages:
- de
- en
splits:
- name: train
num_examples: 160239
- name: test
num_examples: 6750
- name: validation
num_examples: 7283
| 821 | [
clarin-knext/arguana-pl | 2023-06-07T08:18:37.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 122 | 2023-06-06T22:10:02 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 201 | [
d0rj/curation-corpus-ru | 2023-06-13T13:31:27.000Z | [
"task_categories:summarization",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:d0rj/curation-corpus",
"language:ru",
"license:cc-by-4.0",
"news",
"summarization",
"region:us"
] | d0rj | null | null | 2 | 122 | 2023-06-12T19:49:36 | ---
dataset_info:
features:
- name: title
dtype: string
- name: summary
dtype: string
- name: url
dtype: string
- name: date
dtype: string
- name: article_content
dtype: string
splits:
- name: train
num_bytes: 237436901.42479068
num_examples: 30454
download_size: 116826702
dataset_size: 237436901.42479068
license: cc-by-4.0
task_categories:
- summarization
multilinguality:
- monolingual
source_datasets:
- d0rj/curation-corpus
language:
- ru
language_creators:
- translated
tags:
- news
- summarization
pretty_name: Curation Corpus (ru)
size_categories:
- 10K<n<100K
---
# curation-corpus-ru
## Dataset Description
- **Repository:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
Translated version of [d0rj/curation-corpus](https://huggingface.co/datasets/d0rj/curation-corpus) into Russian. | 898 | [
ds4sd/SynthTabNet_OTSL | 2023-08-31T17:14:02.000Z | [
"task_categories:object-detection",
"task_categories:table-to-text",
"size_categories:10K<n<100K",
"license:other",
"table-structure-recognition",
"table-understanding",
"PDF",
"arxiv:2305.03393",
"region:us"
] | ds4sd | null | null | 1 | 122 | 2023-08-31T16:07:02 | ---
license: other
pretty_name: SynthTabNet-OTSL
size_categories:
- 10K<n<100K
tags:
- table-structure-recognition
- table-understanding
- PDF
task_categories:
- object-detection
- table-to-text
---
# Dataset Card for SynthTabNet_OTSL
## Dataset Description
- **Homepage:** https://ds4sd.github.io
- **Paper:** https://arxiv.org/pdf/2305.03393
### Dataset Summary
This dataset is a conversion of the original [SynthTabNet](https://github.com/IBM/SynthTabNet) into the OTSL format presented in our paper "Optimized Table Tokenization for Table Structure Recognition". The dataset includes the original annotations alongside new additions.
SynthTabNet is organized into 4 parts of 150k tables (600k in total). Each part contains tables with different appearances in regard to their size, structure, style and content. All parts are divided into Train, Test and Val splits.
| Appearance style | Records |
|------------------|---------|
| Fintabnet | 150k |
| Marketing | 150k |
| PubTabNet | 150k |
| Sparse | 150k |
### Dataset Structure
* cells: original dataset cell groundtruth (content).
* otsl: new reduced table structure token format
* html: original dataset groundtruth HTML (structure).
* html_restored: generated HTML from OTSL.
* cols: grid column length.
* rows: grid row length.
* image: PIL image
### OTSL Vocabulary:
**OTSL**: new reduced table structure token format
More information on the OTSL table structure format and its concepts can be read from our paper.
The format of this dataset extends the work presented in the paper and introduces slight modifications:
* "fcel" - cell that has content in it
* "ecel" - cell that is empty
* "lcel" - left-looking cell (to handle horizontally merged cells)
* "ucel" - up-looking cell (to handle vertically merged cells)
* "xcel" - 2d span cells, in this dataset - covers entire area of a merged cell
* "nl" - new line token
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Additional Information
### Dataset Curators
The dataset is converted by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Maksym Lysak, [@maxmnemonic](https://github.com/maxmnemonic)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Nikos Livathinos, [@nikos-livathinos](https://github.com/nikos-livathinos)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Citation Information
```bib
@misc{lysak2023optimized,
title={Optimized Table Tokenization for Table Structure Recognition},
author={Maksym Lysak and Ahmed Nassar and Nikolaos Livathinos and Christoph Auer and Peter Staar},
year={2023},
eprint={2305.03393},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
| 2,943 | [
jangmin/ecommerce_purchase_history | 2023-10-14T13:35:03.000Z | [
"size_categories:10K<n<100K",
"language:ko",
"region:us"
] | jangmin | null | null | 1 | 122 | 2023-09-21T05:09:07 | ---
language:
- ko
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: user_id
dtype: int64
- name: day
dtype: string
- name: order_ts
dtype: string
- name: positive_prod_id
dtype: int64
- name: negative_prod_id
dtype: int64
- name: negative_prod_ids
sequence: int64
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 122282877.9602969
num_examples: 58535
- name: test
num_bytes: 52690471.08509643
num_examples: 17332
- name: rigorous_test
num_bytes: 24661037.47070749
num_examples: 8112
download_size: 33220918
dataset_size: 199634386.51610082
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: rigorous_test
path: data/rigorous_test-*
---
# Dataset Card for "ecommerce_purchase_history"
## Dataset Description
# Dataset Summary
This dataset was built for recommender-system research and development at a particular e-commerce company. It was generated from roughly 90 days of purchase history over a specific period. The purchase histories are described as text.
The dataset was filtered to keep only example pairs with fewer than 2,048 tokens under the llama2 tokenizer.
In addition, for the test split, only examples whose user_id and positive_prod_id do not appear in the train split were kept.
# Supported Tasks and Leaderboards
# Languages
This dataset is made up only of `ko` (Korean) text.
# Dataset Structure | 1,307 | [
sehyun66/Finnhub-News | 2023-10-12T11:55:56.000Z | [
"region:us"
] | sehyun66 | null | null | 2 | 122 | 2023-09-28T13:37:56 | ---
configs:
- config_name: clean
data_files:
- split: clean
path: clean/clean-*
- config_name: default
data_files:
- split: finbert
path: data/finbert-*
- split: train
path: data/train-*
dataset_info:
config_name: clean
features:
- name: datetime
dtype: int64
- name: image
dtype: string
- name: related
dtype: string
- name: source
dtype: string
- name: summary
dtype: string
- name: url
dtype: string
- name: id
dtype: int64
- name: category
dtype: string
- name: headline
dtype: string
splits:
- name: clean
num_bytes: 150902085
num_examples: 316086
download_size: 78262136
dataset_size: 150902085
---
---
configs:
- config_name: clean
data_files:
- split: clean
path: clean/clean-*
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
config_name: distill_bert
features:
- name: headline
dtype: string
- name: summary
dtype: string
- name: headline_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
- name: summary_sentiment
struct:
- name: postive
dtype: string
- name: negative
dtype: string
- name: neutral
dtype: string
splits:
- name: default
num_bytes: 131086592
num_examples: 316086
download_size: 0
dataset_size: 131086592
dataset_info:
- config_name: clean
features:
- name: datetime
dtype: int64
- name: image
dtype: string
- name: related
dtype: string
- name: source
dtype: string
- name: summary
dtype: string
- name: url
dtype: string
- name: id
dtype: int64
- name: category
dtype: string
- name: headline
dtype: string
splits:
- name: clean
num_bytes: 150902085
num_examples: 316086
download_size: 78262136
dataset_size: 150902085
- config_name: default
features:
- name: related
dtype: string
- name: datetime
dtype: int64
- name: image
dtype: string
- name: url
dtype: string
- name: headline
dtype: string
- name: finbert_sentiment
struct:
- name: negative
dtype: float64
- name: neutral
dtype: float64
- name: postive
dtype: float64
- name: source
dtype: string
- name: summary
dtype: string
- name: id
dtype: int64
- name: category
dtype: string
splits:
- name: train
num_bytes: 251731744
num_examples: 515851
download_size: 113022298
dataset_size: 251731744
tags:
- finance
---
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,732 | [
[
-0.060150146484375,
-0.0273590087890625,
0.0203094482421875,
0.0168609619140625,
-0.031158447265625,
0.00705718994140625,
-0.01165008544921875,
0.001834869384765625,
0.035980224609375,
0.0255126953125,
-0.056793212890625,
-0.044677734375,
-0.04327392578125,
... |
owkin/nct-crc-he | 2023-10-26T09:42:47.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"biology",
"medical",
"cancer",
"colorectal cancer",
"region:us"
] | owkin | null | null | 0 | 122 | 2023-10-13T11:31:07 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ADI
'1': BACK
'2': DEB
'3': LYM
'4': MUC
'5': MUS
'6': NORM
'7': STR
'8': TUM
splits:
- name: nct_crc_he_100
num_bytes: 15058006
num_examples: 99
- name: nct_crc_he_1k
num_bytes: 151950686
num_examples: 999
- name: crc_val_he_7k
num_bytes: 1092855241.74
num_examples: 7180
download_size: 1095677324
dataset_size: 1259863933.74
configs:
- config_name: default
data_files:
- split: nct_crc_he_100
path: data/nct_crc_he_100-*
- split: nct_crc_he_1k
path: data/nct_crc_he_1k-*
- split: crc_val_he_7k
path: data/crc_val_he_7k-*
license: cc-by-sa-3.0
task_categories:
- image-classification
language:
- en
tags:
- biology
- medical
- cancer
- colorectal cancer
pretty_name: NCT_CRC
size_categories:
- 10K<n<100K
---
# Dataset Card for NCT-CRC-HE
### Dataset Summary
The NCT-CRC-HE dataset consists of images of human tissue slides, some of which contain cancer.
### Data Splits
The dataset contains image tiles covering nine tissue classes. Examples from each of the 9 classes can be seen below

### Initial Data Collection and Normalization
Images were collected from the NCT biobank (National Center for Tumor Diseases) and the UMM pathology archive (University Medical Center Mannheim), and were normalized using Macenko normalization.
### Licensing Information
CC BY-SA 3.0
### Citation Information
Owkin claims no ownership of the dataset. This is simply an upload of the original dataset onto HF.
[Link to original paper](https://zenodo.org/records/1214456)
| 1,941 | [
[
-0.0129241943359375,
-0.015106201171875,
0.01457977294921875,
-0.013336181640625,
-0.0462646484375,
0.0218505859375,
0.0008788108825683594,
-0.01210784912109375,
0.0283203125,
0.05450439453125,
-0.033416748046875,
-0.0777587890625,
-0.03631591796875,
0.01341... |
Alexandre-Numind/Hallu_IE | 2023-10-25T08:39:57.000Z | [
"region:us"
] | Alexandre-Numind | null | null | 0 | 122 | 2023-10-16T15:33:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Raspberry-ai/monse-v4 | 2023-10-23T18:16:56.000Z | [
"region:us"
] | Raspberry-ai | null | null | 0 | 122 | 2023-10-23T18:16:54 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 5524086.0
num_examples: 70
download_size: 6518045
dataset_size: 5524086.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "monse-v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 471 | [
[
-0.05267333984375,
0.006618499755859375,
0.02862548828125,
0.001911163330078125,
-0.01003265380859375,
-0.00738525390625,
0.03961181640625,
-0.017578125,
0.07171630859375,
0.040008544921875,
-0.0601806640625,
-0.05352783203125,
-0.041290283203125,
0.00502777... |
bbaw_egyptian | 2023-04-05T09:36:39.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:de",
"language:egy",
"language:en",
"license:cc-by-4.0",
"region:us"
] | null | This dataset comprises parallel sentences of hieroglyphic encodings, transcription and translation
as used in the paper Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian
Hieroglyph. The data triples are extracted from the digital corpus of Egyptian texts compiled by
the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache". | @misc{OPUS4-2919,
title = {Teilauszug der Datenbank des Vorhabens "Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache" vom Januar 2018},
institution = {Akademienvorhaben Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache. Text- und Wissenskultur im alten {\"A}gypten},
type = {other},
year = {2018},
} | 5 | 121 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- de
- egy
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: BbawEgyptian
dataset_info:
features:
- name: transcription
dtype: string
- name: translation
dtype: string
- name: hieroglyphs
dtype: string
splits:
- name: train
num_bytes: 18546162
num_examples: 100736
download_size: 35348686
dataset_size: 18546162
---
# Dataset Card for "bbaw_egyptian"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://edoc.bbaw.de/frontdoor/index/index/docId/2919](https://edoc.bbaw.de/frontdoor/index/index/docId/2919)
- **Repository:** [Github](https://phiwi.github.io/all.json)
- **Paper:** [Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyph](https://zenodo.org/record/3524924)
- **Point of Contact:** [Philipp Wiesenbach](https://www.cl.uni-heidelberg.de/~wiesenbach/index.html)
- **Size of downloaded dataset files:** 35.65 MB
### Dataset Summary
This dataset comprises parallel sentences of hieroglyphic encodings, transcription and translation as used in the paper [Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyph](https://zenodo.org/record/3524924). The data triples are extracted from the [digital corpus of Egyptian texts](https://edoc.bbaw.de/frontdoor/index/index/docId/2919) compiled by the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache".
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The dataset consists of parallel triples of
- `hieroglyphs`: Encoding of the hieroglyphs with [Gardiner's sign list](https://en.wikipedia.org/wiki/Gardiner%27s_sign_list)
- `transcription`: Transliteration of the above-mentioned hieroglyphs with a [transliteration scheme](https://en.wikipedia.org/wiki/Transliteration_of_Ancient_Egyptian)
- `translation`: Translation, mostly into German (with some English mixed in)
## Dataset Structure
The dataset is not divided into 'train', 'dev' and 'test' splits, as it was not built for competitive purposes; we encourage all scientists to use partitioning schemes suited to their own needs (given the low-resource setting, cross-validation may be advisable anyway). The only available split, 'all', therefore comprises the full 100,708 translation triples, 35,503 of which possess hieroglyphic encodings (the remaining 65,205 triples have empty `hieroglyphs` entries).
### Data Instances
An example of a data triple looks the following way:
```
{
"transcription": "n rḏi̯(.w) gꜣ =j r dbḥ.t m pr-ḥḏ",
"translation": "I was not let to suffer lack in the treasury with respect to what was needed;",
"hieroglyphs": "D35 D21 -D37 G1&W11 -V32B A1 D21 D46 -D58 *V28 -F18 *X1 -A2 G17 [? *O2 *?]"
}
```
*Important*: Only about a third of the instances actually contain a hieroglyphic encoding (for the rest, the `hieroglyphs` field is the empty string `""`), as the remaining encodings have not yet been incorporated into the BBAW's project database.
### Data Fields
#### plain_text
- `transcription`: a `string` feature.
- `translation`: a `string` feature.
- `hieroglyphs`: a `string` feature.
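Since roughly two thirds of the triples have an empty `hieroglyphs` field, a common first step is to filter those out. A minimal sketch on toy records mirroring the schema (with the `datasets` library, the same predicate could be passed to `Dataset.filter`; the records below are illustrative, not real rows):

```python
# Toy triples mirroring the dataset schema; in practice these rows would come
# from datasets.load_dataset("bbaw_egyptian") (network access required).
triples = [
    {"transcription": "n rḏi̯(.w) ...", "translation": "I was not let ...",
     "hieroglyphs": "D35 D21 -D37"},
    {"transcription": "...", "translation": "...", "hieroglyphs": ""},
]

# Keep only triples that carry a hieroglyphic encoding.
with_glyphs = [t for t in triples if t["hieroglyphs"]]
```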
### Data Splits
| name |all|
|----------|----:|
|plain_text|100708|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
The data source comes from the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache" which is compiling an extensively annotated digital corpus of Egyptian texts. Their [publication](https://edoc.bbaw.de/frontdoor/index/index/docId/2919) comprises an excerpt of the internal database's contents.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The corpus has not been preprocessed as we encourage every scientist to prepare the corpus to their desired needs. This means, that all textcritic symbols are still included in the transliteration and translation. This concerns the following annotations:
- `()`: defective
- `[]`: lost
- `{}`: surplus
- `〈〉`: omitted
- `⸢⸣`: damaged
- `⸮?`: unclear
- `{{}}`: erasure
- `(())`: above
- `[[]]`: overstrike
- `〈〈〉〉`: haplography
There exists a similar sign list for the annotation of the hieroglyphic encoding. If you wish to access this list, please get in contact with the author.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
Source corpus:
```
@misc{OPUS4-2919,
title = {Teilauszug der Datenbank des Vorhabens "Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache" vom Januar 2018},
institution = {Akademienvorhaben Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache. Text- und Wissenskultur im alten {\"A}gypten},
type = {other},
year = {2018},
}
```
Translation paper:
```
@article{wiesenbach19,
title = {Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyphs},
author = {Wiesenbach, Philipp and Riezler, Stefan},
journal = {Proceedings of the International Workshop on Spoken Language Translation},
journal-abbrev = {IWSLT},
year = {2019},
url = {https://www.cl.uni-heidelberg.de/statnlpgroup/publications/IWSLT2019_v2.pdf}
}
```
### Contributions
Thanks to [@phiwi](https://github.com/phiwi) for adding this dataset. | 7,902 | [
[
-0.03961181640625,
-0.03936767578125,
0.00560760498046875,
0.01361846923828125,
-0.032806396484375,
-0.009246826171875,
-0.034881591796875,
-0.05584716796875,
0.04583740234375,
0.036224365234375,
-0.0401611328125,
-0.0689697265625,
-0.05792236328125,
0.02731... |
kor_ner | 2023-01-25T14:33:50.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ko",
"license:mit",
"region:us"
] | null | Korean named entity recognition dataset | @InProceedings{Kim:2016,
title = "Korean Named Entity Recognition Dataset",
authors = "Jae-Hoon Kim",
publisher = "GitHub",
year = "2016"
} | 1 | 121 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- ko
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: KorNER
dataset_info:
features:
- name: text
dtype: string
- name: annot_text
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': SO
'1': SS
'2': VV
'3': XR
'4': VCP
'5': JC
'6': VCN
'7': JKB
'8': MM
'9': SP
'10': XSN
'11': SL
'12': NNP
'13': NP
'14': EP
'15': JKQ
'16': IC
'17': XSA
'18': EC
'19': EF
'20': SE
'21': XPN
'22': ETN
'23': SH
'24': XSV
'25': MAG
'26': SW
'27': ETM
'28': JKO
'29': NNB
'30': MAJ
'31': NNG
'32': JKV
'33': JKC
'34': VA
'35': NR
'36': JKG
'37': VX
'38': SF
'39': JX
'40': JKS
'41': SN
- name: ner_tags
sequence:
class_label:
names:
'0': I
'1': O
'2': B_OG
'3': B_TI
'4': B_LC
'5': B_DT
'6': B_PS
splits:
- name: train
num_bytes: 3948938
num_examples: 2928
- name: test
num_bytes: 476850
num_examples: 366
- name: validation
num_bytes: 486178
num_examples: 366
download_size: 3493175
dataset_size: 4911966
---
# Dataset Card for KorNER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/kmounlp/NER)
- **Repository:** [Github](https://github.com/kmounlp/NER)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each row consists of the following fields:
- `text`: The full text, as is
- `annot_text`: Annotated text including POS-tagged information
- `tokens`: An ordered list of tokens from the full text
- `pos_tags`: Part-of-speech tags for each token
- `ner_tags`: Named entity recognition tags for each token
Note that by design, the length of `tokens`, `pos_tags`, and `ner_tags` will always be identical.
`pos_tags` corresponds to the list below:
```
['SO', 'SS', 'VV', 'XR', 'VCP', 'JC', 'VCN', 'JKB', 'MM', 'SP', 'XSN', 'SL', 'NNP', 'NP', 'EP', 'JKQ', 'IC', 'XSA', 'EC', 'EF', 'SE', 'XPN', 'ETN', 'SH', 'XSV', 'MAG', 'SW', 'ETM', 'JKO', 'NNB', 'MAJ', 'NNG', 'JKV', 'JKC', 'VA', 'NR', 'JKG', 'VX', 'SF', 'JX', 'JKS', 'SN']
```
`ner_tags` correspond to the following:
```
["I", "O", "B_OG", "B_TI", "B_LC", "B_DT", "B_PS"]
```
The prefix `B` denotes the first word of a phrase, and `I` denotes any non-initial word. The entity types are `OG` (organization), `TI` (time), `LC` (location), `DT` (date), and `PS` (person).
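Because this scheme uses a single bare `I` tag with no type suffix, an inside word inherits its entity type from the preceding `B_*` tag. A minimal decoding sketch (our own helper, not part of the dataset tooling):

```python
# Tag vocabulary as listed in this card's ner_tags feature.
NER_TAGS = ["I", "O", "B_OG", "B_TI", "B_LC", "B_DT", "B_PS"]

def decode_entities(tokens, tag_ids):
    """Group tokens into (entity_type, text) spans.

    The bare "I" tag carries no type, so an inside token takes the type
    of the most recent B_* tag.
    """
    entities, current = [], None
    for token, tag_id in zip(tokens, tag_ids):
        tag = NER_TAGS[tag_id]
        if tag.startswith("B_"):
            if current is not None:
                entities.append(current)
            current = (tag[2:], [token])
        elif tag == "I" and current is not None:
            current[1].append(token)
        else:  # "O", or a stray "I" with no open entity
            if current is not None:
                entities.append(current)
            current = None
    if current is not None:
        entities.append(current)
    return [(etype, " ".join(toks)) for etype, toks in entities]
```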
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. | 5,227 | [
[
-0.0322265625,
-0.03643798828125,
0.0198974609375,
0.006763458251953125,
-0.024200439453125,
0.003143310546875,
-0.021636962890625,
-0.017669677734375,
0.037750244140625,
0.041015625,
-0.05145263671875,
-0.08831787109375,
-0.04559326171875,
0.013603210449218... |
parsinlu_reading_comprehension | 2023-08-16T17:04:40.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|wikipedia|google",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:20... | null | A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers. | @article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year = {2020},
journal = {arXiv e-prints},
eprint = {2012.06154},
} | 1 | 121 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia|google
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: null
pretty_name: PersiNLU (Reading Comprehension)
dataset_info:
features:
- name: question
dtype: string
- name: url
dtype: string
- name: context
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: answer_text
dtype: string
config_name: parsinlu-repo
splits:
- name: train
num_bytes: 747679
num_examples: 600
- name: test
num_bytes: 674711
num_examples: 570
- name: validation
num_bytes: 163161
num_examples: 125
download_size: 4105495
dataset_size: 1585527
---
# Dataset Card for PersiNLU (Reading Comprehension)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** [email](d.khashabi@gmail.com)
### Dataset Summary
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{
'question': 'پیامبر در چه سالی به پیامبری رسید؟',
'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
'passage': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه\u200cهای اطراف آن دیار به تفکر و عبادت می\u200cپرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
'answers': [
{'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
]
}
```
### Data Fields
- `question`: the question, mined using Google auto-complete.
- `passage`: the passage that contains the answer.
- `url`: the url from which the passage was mined.
- `answers`: a list of answers, each given as a string and a character index via the fields `answer_text` and `answer_start`. Note that in the test set, some `answer_start` values are missing and replaced with `-1`.
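For rows where `answer_start` is `-1`, the character offset can often be recovered by searching the passage for the answer string. A minimal helper sketch (our own, not part of the dataset tooling):

```python
def answer_span(context, answer_text, answer_start):
    """Return (start, end) character offsets for the answer.

    When the annotated start index is the missing-value sentinel -1,
    fall back to searching the context for the answer string.
    """
    if answer_start < 0:
        answer_start = context.find(answer_text)
        if answer_start < 0:
            return None  # answer string not present verbatim in the context
    return (answer_start, answer_start + len(answer_text))
```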
### Data Splits
The train, test, and validation splits contain 600, 570, and 125 samples, respectively.
## Dataset Creation
### Curation Rationale
The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year = {2020},
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset. | 5,416 | [
[
-0.03759765625,
-0.05877685546875,
0.018890380859375,
0.00952911376953125,
-0.020263671875,
-0.00921630859375,
-0.029998779296875,
-0.01387786865234375,
0.029144287109375,
0.0243377685546875,
-0.04705810546875,
-0.049835205078125,
-0.04046630859375,
0.032989... |
weibo_ner | 2023-01-25T15:02:04.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:unknown",
"region:us"
] | null | Tags: PER (person name), LOC (location name), GPE (administrative region name), ORG (organization name)
Label Tag Meaning
PER PER.NAM proper personal name (张三)
PER.NOM nominal or category mention (穷人, "the poor")
LOC LOC.NAM proper location name (紫玉山庄)
LOC.NOM generic location term (大峡谷 "canyon", 宾馆 "hotel")
GPE GPE.NAM administrative region name (北京 "Beijing")
ORG ORG.NAM proper organization name (通惠医院)
ORG.NOM generic or collective term (文艺公司 "arts company") | null | 6 | 121 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: weibo-ner
pretty_name: Weibo NER
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-GPE.NAM
'1': B-GPE.NOM
'2': B-LOC.NAM
'3': B-LOC.NOM
'4': B-ORG.NAM
'5': B-ORG.NOM
'6': B-PER.NAM
'7': B-PER.NOM
'8': I-GPE.NAM
'9': I-GPE.NOM
'10': I-LOC.NAM
'11': I-LOC.NOM
'12': I-ORG.NAM
'13': I-ORG.NOM
'14': I-PER.NAM
'15': I-PER.NOM
'16': O
splits:
- name: train
num_bytes: 1179589
num_examples: 1350
- name: validation
num_bytes: 232380
num_examples: 270
- name: test
num_bytes: 237407
num_examples: 270
download_size: 750687
dataset_size: 1649376
train-eval-index:
- config: default
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# Dataset Card for "Weibo NER"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** None
- **Repository:** https://github.com/OYE93/Chinese-NLP-Corpus/tree/master/NER/Weibo
- **Paper:** [More Information Needed]
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. | 4,082 | [
[
-0.0249786376953125,
-0.037017822265625,
-0.01197052001953125,
0.0307769775390625,
-0.0111083984375,
0.004486083984375,
-0.0245819091796875,
-0.026458740234375,
0.045196533203125,
0.037994384765625,
-0.04620361328125,
-0.066650390625,
-0.0496826171875,
0.003... |
SetFit/ethos | 2022-02-03T08:31:19.000Z | [
"region:us"
] | SetFit | ETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech
detection on social media platforms, called Ethos. There are two variations of the dataset:
Ethos_Dataset_Binary: contains 998 comments in the dataset alongside with a label
about hate speech presence or absence. 565 of them do not contain hate speech,
while the remaining 433 do.
Ethos_Dataset_Multi_Label: which contains 8 labels for the 433 comments with hate speech content.
These labels are violence (if it incites (1) or not (0) violence), directed_vs_general (if it is
directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like,
gender, race, national_origin, disability, religion and sexual_orientation. | @misc{mollas2020ethos,
title={ETHOS: an Online Hate Speech Detection Dataset},
author={Ioannis Mollas and Zoe Chrysopoulou and Stamatis Karlos and Grigorios Tsoumakas},
year={2020},
eprint={2006.08328},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 121 | 2022-03-02T23:29:22 | # Ethos
This dataset is a clone of the official [`ethos` dataset](https://huggingface.co/datasets/ethos) on the Hub. It contains both `binary` and `multilabel` subsets. | 169 | [
[
-0.0518798828125,
-0.044921875,
0.004730224609375,
0.0069732666015625,
-0.01421356201171875,
0.034454345703125,
0.01629638671875,
-0.03741455078125,
0.09063720703125,
0.043304443359375,
-0.054656982421875,
-0.03204345703125,
-0.037017822265625,
0.00474929809... |
SetFit/student-question-categories | 2022-01-16T18:32:48.000Z | [
"region:us"
] | SetFit | null | null | 1 | 121 | 2022-03-02T23:29:22 | This is the [IITJEE NEET AIIMS Students Questions Data](https://www.kaggle.com/mrutyunjaybiswal/iitjee-neet-aims-students-questions-data) dataset.
It categorizes university entry questions into 4 categories: Physics, Chemistry, Biology, and Mathematics. | 256 | [
[
-0.0233612060546875,
-0.0517578125,
0.023834228515625,
0.011810302734375,
0.01751708984375,
0.0214385986328125,
0.03704833984375,
-0.0022430419921875,
-0.0061798095703125,
0.02716064453125,
-0.045074462890625,
-0.0137481689453125,
-0.0204925537109375,
0.0325... |
eugenesiow/Urban100 | 2022-10-21T03:58:53.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:cc-by-4.0",
"other-image-super-resolution",
"region:us"
] | eugenesiow | The Urban100 dataset contains 100 images of urban scenes.
It is commonly used as a test set to evaluate the performance of super-resolution models. | @inproceedings{martin2001database,
title={A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics},
author={Martin, David and Fowlkes, Charless and Tal, Doron and Malik, Jitendra},
booktitle={Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001},
volume={2},
pages={416--423},
year={2001},
organization={IEEE}
} | 0 | 121 | 2022-03-02T23:29:22 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: Urban100
tags:
- other-image-super-resolution
---
# Dataset Card for Urban100
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://github.com/jbhuang0604/SelfExSR
- **Repository**: https://huggingface.co/datasets/eugenesiow/Urban100
- **Paper**: https://openaccess.thecvf.com/content_cvpr_2015/html/Huang_Single_Image_Super-Resolution_2015_CVPR_paper.html
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
The Urban100 dataset contains 100 images of urban scenes. It is commonly used as a test set to evaluate the performance of super-resolution models. It was first published by [Huang et al. (2015)](https://openaccess.thecvf.com/content_cvpr_2015/html/Huang_Single_Image_Super-Resolution_2015_CVPR_paper.html) in the paper "Single Image Super-Resolution From Transformed Self-Exemplars".
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Urban100', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
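Super-resolution leaderboards such as the ones above are typically ranked by reconstruction quality; as an illustration (the exact metric choice is an assumption, not stated in this card), a minimal PSNR computation in NumPy:

```python
import numpy as np

def psnr(hr: np.ndarray, sr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a high-resolution image
    and a super-resolved reconstruction, both as uint8/float arrays."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(mse)
```

Libraries like `super-image` compute this (plus SSIM) for you via `EvalMetrics`; the function above only shows what the headline number means.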
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/Urban100_HR/img_001.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/Urban100_LR_x2/img_001.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|100|
|bicubic_x3|100|
|bicubic_x4|100|
## Dataset Creation
### Curation Rationale
The authors have created Urban100 containing 100 HR images with a variety of real-world structures.
### Source Data
#### Initial Data Collection and Normalization
The authors constructed this dataset using images from Flickr (under CC license) using keywords such as urban, city, architecture, and structure.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Huang et al. (2015)](https://github.com/jbhuang0604/SelfExSR)
### Licensing Information
The dataset provided uses images from Flickr under the CC (CC-BY-4.0) license.
### Citation Information
```bibtex
@InProceedings{Huang_2015_CVPR,
author = {Huang, Jia-Bin and Singh, Abhishek and Ahuja, Narendra},
title = {Single Image Super-Resolution From Transformed Self-Exemplars},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
| 5,300 | [
[
-0.0535888671875,
-0.03594970703125,
0.0209503173828125,
-0.0078277587890625,
-0.0032596588134765625,
-0.006404876708984375,
-0.002323150634765625,
-0.031982421875,
0.028106689453125,
0.01983642578125,
-0.044769287109375,
-0.04833984375,
-0.0173797607421875,
... |
Aniemore/resd | 2023-06-10T22:15:40.000Z | [
"task_categories:audio-classification",
"task_ids:audio-emotion-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ru",
"lice... | Aniemore | null | null | 3 | 121 | 2022-05-23T22:57:03 | ---
license:
- mit
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- crowdsourced
language:
- ru
multilinguality:
- monolingual
pretty_name: Russian Emotional Speech Dialogs
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- audio-classification
task_ids:
- audio-emotion-recognition
dataset_info:
features:
- name: name
dtype: string
- name: path
dtype: string
- name: emotion
dtype: string
- name: speech
dtype: audio
splits:
- name: test
num_bytes: 96603538.0
num_examples: 280
- name: train
num_bytes: 398719157.336
num_examples: 1116
download_size: 485403675
dataset_size: 495322695.336
---
# Dataset Card for resd
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/Aniemore/resd
- **Repository:** https://github.com/aniemore/Aniemore
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Russian dataset of emotional speech dialogues. This dataset was assembled from ~3.5 hours of live speech by actors who performed pre-assigned emotions in dialogues of ~3 minutes each.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset was created by Artem Amentes, Nikita Davidchuk and Ilya Lubenets
### Citation Information
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.com/aniemore/Aniemore}},
email = {hello@socialcode.ru}
}
```
### Contributions
Thanks to [@Ar4ikov](https://github.com/Ar4ikov) for adding this dataset. | 3,813 | [
[
-0.0290069580078125,
-0.0408935546875,
0.0023651123046875,
0.0162506103515625,
-0.0155487060546875,
0.00518035888671875,
-0.0181121826171875,
-0.02838134765625,
0.054168701171875,
0.0267181396484375,
-0.0789794921875,
-0.07122802734375,
-0.0423583984375,
0.0... |
graphs-datasets/AIDS | 2023-02-07T16:38:52.000Z | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | graphs-datasets | null | null | 1 | 121 | 2022-09-02T10:51:25 | ---
licence: unknown
task_categories:
- graph-ml
---
# Dataset Card for AIDS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://wiki.nci.nih.gov/display/NCIDTPdata/AIDS+Antiviral+Screen+Data)**
- **Paper:** (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-aids)
### Dataset Summary
The `AIDS` dataset contains compounds checked for evidence of anti-HIV activity.
### Supported Tasks and Leaderboards
`AIDS` should be used for molecular classification, a binary classification task. The score used is accuracy with cross validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/AIDS")
# For the train set (replace by valid or test as needed).
# Each row is a dict of lists (see Data Fields), so build the Data object field by field.
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1999 |
| average #nodes | 15.5875 |
| average #edges | 32.39 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the target label(s) for the graph (here a single binary label, 0 or 1)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
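Since no official split is provided, folds have to be generated by the user. A minimal sketch of producing 10-fold cross-validation indices over the 1999 graphs (the fold count and seed are illustrative assumptions; the reference protocol may differ):

```python
import numpy as np

def kfold_indices(n: int, k: int, seed: int = 0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation over n graphs."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    for fold in np.array_split(perm, k):
        test = np.sort(fold)
        train = np.setdiff1d(perm, fold)  # everything not in the held-out fold
        yield train, test

folds = list(kfold_indices(1999, 10))
```

Accuracy would then be averaged over the 10 held-out folds.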
## Additional Information
### Licensing Information
The dataset has been released under license unknown.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@InProceedings{10.1007/978-3-540-89689-0_33,
author="Riesen, Kaspar
and Bunke, Horst",
editor="da Vitoria Lobo, Niels
and Kasparis, Takis
and Roli, Fabio
and Kwok, James T.
and Georgiopoulos, Michael
and Anagnostopoulos, Georgios C.
and Loog, Marco",
title="IAM Graph Database Repository for Graph Based Pattern Recognition and Machine Learning",
booktitle="Structural, Syntactic, and Statistical Pattern Recognition",
year="2008",
publisher="Springer Berlin Heidelberg",
address="Berlin, Heidelberg",
pages="287--297",
abstract="In recent years the use of graph based representation has gained popularity in pattern recognition and machine learning. As a matter of fact, object representation by means of graphs has a number of advantages over feature vectors. Therefore, various algorithms for graph based machine learning have been proposed in the literature. However, in contrast with the emerging interest in graph based representation, a lack of standardized graph data sets for benchmarking can be observed. Common practice is that researchers use their own data sets, and this behavior cumbers the objective evaluation of the proposed methods. In order to make the different approaches in graph based machine learning better comparable, the present paper aims at introducing a repository of graph data sets and corresponding benchmarks, covering a wide spectrum of different applications.",
isbn="978-3-540-89689-0"
}
``` | 4,507 | [
[
-0.022125244140625,
-0.03656005859375,
0.0118865966796875,
-0.00917816162109375,
-0.0152740478515625,
0.007305145263671875,
-0.007843017578125,
-0.0299530029296875,
0.0260467529296875,
0.0060577392578125,
-0.0188751220703125,
-0.06256103515625,
-0.05517578125,
... |
heegyu/namuwiki-extracted | 2023-01-15T09:46:31.000Z | [
"task_categories:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ko",
"license:cc-by-nc-sa-2.0",
"region:us"
] | heegyu | null | null | 2 | 121 | 2022-10-01T01:27:07 | ---
license: cc-by-nc-sa-2.0
language:
- ko
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- other
---
# namu.wiki database dump
https://namu.wiki/ database dump 2022/03/01<br/>
- 571,308 rows
- download size: 2.19GB
## Notes
Preprocessed with namu-wiki-extractor; the following additional preprocessing was applied:
1. Headers removed, e.g. `== 개요 ==`
1. Tables removed
1. `[age(1997-01-01)]` macros resolved as of the preprocessing date (October 2, 2022)
1. `[math(a / b + c)]` markup is not removed.
1. Known issue: when math markup appears inside a footnote, the footnote is not preprocessed.
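A minimal sketch approximating the header-removal and `[age(...)]` resolution steps listed above (illustrative only — these regexes are assumptions, not the actual namu-wiki-extractor code):

```python
import re
from datetime import date

HEADER_RE = re.compile(r"^=+[^=\n]+=+\s*$", re.M)
AGE_RE = re.compile(r"\[age\((\d{4})-(\d{2})-(\d{2})\)\]")

def strip_headers(text: str) -> str:
    """Remove namu-wiki style headers such as `== 개요 ==`."""
    return HEADER_RE.sub("", text)

def resolve_age(text: str, today: date = date(2022, 10, 2)) -> str:
    """Replace `[age(YYYY-MM-DD)]` with the age as of `today` (the preprocessing date)."""
    def repl(m):
        born = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
        age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
        return str(age)
    return AGE_RE.sub(repl, text)
```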
## Usage
```bash
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset("heegyu/namuwiki-extracted")
print(dataset["train"][0])
```
```
{
'title': '!!아앗!!',
'text': '!!ああっと!! ▲신 세계수의 미궁 2에서 뜬 !!아앗!! 세계수의 미궁 시리즈에 전통으로 등장하는 대사. 2편부터 등장했으며 훌륭한 사망 플래그의 예시이다. 세계수의 모험가들이 탐험하는 던전인 수해의 구석구석에는 채취/벌채/채굴 포인트가 있으며, 이를 위한 채집 스킬에 ...',
'contributors': '110.46.34.123,kirby10,max0243,218.54.117.149,ruby3141,121.165.63.239,iviyuki,1.229.200.194,anatra95,kiri47,175.127.134.2,nickchaos71,chkong1998,kiwitree2,namubot,huwieblusnow',
'namespace': ''
}
``` | 1,081 | [
[
-0.037353515625,
-0.0447998046875,
0.01131439208984375,
0.01288604736328125,
-0.024566650390625,
-0.028656005859375,
-0.0093536376953125,
-0.0090484619140625,
0.03179931640625,
0.03643798828125,
-0.039154052734375,
-0.038848876953125,
-0.039215087890625,
0.0... |
pkavumba/balanced-copa | 2022-10-03T00:39:01.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|copa",
"language:en",
"license:cc-by-4.0",
"region:us"
] | pkavumba | null | null | 0 | 121 | 2022-10-03T00:33:09 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: BCOPA
size_categories:
- unknown
source_datasets:
- extended|copa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# Dataset Card for "Balanced COPA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://balanced-copa.github.io/](https://balanced-copa.github.io/)
- **Repository:** [Balanced COPA](https://github.com/Balanced-COPA/Balanced-COPA)
- **Paper:** [When Choosing Plausible Alternatives, Clever Hans can be Clever](https://aclanthology.org/D19-6004/)
- **Point of Contact:** [@pkavumba](https://github.com/pkavumba)
### Dataset Summary
Bala-COPA: An English language Dataset for Training Robust Commonsense Causal Reasoning Models
The Balanced Choice of Plausible Alternatives dataset is a benchmark for training machine learning models that are robust to superficial cues/spurious correlations. The dataset extends the COPA dataset (Roemmele et al. 2011) with mirrored instances that mitigate token-level superficial cues in the original COPA answers. The superficial cues in the original COPA dataset result from an unbalanced token distribution between the correct and the incorrect answer choices, i.e., some tokens appear more often in the correct choices than in the incorrect ones. Balanced COPA equalizes the token distribution by adding mirrored instances with identical answer choices but different labels.
The details about the creation of Balanced COPA and the implementation of the baselines are available in the paper.
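As an illustration of the balancing idea, a sketch that measures per-token imbalance between correct and incorrect choices (assuming `label` indexes the correct choice, 0 → `choice1`, 1 → `choice2` — an assumption about the encoding, not stated in this card):

```python
from collections import Counter

def token_cue_gap(examples):
    """For each token: (count in correct choices) - (count in incorrect choices).
    A large gap means the token is a superficial cue for the correct answer."""
    correct, incorrect = Counter(), Counter()
    for ex in examples:
        right = ex["choice1"] if ex["label"] == 0 else ex["choice2"]
        wrong = ex["choice2"] if ex["label"] == 0 else ex["choice1"]
        correct.update(right.lower().split())
        incorrect.update(wrong.lower().split())
    return {t: correct[t] - incorrect[t] for t in correct.keys() | incorrect.keys()}
```

In Balanced COPA, each mirrored pair reuses the same two choices with the opposite label, driving every gap toward zero.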
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
- English
## Dataset Structure
### Data Instances
An example of 'validation' looks as follows.
```
{
"id": 1,
"premise": "My body cast a shadow over the grass.",
"choice1": "The sun was rising.",
"choice2": "The grass was cut.",
"question": "cause",
"label": 1,
"mirrored": false,
}
{
"id": 1001,
"premise": "The garden looked well-groomed.",
"choice1": "The sun was rising.",
"choice2": "The grass was cut.",
"question": "cause",
"label": 1,
"mirrored": true,
}
```
### Data Fields
The data fields are the same among all splits.
#### en
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
- `id`: a `int32` feature.
- `mirrored`: a `bool` feature.
### Data Splits
| validation | test |
| ---------: | ---: |
| 1,000 | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@inproceedings{kavumba-etal-2019-choosing,
title = "When Choosing Plausible Alternatives, Clever Hans can be Clever",
author = "Kavumba, Pride and
Inoue, Naoya and
Heinzerling, Benjamin and
Singh, Keshav and
Reisert, Paul and
Inui, Kentaro",
booktitle = "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-6004",
doi = "10.18653/v1/D19-6004",
pages = "33--42",
abstract = "Pretrained language models, such as BERT and RoBERTa, have shown large improvements in the commonsense reasoning benchmark COPA. However, recent work found that many improvements in benchmarks of natural language understanding are not due to models learning the task, but due to their increasing ability to exploit superficial cues, such as tokens that occur more often in the correct answer than the wrong one. Are BERT{'}s and RoBERTa{'}s good performance on COPA also caused by this? We find superficial cues in COPA, as well as evidence that BERT exploits these cues.To remedy this problem, we introduce Balanced COPA, an extension of COPA that does not suffer from easy-to-exploit single token cues. We analyze BERT{'}s and RoBERTa{'}s performance on original and Balanced COPA, finding that BERT relies on superficial cues when they are present, but still achieves comparable performance once they are made ineffective, suggesting that BERT learns the task to a certain degree when forced to. In contrast, RoBERTa does not appear to rely on superficial cues.",
}
@inproceedings{roemmele2011choice,
title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
booktitle={2011 AAAI Spring Symposium Series},
year={2011},
url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```
### Contributions
Thanks to [@pkavumba](https://github.com/pkavumba) for adding this dataset.
| 7,942 | [
[
-0.03985595703125,
-0.05682373046875,
0.0171661376953125,
0.030975341796875,
-0.0231781005859375,
-0.004817962646484375,
-0.0231170654296875,
-0.04296875,
0.031280517578125,
0.03076171875,
-0.053009033203125,
-0.05419921875,
-0.039825439453125,
0.00872802734... |
NeelNanda/c4-code-tokenized-2b | 2022-11-13T21:54:56.000Z | [
"region:us"
] | NeelNanda | null | null | 1 | 121 | 2022-11-13T21:42:47 | ---
dataset_info:
features:
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 13581607992
num_examples: 1657102
download_size: 2953466988
dataset_size: 13581607992
---
# Dataset Card for "c4-code-tokenized-2b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 380 | [
[
-0.0261077880859375,
-0.01396942138671875,
0.005062103271484375,
0.03497314453125,
-0.020843505859375,
0.0186920166015625,
0.0171051025390625,
-0.0266876220703125,
0.051788330078125,
0.031585693359375,
-0.04168701171875,
-0.05670166015625,
-0.040985107421875,
... |
bigbio/progene | 2022-12-22T15:46:19.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | bigbio | The Protein/Gene corpus was developed at the JULIE Lab Jena under supervision of Prof. Udo Hahn.
The executing scientist was Dr. Joachim Wermter.
The main annotator was Dr. Rico Pusch who is an expert in biology.
The corpus was developed in the context of the StemNet project (http://www.stemnet.de/). | @inproceedings{faessler-etal-2020-progene,
title = "{P}ro{G}ene - A Large-scale, High-Quality Protein-Gene Annotated Benchmark Corpus",
author = "Faessler, Erik and
Modersohn, Luise and
Lohr, Christina and
Hahn, Udo",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.564",
pages = "4585--4596",
abstract = "Genes and proteins constitute the fundamental entities of molecular genetics. We here introduce ProGene (formerly called FSU-PRGE), a corpus that reflects our efforts to cope with this important class of named entities within the framework of a long-lasting large-scale annotation campaign at the Jena University Language {\&} Information Engineering (JULIE) Lab. We assembled the entire corpus from 11 subcorpora covering various biological domains to achieve an overall subdomain-independent corpus. It consists of 3,308 MEDLINE abstracts with over 36k sentences and more than 960k tokens annotated with nearly 60k named entity mentions. Two annotators strove for carefully assigning entity mentions to classes of genes/proteins as well as families/groups, complexes, variants and enumerations of those where genes and proteins are represented by a single class. The main purpose of the corpus is to provide a large body of consistent and reliable annotations for supervised training and evaluation of machine learning algorithms in this relevant domain. Furthermore, we provide an evaluation of two state-of-the-art baseline systems {---} BioBert and flair {---} on the ProGene corpus. We make the evaluation datasets and the trained models available to encourage comparable evaluations of new methods in the future.",
language = "English",
ISBN = "979-10-95546-34-4",
} | 2 | 121 | 2022-11-13T22:11:35 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: ProGene
homepage: https://zenodo.org/record/3698568#.YlVHqdNBxeg
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for ProGene
## Dataset Description
- **Homepage:** https://zenodo.org/record/3698568#.YlVHqdNBxeg
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The Protein/Gene corpus was developed at the JULIE Lab Jena under supervision of Prof. Udo Hahn.
The executing scientist was Dr. Joachim Wermter.
The main annotator was Dr. Rico Pusch who is an expert in biology.
The corpus was developed in the context of the StemNet project (http://www.stemnet.de/).
## Citation Information
```
@inproceedings{faessler-etal-2020-progene,
title = "{P}ro{G}ene - A Large-scale, High-Quality Protein-Gene Annotated Benchmark Corpus",
author = "Faessler, Erik and
Modersohn, Luise and
Lohr, Christina and
Hahn, Udo",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.564",
pages = "4585--4596",
abstract = "Genes and proteins constitute the fundamental entities of molecular genetics. We here introduce ProGene (formerly called FSU-PRGE), a corpus that reflects our efforts to cope with this important class of named entities within the framework of a long-lasting large-scale annotation campaign at the Jena University Language {\&} Information Engineering (JULIE) Lab. We assembled the entire corpus from 11 subcorpora covering various biological domains to achieve an overall subdomain-independent corpus. It consists of 3,308 MEDLINE abstracts with over 36k sentences and more than 960k tokens annotated with nearly 60k named entity mentions. Two annotators strove for carefully assigning entity mentions to classes of genes/proteins as well as families/groups, complexes, variants and enumerations of those where genes and proteins are represented by a single class. The main purpose of the corpus is to provide a large body of consistent and reliable annotations for supervised training and evaluation of machine learning algorithms in this relevant domain. Furthermore, we provide an evaluation of two state-of-the-art baseline systems {---} BioBert and flair {---} on the ProGene corpus. We make the evaluation datasets and the trained models available to encourage comparable evaluations of new methods in the future.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
| 2,753 | [
[
-0.04364013671875,
-0.0287017822265625,
0.01056671142578125,
-0.005039215087890625,
-0.01384735107421875,
-0.0018739700317382812,
-0.01320648193359375,
-0.045013427734375,
0.031402587890625,
0.0255279541015625,
-0.0439453125,
-0.040130615234375,
-0.0528564453125... |
Muennighoff/natural-instructions | 2022-12-23T20:08:44.000Z | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"region:us"
] | Muennighoff | null | null | 22 | 121 | 2022-12-17T21:45:01 | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
task_categories:
- other
---
Preprocessed version of Super-Natural-Instructions from https://github.com/allenai/natural-instructions/tree/master/splits. The same inputs may appear with different outputs; to avoid duplicate inputs, deduplicate by the `id` or the `inputs` field.
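A minimal sketch of the suggested deduplication over plain rows (the `load_dataset` call is omitted; only the field names `id` and `inputs` are taken from the description above):

```python
def dedupe(rows, key="id"):
    """Keep the first row seen for each distinct value of `key` ('id' or 'inputs')."""
    seen, out = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out
```

With the `datasets` library the same idea can be applied via `Dataset.filter` over a shared `seen` set in a single process.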
Train Tasks:
```
['task001_quoref_question_generation', 'task002_quoref_answer_generation', 'task022_cosmosqa_passage_inappropriate_binary', 'task023_cosmosqa_question_generation', 'task024_cosmosqa_answer_generation', 'task025_cosmosqa_incorrect_answer_generation', 'task026_drop_question_generation', 'task027_drop_answer_type_generation', 'task028_drop_answer_generation', 'task043_essential_terms_answering_incomplete_questions', 'task044_essential_terms_identifying_essential_words', 'task045_miscellaneous_sentence_paraphrasing', 'task046_miscellaneous_question_typing', 'task047_miscellaneous_answering_science_questions', 'task059_ropes_story_generation', 'task060_ropes_question_generation', 'task061_ropes_answer_generation', 'task062_bigbench_repeat_copy_logic', 'task063_first_i_elements', 'task064_all_elements_except_first_i', 'task065_timetravel_consistent_sentence_classification', 'task066_timetravel_binary_consistency_classification', 'task067_abductivenli_answer_generation', 'task068_abductivenli_incorrect_answer_generation', 'task069_abductivenli_classification', 'task070_abductivenli_incorrect_classification', 'task071_abductivenli_answer_generation', 'task072_abductivenli_answer_generation', 'task073_commonsenseqa_answer_generation', 'task074_squad1.1_question_generation', 'task075_squad1.1_answer_generation', 'task076_splash_correcting_sql_mistake', 'task077_splash_explanation_to_sql', 'task078_all_elements_except_last_i', 'task079_conala_concat_strings', 'task080_piqa_answer_generation', 'task081_piqa_wrong_answer_generation', 'task082_babi_t1_single_supporting_fact_question_generation', 'task083_babi_t1_single_supporting_fact_answer_generation', 'task084_babi_t1_single_supporting_fact_identify_relevant_fact', 'task085_unnatural_addsub_arithmetic', 'task087_new_operator_addsub_arithmetic', 'task088_identify_typo_verification', 'task089_swap_words_verification', 'task090_equation_learner_algebra', 'task091_all_elements_from_index_i_to_j', 
'task092_check_prime_classification', 'task093_conala_normalize_lists', 'task094_conala_calculate_mean', 'task095_conala_max_absolute_value', 'task096_conala_list_index_subtraction', 'task097_conala_remove_duplicates', 'task098_conala_list_intersection', 'task099_reverse_elements_between_index_i_and_j', 'task100_concatenate_all_elements_from_index_i_to_j', 'task101_reverse_and_concatenate_all_elements_from_index_i_to_j', 'task103_facts2story_long_text_generation', 'task104_semeval_2019_task10_closed_vocabulary_mathematical_answer_generation', 'task105_story_cloze-rocstories_sentence_generation', 'task107_splash_question_to_sql', 'task1087_two_number_sum', 'task1088_array_of_products', 'task1089_check_monotonic_array', 'task108_contextualabusedetection_classification', 'task109_smsspamcollection_spamsmsdetection', 'task110_logic2text_sentence_generation', 'task111_asset_sentence_simplification', 'task112_asset_simple_sentence_identification', 'task1135_xcsr_en_commonsense_mc_classification', 'task113_count_frequency_of_letter', 'task1146_country_capital', 'task1147_country_currency', 'task1148_maximum_ascii_value', 'task1149_item_check_edible', 'task114_is_the_given_word_longest', 'task1150_delete_max_min', 'task1151_swap_max_min', 'task115_help_advice_classification', 'task1167_penn_treebank_coarse_pos_tagging', 'task1168_brown_coarse_pos_tagging', 'task116_com2sense_commonsense_reasoning', 'task1186_nne_hrngo_classification', 'task1188_count_max_freq_char', 'task1189_check_char_in_string', 'task118_semeval_2019_task10_open_vocabulary_mathematical_answer_generation', 'task1190_add_integer_to_list', 'task1191_food_veg_nonveg', 'task1192_food_flavor_profile', 'task1193_food_course_classification', 'task1194_kth_largest_element', 'task1196_atomic_classification_oeffect', 'task1197_atomic_classification_oreact', 'task1198_atomic_classification_owant', 'task1199_atomic_classification_xattr', 'task119_semeval_2019_task10_geometric_mathematical_answer_generation', 
'task1200_atomic_classification_xeffect', 'task1201_atomic_classification_xintent', 'task1202_atomic_classification_xneed', 'task1203_atomic_classification_xreact', 'task1204_atomic_classification_hinderedby', 'task1205_atomic_classification_isafter', 'task1206_atomic_classification_isbefore', 'task1207_atomic_classification_atlocation', 'task1208_atomic_classification_xreason', 'task1209_atomic_classification_objectuse', 'task1210_atomic_classification_madeupof', 'task1211_atomic_classification_hassubevent', 'task1212_atomic_classification_hasproperty', 'task1213_atomic_classification_desires', 'task1214_atomic_classification_xwant', 'task1215_atomic_classification_capableof', 'task1216_atomic_classification_causes', 'task1217_atomic_answer_generation', 'task122_conala_list_index_addition', 'task123_conala_sort_dictionary', 'task124_conala_pair_averages', 'task125_conala_pair_differences', 'task126_scan_structured_text_generation_command_action_all', 'task127_scan_long_text_generation_action_command_all', 'task1283_hrngo_quality_classification', 'task1284_hrngo_informativeness_classification', 'task1285_kpa_keypoint_matching', 'task1286_openbookqa_question_answering', 'task1288_glue_mrpc_paraphrasing', 'task1289_trec_classification', 'task128_scan_structured_text_generation_command_action_short', 'task1290_xsum_summarization', 'task1291_multi_news_summarization', 'task1292_yelp_review_full_text_categorization', 'task1293_kilt_tasks_hotpotqa_question_answering', 'task1294_wiki_qa_answer_verification', 'task1295_adversarial_qa_question_answering', 'task1296_wiki_hop_question_answering', 'task129_scan_long_text_generation_action_command_short', 'task1308_amazonreview_category_classification', 'task1309_amazonreview_summary_classification', 'task130_scan_structured_text_generation_command_action_long', 'task1310_amazonreview_rating_classification', 'task1311_amazonreview_rating_classification', 'task1312_amazonreview_polarity_classification', 
'task1313_amazonreview_polarity_classification', 'task1314_country_abbreviation', 'task1315_find_range_array', 'task1316_remove_duplicates_string', 'task1317_country_calling_code', 'task1318_country_national_dish', 'task1319_country_by_barcode_prefix', 'task131_scan_long_text_generation_action_command_long', 'task1320_country_domain_tld', 'task1321_country_continent', 'task1322_country_government_type', 'task1325_qa_zre_question_generation_on_subject_relation', 'task1326_qa_zre_question_generation_from_answer', 'task1327_qa_zre_answer_generation_from_question', 'task1328_qa_zre_relation_generation_from_question', 'task132_dais_text_modification', 'task1331_reverse_array', 'task1332_check_leap_year', 'task1333_check_validity_date_ddmmyyyy', 'task1336_peixian_equity_evaluation_corpus_gender_classifier', 'task1338_peixian_equity_evaluation_corpus_sentiment_classifier', 'task1339_peixian_equity_evaluation_corpus_text_completion', 'task1340_msr_text_compression_compression', 'task1341_msr_text_classification', 'task1346_glue_cola_grammatical_correctness_classification', 'task1347_glue_sts-b_similarity_classification', 'task1354_sent_comp_classification', 'task1355_sent_comp_summarization', 'task1359_numer_sense_answer_generation', 'task1360_numer_sense_multiple_choice_qa_generation', 'task1361_movierationales_classification', 'task1364_hans_answer_generation', 'task1366_healthfact_classification', 'task1368_healthfact_sentence_generation', 'task1369_healthfact_sentence_generation', 'task1378_quarel_correct_answer_generation', 'task1379_quarel_incorrect_answer_generation', 'task137_detoxifying-lms_classification_toxicity', 'task1380_quarel_correct_option_generation', 'task1381_quarel_incorrect_option_generation', 'task1382_quarel_write_correct_answer', 'task1383_quarel_write_incorrect_answer', 'task1384_deal_or_no_dialog_classification', 'task1389_hellaswag_completion', 'task138_detoxifying-lms_classification_fluency', 'task1398_obqa_question_generation', 
'task1399_obqa_answer_generation', 'task139_detoxifying-lms_classification_topicality', 'task1400_obqa_incorrect_answer_generation', 'task1401_obqa_sentence_generation', 'task1403_check_validity_date_mmddyyyy', 'task1404_date_conversion', 'task1405_find_median', 'task1406_kth_smallest_element', 'task140_detoxifying-lms_classification_style', 'task1412_web_questions_question_answering', 'task1418_bless_semantic_relation_classification', 'task1419_mathqa_gain', 'task141_odd-man-out_classification_category', 'task1420_mathqa_general', 'task1421_mathqa_other', 'task1422_mathqa_physics', 'task1423_mathqa_geometry', 'task1424_mathqa_probability', 'task1425_country_iso_numeric', 'task1426_country_independence_year', 'task1427_country_region_in_world', 'task1428_country_surface_area', 'task1429_evalution_semantic_relation_classification', 'task142_odd-man-out_classification_no_category', 'task1431_head_qa_answer_generation', 'task1434_head_qa_classification', 'task143_odd-man-out_classification_generate_category', 'task1443_string_to_number', 'task1444_round_power_of_two', 'task1445_closest_integers', 'task1446_farthest_integers', 'task1447_drug_extraction_ade', 'task1448_disease_entity_extraction_ncbi_dataset', 'task1449_disease_entity_extraction_bc5cdr_dataset', 'task144_subjqa_question_answering', 'task1451_drug_dose_extraction', 'task1452_location_entity_extraction_btc_corpus', 'task1453_person_entity_extraction_btc_corpus', 'task145_afs_argument_similarity_death_penalty', 'task146_afs_argument_similarity_gun_control', 'task1479_organization_entity_extraction_btc_corpus', 'task147_afs_argument_similarity_gay_marriage', 'task1480_gene_extraction_jnlpba_dataset', 'task1481_gene_extraction_bc2gm_dataset', 'task1482_gene_extraction_chemprot_dataset', 'task1483_chemical_extraction_chemprot_dataset', 'task1484_gene_extraction_linnaeus_dataset', 'task1485_organ_extraction_anem_dataset', 'task1486_cell_extraction_anem_dataset', 
'task1487_organism_substance_extraction_anem_dataset', 'task1488_sarcasmdetection_headline_classification', 'task1489_sarcasmdetection_tweet_classification', 'task148_afs_argument_quality_gay_marriage', 'task1495_adverse_drug_event_classification', 'task1498_24hour_to_12hour_clock', 'task1499_dstc3_summarization', 'task149_afs_argument_quality_death_penalty', 'task1500_dstc3_classification', 'task1501_dstc3_answer_generation', 'task1502_hatexplain_classification', 'task1503_hatexplain_classification', 'task1504_hatexplain_answer_generation', 'task1505_root09_semantic_relation_classification', 'task1506_celebrity_minimal_dob_span', 'task1507_boolean_temporal_reasoning', 'task1508_wordnet_antonyms', 'task1509_evalution_antonyms', 'task150_afs_argument_quality_gun_control', 'task1510_evalution_relation_extraction', 'task1517_limit_classfication', 'task1518_limit_answer_generation', 'task1519_qa_srl_question_generation', 'task151_tomqa_find_location_easy_clean', 'task1520_qa_srl_answer_generation', 'task152_tomqa_find_location_easy_noise', 'task153_tomqa_find_location_hard_clean', 'task1541_agnews_classification', 'task1542_every_ith_element_from_starting', 'task1548_wiqa_binary_classification', 'task1549_wiqa_answer_generation_missing_step', 'task154_tomqa_find_location_hard_noise', 'task1551_every_ith_element_from_kth_element', 'task1553_cnn_dailymail_summarization', 'task1559_blimp_binary_classification', 'task155_count_nouns_verbs', 'task1560_blimp_binary_classification', 'task1564_triviaqa_answer_generation', 'task1565_triviaqa_classification', 'task1566_propara_structured_text_generation', 'task1567_propara_question_generation', 'task1568_propara_classification', 'task156_codah_classification_adversarial', 'task1572_samsum_summary', 'task1573_samsum_classification', 'task157_count_vowels_and_consonants', 'task1580_eqasc-perturbed_question_generation', 'task1581_eqasc-perturbed_answer_generation', 'task1582_bless_hypernym_generation', 
'task1583_bless_meronym_classification', 'task1584_evalution_meronym_classification', 'task1585_root09_hypernym_generation', 'task158_count_frequency_of_words', 'task1590_diplomacy_text_generation', 'task1592_yahoo_answers_topics_classfication', 'task1593_yahoo_answers_topics_classification', 'task1594_yahoo_answers_topics_question_generation', 'task1595_event2mind_text_generation_1', 'task1596_event2mind_text_generation_2', 'task1599_smcalflow_classification', 'task159_check_frequency_of_words_in_sentence_pair', 'task1600_smcalflow_sentence_generation', 'task1601_webquestions_answer_generation', 'task1602_webquestion_question_genreation', 'task1603_smcalflow_sentence_generation', 'task1604_ethos_text_classification', 'task1605_ethos_text_classification', 'task1606_ethos_text_classification', 'task1607_ethos_text_classification', 'task1608_xquad_en_answer_generation', 'task1609_xquad_en_question_generation', 'task160_replace_letter_in_a_sentence', 'task161_count_words_containing_letter', 'task162_count_words_starting_with_letter', 'task163_count_words_ending_with_letter', 'task1645_medical_question_pair_dataset_text_classification', 'task164_mcscript_question_answering_text', 'task1656_gooaq_answer_generation', 'task1657_gooaq_question_generation', 'task165_mcscript_question_answering_commonsense', 'task1660_super_glue_question_generation', 'task1661_super_glue_classification', 'task1665_trainglecopa_question_generation', 'task1669_md_gender_bias_text_modification', 'task166_clariq_sentence_generation', 'task1670_md_gender_bias_text_modification', 'task1678_mathqa_answer_selection', 'task167_strategyqa_question_generation', 'task168_strategyqa_question_decomposition', 'task169_strategyqa_sentence_generation', 'task1703_ljspeech_textmodification', 'task1704_ljspeech_textmodification', 'task1705_ljspeech_classification', 'task1706_ljspeech_classification', 'task170_hotpotqa_answer_generation', 'task1711_poki_text_generation', 'task1712_poki_classification', 
'task1713_convai3_sentence_generation', 'task1714_convai3_sentence_generation', 'task1720_civil_comments_toxicity_classification', 'task1721_civil_comments_obscenity_classification', 'task1722_civil_comments_threat_classification', 'task1723_civil_comments_sexuallyexplicit_classification', 'task1724_civil_comments_insult_classification', 'task1725_civil_comments_severtoxicity_classification', 'task1726_mathqa_correct_answer_generation', 'task1727_wiqa_what_is_the_effect', 'task1729_personachat_generate_next', 'task1730_personachat_choose_next', 'task1731_quartz_question_answering', 'task176_break_decompose_questions', 'task177_para-nmt_paraphrasing', 'task178_quartz_question_answering', 'task179_participant_extraction', 'task180_intervention_extraction', 'task181_outcome_extraction', 'task182_duorc_question_generation', 'task183_rhyme_generation', 'task184_break_generate_question', 'task191_hotpotqa_question_generation', 'task192_hotpotqa_sentence_generation', 'task193_duorc_question_generation', 'task194_duorc_answer_generation', 'task195_sentiment140_classification', 'task196_sentiment140_answer_generation', 'task205_remove_even_elements', 'task206_collatz_conjecture', 'task207_max_element_lists', 'task208_combinations_of_list', 'task209_stancedetection_classification', 'task210_logic2text_structured_text_generation', 'task211_logic2text_classification', 'task212_logic2text_classification', 'task223_quartz_explanation_generation', 'task227_clariq_classification', 'task228_arc_answer_generation_easy', 'task229_arc_answer_generation_hard', 'task243_count_elements_in_set_intersection', 'task244_count_elements_in_set_union', 'task245_check_presence_in_set_intersection', 'task246_dream_question_generation', 'task247_dream_answer_generation', 'task248_dream_classification', 'task267_concatenate_and_reverse_all_elements_from_index_i_to_j', 'task268_casehold_legal_answer_generation', 'task269_csrg_counterfactual_story_generation', 
'task270_csrg_counterfactual_context_generation', 'task274_overruling_legal_classification', 'task275_enhanced_wsc_paraphrase_generation', 'task276_enhanced_wsc_classification', 'task277_stereoset_sentence_generation_stereotype', 'task278_stereoset_sentence_generation_antistereotype', 'task279_stereoset_classification_stereotype', 'task280_stereoset_classification_stereotype_type', 'task283_dream_incorrect_answer_generation', 'task284_imdb_classification', 'task285_imdb_answer_generation', 'task286_olid_offense_judgment', 'task287_casehold_legal_incorrect_answer_generation', 'task291_semeval_2020_task4_commonsense_validation', 'task292_storycommonsense_character_text_generation', 'task293_storycommonsense_emotion_text_generation', 'task294_storycommonsense_motiv_text_generation', 'task295_semeval_2020_task4_commonsense_reasoning', 'task296_storycloze_correct_end_classification', 'task297_storycloze_incorrect_end_classification', 'task298_storycloze_correct_end_classification', 'task299_storycloze_sentence_generation', 'task300_storycloze_order_generation', 'task301_record_question_generation', 'task302_record_classification', 'task303_record_incorrect_answer_generation', 'task305_jeopardy_answer_generation_normal', 'task306_jeopardy_answer_generation_double', 'task307_jeopardy_answer_generation_final', 'task308_jeopardy_answer_generation_all', 'task309_race_answer_generation', 'task310_race_classification', 'task311_race_question_generation', 'task316_crows-pairs_classification_stereotype', 'task317_crows-pairs_classification_stereotype_type', 'task318_stereoset_classification_gender', 'task319_stereoset_classification_profession', 'task320_stereoset_classification_race', 'task321_stereoset_classification_religion', 'task322_jigsaw_classification_threat', 'task323_jigsaw_classification_sexually_explicit', 'task324_jigsaw_classification_disagree', 'task325_jigsaw_classification_identity_attack', 'task326_jigsaw_classification_obscene', 
'task327_jigsaw_classification_toxic', 'task328_jigsaw_classification_insult', 'task333_hateeval_classification_hate_en', 'task335_hateeval_classification_aggresive_en', 'task337_hateeval_classification_individual_en', 'task339_record_answer_generation', 'task340_winomt_classification_gender_pro', 'task341_winomt_classification_gender_anti', 'task342_winomt_classification_profession_pro', 'task343_winomt_classification_profession_anti', 'task344_hybridqa_answer_generation', 'task345_hybridqa_answer_generation', 'task346_hybridqa_classification', 'task347_hybridqa_incorrect_answer_generation', 'task350_winomt_classification_gender_identifiability_pro', 'task351_winomt_classification_gender_identifiability_anti', 'task353_casino_classification_negotiation_elicit_pref', 'task354_casino_classification_negotiation_no_need', 'task355_casino_classification_negotiation_other_need', 'task356_casino_classification_negotiation_self_need', 'task357_casino_classification_negotiation_small_talk', 'task358_casino_classification_negotiation_uv_part', 'task359_casino_classification_negotiation_vouch_fair', 'task363_sst2_polarity_classification', 'task364_regard_social_impact_classification', 'task365_synthetic_remove_vowels', 'task366_synthetic_return_primes', 'task367_synthetic_remove_floats', 'task368_synthetic_even_or_odd_calculation', 'task369_synthetic_remove_odds', 'task370_synthetic_remove_divisible_by_3', 'task371_synthetic_product_of_list', 'task372_synthetic_palindrome_numbers', 'task373_synthetic_round_tens_place', 'task374_synthetic_pos_or_neg_calculation', 'task375_classify_type_of_sentence_in_debate', 'task376_reverse_order_of_words', 'task377_remove_words_of_given_length', 'task378_reverse_words_of_given_length', 'task379_agnews_topic_classification', 'task380_boolq_yes_no_question', 'task381_boolq_question_generation', 'task382_hybridqa_answer_generation', 'task383_matres_classification', 'task384_socialiqa_question_classification', 
'task385_socialiqa_incorrect_answer_generation', 'task386_semeval_2018_task3_irony_detection', 'task387_semeval_2018_task3_irony_classification', 'task388_torque_token_classification', 'task389_torque_generate_temporal_question', 'task390_torque_text_span_selection', 'task397_semeval_2018_task1_tweet_anger_detection', 'task398_semeval_2018_task1_tweet_joy_detection', 'task399_semeval_2018_task1_tweet_sadness_detection', 'task400_paws_paraphrase_classification', 'task403_creak_commonsense_inference', 'task405_narrativeqa_question_generation', 'task413_mickey_en_sentence_perturbation_generation', 'task428_senteval_inversion', 'task429_senteval_tense', 'task430_senteval_subject_count', 'task431_senteval_object_count', 'task453_swag_answer_generation', 'task454_swag_incorrect_answer_generation', 'task455_swag_context_generation', 'task456_matres_intention_classification', 'task457_matres_conditional_classification', 'task458_matres_negation_classification', 'task459_matres_static_classification', 'task460_qasper_answer_generation', 'task461_qasper_question_generation', 'task462_qasper_classification', 'task469_mrqa_answer_generation', 'task470_mrqa_question_generation', 'task471_haspart_answer_generation', 'task472_haspart_classification', 'task475_yelp_polarity_classification', 'task476_cls_english_books_classification', 'task477_cls_english_dvd_classification', 'task478_cls_english_music_classification', 'task488_extract_all_alphabetical_elements_from_list_in_order', 'task489_mwsc_question_generation', 'task490_mwsc_options_generation', 'task491_mwsc_answer_generation', 'task492_mwsc_incorrect_answer_generation', 'task493_review_polarity_classification', 'task494_review_polarity_answer_generation', 'task495_semeval_headline_classification', 'task496_semeval_answer_generation', 'task497_extract_all_numbers_from_list_in_order', 'task499_extract_and_add_all_numbers_from_list', 'task504_count_all_alphabetical_elements_in_list', 
'task505_count_all_numerical_elements_in_list', 'task506_position_of_all_alphabetical_elements_in_list', 'task507_position_of_all_numerical_elements_in_list', 'task509_collate_of_all_alphabetical_and_numerical_elements_in_list_separately', 'task512_twitter_emotion_classification', 'task513_argument_stance_classification', 'task514_argument_consequence_classification', 'task515_senteval_odd_word_out', 'task516_senteval_conjoints_inversion', 'task517_emo_classify_emotion_of_dialogue', 'task518_emo_different_dialogue_emotions', 'task521_trivia_question_classification', 'task522_news_editorial_summary', 'task523_find_if_numbers_or_alphabets_are_more_in_list', 'task547_alt_translation_entk_en', 'task550_discofuse_sentence_generation', 'task560_alt_translation_en_entk', 'task563_discofuse_answer_generation', 'task564_discofuse_classification', 'task565_circa_answer_generation', 'task566_circa_classification', 'task567_circa_text_generation', 'task568_circa_question_generation', 'task573_air_dialogue_classification', 'task574_air_dialogue_sentence_generation', 'task575_air_dialogue_classification', 'task576_curiosity_dialogs_answer_generation', 'task577_curiosity_dialogs_classification', 'task578_curiosity_dialogs_answer_generation', 'task579_socialiqa_classification', 'task580_socialiqa_answer_generation', 'task581_socialiqa_question_generation', 'task582_naturalquestion_answer_generation', 'task583_udeps_eng_coarse_pos_tagging', 'task584_udeps_eng_fine_pos_tagging', 'task585_preposition_classification', 'task586_amazonfood_polarity_classification', 'task587_amazonfood_polarity_correction_classification', 'task588_amazonfood_rating_classification', 'task589_amazonfood_summary_text_generation', 'task590_amazonfood_summary_correction_classification', 'task591_sciq_answer_generation', 'task592_sciq_incorrect_answer_generation', 'task593_sciq_explanation_generation', 'task594_sciq_question_generation', 'task595_mocha_answer_generation', 'task596_mocha_question_generation', 
'task597_cuad_answer_generation', 'task598_cuad_answer_generation', 'task599_cuad_question_generation', 'task600_find_the_longest_common_substring_in_two_strings', 'task605_find_the_longest_common_subsequence_in_two_lists', 'task606_sum_of_all_numbers_in_list_between_positions_i_and_j', 'task607_sbic_intentional_offense_binary_classification', 'task608_sbic_sexual_offense_binary_classification', 'task609_sbic_potentially_offense_binary_classification', 'task610_conllpp_ner', 'task611_mutual_multi_turn_dialogue', 'task615_moviesqa_answer_generation', 'task616_cola_classification', 'task617_amazonreview_category_text_generation', 'task618_amazonreview_summary_text_generation', 'task622_replace_alphabets_in_a_list_by_their_position_in_english_alphabet', 'task625_xlwic_true_or_false_answer_generation', 'task626_xlwic_sentence_based_on_given_word_sentence_generation', 'task627_xlwic_word_with_same_meaning_sentence_generation', 'task628_xlwic_word_with_different_meaning_sentence_generation', 'task629_dbpedia_14_classification', 'task630_dbpedia_14_classification', 'task631_dbpedia_14_incorrect_answer_generation', 'task632_dbpedia_14_classification', 'task633_dbpedia_14_answer_generation', 'task636_extract_and_sort_unique_alphabets_in_a_list', 'task637_extract_and_sort_unique_digits_in_a_list', 'task638_multi_woz_classification', 'task639_multi_woz_user_utterance_generation', 'task649_race_blank_question_generation', 'task664_mmmlu_answer_generation_abstract_algebra', 'task665_mmmlu_answer_generation_anatomy', 'task666_mmmlu_answer_generation_astronomy', 'task667_mmmlu_answer_generation_business_ethics', 'task668_extreme_abstract_summarization', 'task672_amazon_and_yelp_summarization_dataset_summarization', 'task672_nummersense', 'task673_google_wellformed_query_classification', 'task674_google_wellformed_query_sentence_generation', 'task675_google_wellformed_query_sentence_generation', 'task679_hope_edi_english_text_classification', 
'task681_hope_edi_malayalam_text_classification', 'task682_online_privacy_policy_text_classification', 'task683_online_privacy_policy_text_purpose_answer_generation', 'task684_online_privacy_policy_text_information_type_generation', 'task685_mmmlu_answer_generation_clinical_knowledge', 'task686_mmmlu_answer_generation_college_biology', 'task687_mmmlu_answer_generation_college_chemistry', 'task688_mmmlu_answer_generation_college_computer_science', 'task689_mmmlu_answer_generation_college_mathematics', 'task690_mmmlu_answer_generation_college_medicine', 'task691_mmmlu_answer_generation_college_physics', 'task692_mmmlu_answer_generation_computer_security', 'task693_mmmlu_answer_generation_conceptual_physics', 'task694_mmmlu_answer_generation_econometrics', 'task695_mmmlu_answer_generation_electrical_engineering', 'task696_mmmlu_answer_generation_elementary_mathematics', 'task697_mmmlu_answer_generation_formal_logic', 'task698_mmmlu_answer_generation_global_facts', 'task699_mmmlu_answer_generation_high_school_biology', 'task700_mmmlu_answer_generation_high_school_chemistry', 'task701_mmmlu_answer_generation_high_school_computer_science', 'task702_mmmlu_answer_generation_high_school_european_history', 'task703_mmmlu_answer_generation_high_school_geography', 'task704_mmmlu_answer_generation_high_school_government_and_politics', 'task705_mmmlu_answer_generation_high_school_macroeconomics', 'task706_mmmlu_answer_generation_high_school_mathematics', 'task707_mmmlu_answer_generation_high_school_microeconomics', 'task708_mmmlu_answer_generation_high_school_physics', 'task709_mmmlu_answer_generation_high_school_psychology', 'task710_mmmlu_answer_generation_high_school_statistics', 'task711_mmmlu_answer_generation_high_school_us_history', 'task712_mmmlu_answer_generation_high_school_world_history', 'task713_mmmlu_answer_generation_human_aging', 'task714_mmmlu_answer_generation_human_sexuality', 'task715_mmmlu_answer_generation_international_law', 
'task716_mmmlu_answer_generation_jurisprudence', 'task717_mmmlu_answer_generation_logical_fallacies', 'task718_mmmlu_answer_generation_machine_learning', 'task719_mmmlu_answer_generation_management', 'task720_mmmlu_answer_generation_marketing', 'task721_mmmlu_answer_generation_medical_genetics', 'task722_mmmlu_answer_generation_random_topic', 'task723_mmmlu_answer_generation_moral_disputes', 'task724_mmmlu_answer_generation_moral_scenarios', 'task725_mmmlu_answer_generation_nutrition', 'task726_mmmlu_answer_generation_philosophy', 'task727_mmmlu_answer_generation_prehistory', 'task728_mmmlu_answer_generation_professional_accounting', 'task729_mmmlu_answer_generation_professional_law', 'task730_mmmlu_answer_generation_professional_medicine', 'task731_mmmlu_answer_generation_professional_psychology', 'task732_mmmlu_answer_generation_public_relations', 'task733_mmmlu_answer_generation_security_studies', 'task734_mmmlu_answer_generation_sociology', 'task735_mmmlu_answer_generation_us_foreign_policy', 'task736_mmmlu_answer_generation_virology', 'task737_mmmlu_answer_generation_world_religions', 'task739_lhoestq_question_generation', 'task740_lhoestq_answer_generation_quantity', 'task741_lhoestq_answer_generation_place', 'task742_lhoestq_answer_generation_frequency', 'task745_ai2_arithmetic_questions_arithmetic', 'task746_yelp_restaurant_review_classification', 'task750_aqua_multiple_choice_answering', 'task751_svamp_subtraction_question_answering', 'task752_svamp_multiplication_question_answering', 'task753_svamp_addition_question_answering', 'task754_svamp_common-division_question_answering', 'task755_find_longest_substring_and_replace_its_sorted_lowercase_version_in_both_lists', 'task756_find_longert_substring_and_return_all_unique_alphabets_in_it', 'task761_app_review_classification', 'task766_craigslist_bargains_classification', 'task767_craigslist_bargains_classification', 'task770_pawsx_english_text_modification', 'task819_pec_sentiment_classification', 
'task820_protoqa_answer_generation', 'task821_protoqa_question_generation', 'task823_peixian-rtgender_sentiment_analysis', 'task833_poem_sentiment_classification', 'task834_mathdataset_classification', 'task835_mathdataset_answer_generation', 'task843_financial_phrasebank_classification', 'task844_financial_phrasebank_classification', 'task845_pubmedqa_question_generation', 'task846_pubmedqa_classification', 'task847_pubmedqa_question_generation', 'task848_pubmedqa_classification', 'task849_pubmedqa_answer_generation', 'task850_synthetic_longest_palindrome', 'task851_synthetic_multiply_evens', 'task852_synthetic_multiply_odds', 'task853_hippocorpus_long_text_generation', 'task854_hippocorpus_classification', 'task855_conv_ai_2_classification', 'task856_conv_ai_2_classification', 'task857_inquisitive_question_generation', 'task858_inquisitive_span_detection', 'task859_prost_question_generation', 'task860_prost_mcq_generation', 'task861_asdiv_addsub_question_answering', 'task861_prost_mcq_answers_generation', 'task862_asdiv_multidiv_question_answering', 'task863_asdiv_multiop_question_answering', 'task864_asdiv_singleop_question_answering', 'task865_mawps_addsub_question_answering', 'task866_mawps_multidiv_question_answering', 'task867_mawps_multiop_question_answering', 'task868_cfq_mcd1_explanation_to_sql', 'task868_mawps_singleop_question_answering', 'task869_cfq_mcd1_sql_to_explanation', 'task870_msmarco_answer_generation', 'task871_msmarco_question_generation', 'task874_opus_xhosanavy_sr', 'task875_emotion_classification', 'task886_quail_question_generation', 'task887_quail_answer_generation', 'task888_reviews_classification', 'task889_goemotions_classification', 'task897_freebase_qa_topic_question_generation', 'task898_freebase_qa_answer_generation', 'task899_freebase_qa_topic_generation', 'task900_freebase_qa_category_classification', 'task901_freebase_qa_category_question_generation', 'task902_deceptive_opinion_spam_classification', 
'task903_deceptive_opinion_spam_classification', 'task904_hate_speech_offensive_classification', 'task905_hate_speech_offensive_classification', 'task906_dialogre_identify_names', 'task907_dialogre_identify_relationships', 'task908_dialogre_identify_familial_relationships', 'task909_dialogre_prevalent_speakers', 'task917_coqa_question_generation', 'task918_coqa_answer_generation', 'task919_coqa_incorrect_answer_generation', 'task921_code_x_glue_information_retreival', 'task922_event2mind_word_generation', 'task923_event2mind_classifier', 'task924_event2mind_word_generation', 'task925_coached_conv_pref_classifier', 'task926_coached_conv_pref_word_generation', 'task927_yelp_negative_to_positive_style_transfer', 'task928_yelp_positive_to_negative_style_transfer', 'task929_products_reviews_classification', 'task933_wiki_auto_style_transfer', 'task934_turk_simplification', 'task955_wiki_auto_style_transfer', 'task956_leetcode_420_strong_password_check', 'task963_librispeech_asr_next_word_prediction', 'task964_librispeech_asr_text_auto_completion', 'task965_librispeech_asr_missing_word_prediction', 'task966_ruletaker_fact_checking_based_on_given_context', 'task967_ruletaker_incorrect_fact_generation_based_on_given_paragraph']
```
Validation Tasks:
```
['task1333_check_validity_date_ddmmyyyy', 'task1403_check_validity_date_mmddyyyy', 'task291_semeval_2020_task4_commonsense_validation']
```
Test Tasks:
```
['task020_mctaco_span_based_question', 'task033_winogrande_answer_generation', 'task034_winogrande_question_modification_object', 'task035_winogrande_question_modification_person', 'task036_qasc_topic_word_to_generate_related_fact', 'task039_qasc_find_overlapping_words', 'task050_multirc_answerability', 'task102_commongen_sentence_generation', 'task104_semeval_2019_task10_closed_vocabulary_mathematical_answer_generation', 'task1152_bard_analogical_reasoning_causation', 'task1153_bard_analogical_reasoning_affordance', 'task1154_bard_analogical_reasoning_travel', 'task1155_bard_analogical_reasoning_trash_or_treasure', 'task1156_bard_analogical_reasoning_tools', 'task1157_bard_analogical_reasoning_rooms_for_containers', 'task1158_bard_analogical_reasoning_manipulating_items', 'task1159_bard_analogical_reasoning_containers', 'task1161_coda19_title_generation', 'task118_semeval_2019_task10_open_vocabulary_mathematical_answer_generation', 'task1195_disflqa_disfluent_to_fluent_conversion', 'task119_semeval_2019_task10_geometric_mathematical_answer_generation', 'task121_zest_text_modification', 'task1336_peixian_equity_evaluation_corpus_gender_classifier', 'task1338_peixian_equity_evaluation_corpus_sentiment_classifier', 'task1339_peixian_equity_evaluation_corpus_text_completion', 'task133_winowhy_reason_plausibility_detection', 'task1342_amazon_us_reviews_title', 'task1344_glue_entailment_classification', 'task1345_glue_qqp_question_paraprashing', 'task1356_xlsum_title_generation', 'task1358_xlsum_title_generation', 'task1385_anli_r1_entailment', 'task1386_anli_r2_entailment', 'task1387_anli_r3_entailment', 'task1388_cb_entailment', 'task1390_wscfixed_coreference', 'task1391_winogrande_easy_answer_generation', 'task1393_superglue_copa_text_completion', 'task1394_meta_woz_task_classification', 'task1407_dart_question_generation', 'task1409_dart_text_generation', 'task1429_evalution_semantic_relation_classification', 'task1439_doqa_cooking_isanswerable', 
'task1442_doqa_movies_isanswerable', 'task1509_evalution_antonyms', 'task1510_evalution_relation_extraction', 'task1516_imppres_naturallanguageinference', 'task1529_scitail1.1_classification', 'task1531_daily_dialog_type_classification', 'task1533_daily_dialog_formal_classification', 'task1534_daily_dialog_question_classification', 'task1540_parsed_pdfs_summarization', 'task1554_scitail_classification', 'task1557_jfleg_answer_generation', 'task1562_zest_text_modification', 'task1584_evalution_meronym_classification', 'task1586_scifact_title_generation', 'task1598_nyc_long_text_generation', 'task1612_sick_label_classification', 'task1615_sick_tclassify_b_relation_a', 'task1622_disfl_qa_text_modication', 'task1624_disfl_qa_question_yesno_classification', 'task1631_openpi_answer_generation', 'task1640_aqa1.0_answerable_unanswerable_question_classification', 'task1659_title_generation', 'task1664_winobias_text_generation', 'task1728_web_nlg_data_to_text', 'task190_snli_classification', 'task199_mnli_classification', 'task200_mnli_entailment_classification', 'task201_mnli_neutral_classification', 'task202_mnli_contradiction_classification', 'task219_rocstories_title_answer_generation', 'task220_rocstories_title_classification', 'task226_english_language_answer_relevance_classification', 'task232_iirc_link_number_classification', 'task233_iirc_link_exists_classification', 'task242_tweetqa_classification', 'task249_enhanced_wsc_pronoun_disambiguation', 'task281_points_of_correspondence', 'task288_gigaword_summarization', 'task290_tellmewhy_question_answerability', 'task291_semeval_2020_task4_commonsense_validation', 'task295_semeval_2020_task4_commonsense_reasoning', 'task304_numeric_fused_head_resolution', 'task329_gap_classification', 'task330_gap_answer_generation', 'task333_hateeval_classification_hate_en', 'task335_hateeval_classification_aggresive_en', 'task337_hateeval_classification_individual_en', 
'task349_squad2.0_answerable_unanswerable_question_classification', 'task362_spolin_yesand_prompt_response_sub_classification', 'task386_semeval_2018_task3_irony_detection', 'task387_semeval_2018_task3_irony_classification', 'task391_causal_relationship', 'task392_inverse_causal_relationship', 'task393_plausible_result_generation', 'task397_semeval_2018_task1_tweet_anger_detection', 'task398_semeval_2018_task1_tweet_joy_detection', 'task399_semeval_2018_task1_tweet_sadness_detection', 'task401_numeric_fused_head_reference', 'task402_grailqa_paraphrase_generation', 'task418_persent_title_generation', 'task428_senteval_inversion', 'task429_senteval_tense', 'task430_senteval_subject_count', 'task431_senteval_object_count', 'task442_com_qa_paraphrase_question_generation', 'task495_semeval_headline_classification', 'task496_semeval_answer_generation', 'task500_scruples_anecdotes_title_generation', 'task510_reddit_tifu_title_summarization', 'task515_senteval_odd_word_out', 'task516_senteval_conjoints_inversion', 'task520_aquamuse_answer_given_in_passage', 'task569_recipe_nlg_text_generation', 'task602_wikitext-103_answer_generation', 'task613_politifact_text_generation', 'task614_glucose_cause_event_detection', 'task619_ohsumed_abstract_title_generation', 'task620_ohsumed_medical_subject_headings_answer_generation', 'task623_ohsumed_yes_no_answer_generation', 'task640_esnli_classification', 'task641_esnli_classification', 'task642_esnli_classification', 'task645_summarization', 'task648_answer_generation', 'task670_ambigqa_question_generation', 'task671_ambigqa_text_generation', 'task677_ollie_sentence_answer_generation', 'task738_perspectrum_classification', 'task743_eurlex_summarization', 'task760_msr_sqa_long_text_generation', 'task769_qed_summarization', 'task827_copa_commonsense_reasoning', 'task828_copa_commonsense_cause_effect', 'task879_schema_guided_dstc8_classification', 'task880_schema_guided_dstc8_classification', 'task890_gcwd_classification', 
'task891_gap_coreference_resolution', 'task892_gap_reverse_coreference_resolution', 'task893_gap_fill_the_blank_coreference_resolution', 'task909_dialogre_prevalent_speakers', 'task935_defeasible_nli_atomic_classification', 'task936_defeasible_nli_snli_classification', 'task937_defeasible_nli_social_classification', 'task957_e2e_nlg_text_generation_generate', 'task970_sherliic_causal_relationship']
``` | 39,733 | [
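One thing worth noting in the lists above: they are not pairwise disjoint (e.g. `task909_dialogre_prevalent_speakers` appears in both the train and test lists, and `task291_semeval_2020_task4_commonsense_validation` in both the validation and test lists). A minimal sketch of such an overlap check, using small excerpts of the lists (full lists elided for brevity):

```python
# Excerpts from the split lists above (full lists elided for brevity).
train = {"task820_protoqa_answer_generation",
         "task909_dialogre_prevalent_speakers"}
validation = {"task1333_check_validity_date_ddmmyyyy",
              "task291_semeval_2020_task4_commonsense_validation"}
test = {"task291_semeval_2020_task4_commonsense_validation",
        "task909_dialogre_prevalent_speakers"}

# Held-out evaluation splits should be pairwise disjoint with training;
# here two tasks leak across splits.
print(train & test)       # {'task909_dialogre_prevalent_speakers'}
print(validation & test)  # {'task291_semeval_2020_task4_commonsense_validation'}
```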
[ … embedding values truncated … ] |
jxu124/refcocog | 2023-05-20T19:00:12.000Z | [
"region:us"
] | jxu124 | null | null | 0 | 121 | 2023-04-26T12:00:59 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: split
dtype: string
- name: sentences
list:
- name: raw
dtype: string
- name: sent
dtype: string
- name: sent_id
dtype: int64
- name: tokens
sequence: string
- name: file_name
dtype: string
- name: category_id
dtype: int64
- name: ann_id
dtype: int64
- name: sent_ids
sequence: int64
- name: ref_id
dtype: int64
- name: raw_anns
dtype: string
- name: raw_image_info
dtype: string
- name: raw_sentences
dtype: string
- name: image_path
dtype: string
- name: bbox
sequence: float64
- name: captions
sequence: string
- name: global_image_id
dtype: string
- name: anns_id
dtype: string
splits:
- name: test
num_bytes: 10341980
num_examples: 5023
- name: train
num_bytes: 87352599
num_examples: 42226
- name: validation
num_bytes: 5236723
num_examples: 2573
download_size: 45968855
dataset_size: 102931302
---
# Dataset Card for "refcocog"
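A minimal sketch of reading one record of this shape (field names taken from the YAML above; the record itself is invented, and the bbox is assumed to be COCO-style `[x, y, width, height]`):

```python
# Invented record following the schema above; the bbox layout
# ([x, y, width, height]) is an assumption, not confirmed by the card.
example = {
    "image_id": 12345,
    "bbox": [120.0, 80.0, 64.0, 48.0],
    "captions": ["the man in the red jacket"],
    "sentences": [{"raw": "The man in the red jacket.",
                   "sent": "the man in the red jacket",
                   "sent_id": 0,
                   "tokens": ["the", "man", "in", "the", "red", "jacket"]}],
}

# Convert the box to corner form for cropping or drawing.
x, y, w, h = example["bbox"]
corners = (x, y, x + w, y + h)  # (x1, y1, x2, y2)
print(corners)  # (120.0, 80.0, 184.0, 128.0)
```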
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,207 | [
[ … embedding values truncated … ] |
JasiekKaczmarczyk/pianofor-ai-sustain-masked | 2023-10-02T11:08:48.000Z | [
"region:us"
] | JasiekKaczmarczyk | null | null | 0 | 121 | 2023-10-02T11:07:50 | ---
dataset_info:
features:
- name: midi_filename
dtype: string
- name: source
dtype: string
- name: pitch
sequence: int16
length: 128
- name: dstart
sequence: float32
length: 128
- name: duration
sequence: float32
length: 128
- name: velocity
sequence: int16
length: 128
- name: masking_spaces
struct:
- name: <Random Mask>
sequence: bool
length: 128
- name: <LH Mask>
sequence: bool
length: 128
- name: <RH Mask>
sequence: bool
length: 128
- name: <Harmonic Root Mask>
sequence: bool
length: 128
- name: <Harmonic Outliers Mask>
sequence: bool
length: 128
splits:
- name: train
num_bytes: 348650644
num_examples: 184108
- name: validation
num_bytes: 45493168
num_examples: 24183
- name: test
num_bytes: 38444406
num_examples: 20548
download_size: 198351270
dataset_size: 432588218
---
# Dataset Card for "pianofor-ai-sustain-masked"
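The `masking_spaces` struct above stores five boolean masks, each aligned with the 128-long note sequences. A minimal sketch of applying one such mask (all values invented for illustration):

```python
# Invented 128-length example mirroring the schema above.
pitch = list(range(60, 60 + 128))               # stand-in MIDI pitches
random_mask = [i % 4 == 0 for i in range(128)]  # stand-in "<Random Mask>"

# Masked positions are the ones a model would be asked to reconstruct;
# the remaining notes stay visible as context.
masked_positions = [i for i, m in enumerate(random_mask) if m]
visible_pitch = [p for p, m in zip(pitch, random_mask) if not m]
print(len(masked_positions), len(visible_pitch))  # 32 96
```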
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,143 | [
[ … embedding values truncated … ] |
yimingzhang/lichess-2022 | 2023-10-21T22:17:53.000Z | [
"region:us"
] | yimingzhang | null | null | 0 | 121 | 2023-10-04T05:08:38 | Entry not found | 15 | [
[ … embedding values truncated … ] |
stepkurniawan/qa_sustainability_wiki | 2023-10-05T20:51:30.000Z | [
"license:mit",
"region:us"
] | stepkurniawan | null | null | 0 | 121 | 2023-10-05T19:45:20 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: ground_truths
dtype: string
splits:
- name: train
num_bytes: 195625.12855377008
num_examples: 647
- name: test
num_bytes: 48981.87144622991
num_examples: 162
download_size: 149066
dataset_size: 244607.0
---
The purpose of this dataset is to provide question-answer (ground truth) pairs in a table format. The questions and answers were all generated using LangChain and GPT-4,
since creating them manually would take a long time. However, as due diligence, I have randomly checked more than 50% of the questions and answers
and judged the dataset safe to use.
The questions and answers are sourced from a private wiki page called the Sustainable Methods Wiki, created by Prof. Henrik .v. Wahrden.
Link: https://sustainabilitymethods.org/index.php/Main_Page
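The split sizes in the YAML above correspond to roughly an 80/20 train/test partition; a quick check (numbers taken from the YAML):

```python
# num_examples values from the YAML above.
train_n, test_n = 647, 162
total = train_n + test_n
print(total)                       # 809
print(round(train_n / total, 2))  # 0.8
```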
| 1,005 | [
[ … embedding values truncated … ] |
com_qa | 2023-06-27T07:38:08.000Z | [
"task_categories:question-answering",
"language:en",
"region:us"
] | null | ComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website.
By collecting questions from such a site we ensure that the information needs are ones of interest to actual users.
Moreover, questions posed there often cannot be answered by commercial search engines or QA technology, making them
more interesting for driving future research compared to those collected from an engine's query log. The dataset contains
questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives,
superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and
unanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions
in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated
with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are
temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization. | @inproceedings{abujabal-etal-2019-comqa,
    title = "{ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters}",
author = {Abujabal, Abdalghani and
Saha Roy, Rishiraj and
Yahya, Mohamed and
Weikum, Gerhard},
booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
month = {jun},
year = {2019},
address = {Minneapolis, Minnesota},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/N19-1027},
    doi = {10.18653/v1/N19-1027},
pages = {307--317},
} | 2 | 120 | 2022-03-02T23:29:22 | ---
language:
- en
paperswithcode_id: comqa
pretty_name: ComQA
dataset_info:
features:
- name: cluster_id
dtype: string
- name: questions
sequence: string
- name: answers
sequence: string
splits:
- name: train
num_bytes: 696645
num_examples: 3966
- name: test
num_bytes: 273384
num_examples: 2243
- name: validation
num_bytes: 131945
num_examples: 966
download_size: 1671684
dataset_size: 1101974
task_categories:
- question-answering
---
# Dataset Card for "com_qa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://qa.mpi-inf.mpg.de/comqa/](http://qa.mpi-inf.mpg.de/comqa/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.67 MB
- **Size of the generated dataset:** 1.10 MB
- **Total amount of disk used:** 2.78 MB
### Dataset Summary
ComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website.
By collecting questions from such a site we ensure that the information needs are ones of interest to actual users.
Moreover, questions posed there often cannot be answered by commercial search engines or QA technology, making them
more interesting for driving future research compared to those collected from an engine's query log. The dataset contains
questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives,
superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and
unanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions
in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated
with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are
temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.67 MB
- **Size of the generated dataset:** 1.10 MB
- **Total amount of disk used:** 2.78 MB
An example of 'validation' looks as follows.
```
{
"answers": ["https://en.wikipedia.org/wiki/north_sea"],
"cluster_id": "cluster-922",
"questions": ["what sea separates the scandinavia peninsula from britain?", "which sea separates britain from scandinavia?"]
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `cluster_id`: a `string` feature.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
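A minimal sketch of working with a record of this shape (pure Python; the record mirrors the sample instance shown above):

```python
# A record mirroring the sample instance from the card.
record = {
    "cluster_id": "cluster-922",
    "questions": [
        "what sea separates the scandinavia peninsula from britain?",
        "which sea separates britain from scandinavia?",
    ],
    "answers": ["https://en.wikipedia.org/wiki/north_sea"],
}

# All questions in a cluster share the same answer set, so evaluation
# code typically maps every paraphrase to the cluster's answers.
question_to_answers = {q: record["answers"] for q in record["questions"]}

for q, a in question_to_answers.items():
    print(q, "->", a)
```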
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 3966| 966|2243|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{abujabal-etal-2019-comqa,
    title = "{ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters}",
author = {Abujabal, Abdalghani and
Saha Roy, Rishiraj and
Yahya, Mohamed and
Weikum, Gerhard},
booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
month = {jun},
year = {2019},
address = {Minneapolis, Minnesota},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/N19-1027},
    doi = {10.18653/v1/N19-1027},
pages = {307--317},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. | 7,603 | [
[ … embedding values truncated … ] |
hind_encorp | 2022-11-03T16:15:40.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:hi",
"license:cc-by-nc-sa-3.0",
"regio... | null | HindEnCorp parallel texts (sentence-aligned) come from the following sources:
Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
Commentaries by Daniel Pipes contain 322 articles in English written by the journalist Daniel Pipes and translated into Hindi.
EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual subcorpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.
Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and an agriculture-domain parallel corpus.
For the current release, we are extending the parallel corpus using these sources:
Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominantly short stories and novels. There are seven Hindi texts in Intercorp. Unfortunately, the English translation is available for only three of them; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.
TED talks, held in various languages but primarily English, are equipped with transcripts, and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.
The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects, starting from typesetting and punctuation over capitalization, spelling and word choice to sentence structure. A little bit of control could in principle be obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus.
Launchpad.net is a software collaboration platform that hosts many open-source projects and also facilitates collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.
Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of the named entity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.
author = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka
and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k
                and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman},
title = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine
Translation}",
booktitle = {Proceedings of the Ninth International Conference on Language
Resources and Evaluation (LREC'14)},
year = {2014},
month = {may},
date = {26-31},
address = {Reykjavik, Iceland},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and
Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani
and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-8-4},
language = {english}
} | 1 | 120 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
- hi
license:
- cc-by-nc-sa-3.0
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: hindencorp
pretty_name: HindEnCorp
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: string
- name: alignment_type
dtype: string
- name: alignment_quality
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: train
num_bytes: 78945714
num_examples: 273885
download_size: 23899723
dataset_size: 78945714
---
# Dataset Card for HindEnCorp
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-625F-0
- **Repository:** https://lindat.mff.cuni.cz/repository/xmlui/
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2014/pdf/835_Paper.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
HindEnCorp parallel texts (sentence-aligned) come from the following sources:
Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
Commentaries by Daniel Pipes contain 322 articles in English written by the journalist Daniel Pipes and translated into Hindi.
EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual subcorpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.
Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and an agriculture-domain parallel corpus.
For the current release, we are extending the parallel corpus using these sources:
Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominantly short stories and novels. There are seven Hindi texts in Intercorp. Unfortunately, the English translation is available for only three of them; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.
TED talks, held in various languages but primarily English, are equipped with transcripts, and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.
The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects, starting from typesetting and punctuation over capitalization, spelling and word choice to sentence structure. A little bit of control could in principle be obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus.
Launchpad.net is a software collaboration platform that hosts many open-source projects and also facilitates collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.
Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of the named entity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Hindi, English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
HindEncorp Columns:
- source identifier (where do the segments come from)
- alignment type (number of English segments - number of Hindi segments)
- alignment quality, which is one of the following:
"manual" ... for sources that were sentence-aligned manually
"implied" ... for sources where one side was constructed by translating
segment by segment
float ... a value somehow reflecting the goodness of the automatic
alignment; not really reliable
- English segment or segments
- Hindi segment or segments
Each of the segments field is in the plaintext or export format as described
above.
If there are more than one segments on a line (e.g. for lines with alignment
type 2-1 where there are two English segments), then the segments are delimited
with `<s>` in the text field.
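A minimal sketch of parsing one record as described above, assuming a tab-separated layout (the delimiter is an assumption, and the sample line is invented for illustration):

```python
def parse_hindencorp_line(line):
    """Split a HindEnCorp record into its five columns and break
    multi-segment sides on the '<s>' delimiter described above."""
    source, align_type, align_quality, english, hindi = line.rstrip("\n").split("\t")
    return {
        "source": source,
        "alignment_type": align_type,
        "alignment_quality": align_quality,  # "manual", "implied", or a float
        "en_segments": [s.strip() for s in english.split("<s>")],
        "hi_segments": [s.strip() for s in hindi.split("<s>")],
    }

# Invented 2-1 alignment example: two English segments, one Hindi segment.
line = "tides\t2-1\tmanual\tHello. <s> How are you?\tनमस्ते। आप कैसे हैं?"
record = parse_hindencorp_line(line)
print(record["en_segments"])  # ['Hello.', 'How are you?']
```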
### Data Splits
[More Information Needed]
## Dataset Creation
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Daniel Pipes; Baker et al. (2002); Bojar et al. (2010); Čermák and Rosen (2012); Birch et al. (2011); Post et al. (2012)
### Annotations
#### Annotation process
The first part of the data, TIDES, was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
Bojar, Ondřej ; Diatka, Vojtěch ; Straňák, Pavel ; Tamchyna, Aleš ; Zeman, Daniel
### Licensing Information
CC BY-NC-SA 3.0
### Citation Information
@InProceedings{hindencorp05:lrec:2014,
author = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka
and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k
and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman},
title = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine
Translation}",
booktitle = {Proceedings of the Ninth International Conference on Language
Resources and Evaluation (LREC'14)},
year = {2014},
month = {may},
date = {26-31},
address = {Reykjavik, Iceland},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and
Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani
and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-8-4},
language = {english}
}
### Contributions
Thanks to [@rahul-art](https://github.com/rahul-art) for adding this dataset. | 8,570 | [
[
-0.0277252197265625,
-0.03466796875,
-0.0019817352294921875,
0.03631591796875,
-0.0243988037109375,
0.01432037353515625,
-0.032745361328125,
-0.043853759765625,
0.035675048828125,
0.02032470703125,
-0.04205322265625,
-0.0455322265625,
-0.049102783203125,
0.0... |
ohsumed | 2022-11-18T21:34:41.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | null | The OHSUMED test collection is a set of 348,566 references from
MEDLINE, the on-line medical information database, consisting of
titles and/or abstracts from 270 medical journals over a five-year
period (1987-1991). The available fields are title, abstract, MeSH
indexing terms, author, source, and publication type. | @InProceedings{10.1007/978-1-4471-2099-5_20,
author="Hersh, William
and Buckley, Chris
and Leone, T. J.
and Hickam, David",
editor="Croft, Bruce W.
and van Rijsbergen, C. J.",
title="OHSUMED: An Interactive Retrieval Evaluation and New Large Test Collection for Research",
booktitle="SIGIR '94",
year="1994",
publisher="Springer London",
address="London",
pages="192--201",
abstract="A series of information retrieval experiments was carried out with a computer installed in a medical practice setting for relatively inexperienced physician end-users. Using a commercial MEDLINE product based on the vector space model, these physicians searched just as effectively as more experienced searchers using Boolean searching. The results of this experiment were subsequently used to create a new large medical test collection, which was used in experiments with the SMART retrieval system to obtain baseline performance data as well as compare SMART with the other searchers.",
isbn="978-1-4471-2099-5"
} | 1 | 120 | 2022-03-02T23:29:22 | ---
pretty_name: Ohsumed
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: null
dataset_info:
features:
- name: seq_id
dtype: int64
- name: medline_ui
dtype: int64
- name: mesh_terms
dtype: string
- name: title
dtype: string
- name: publication_type
dtype: string
- name: abstract
dtype: string
- name: author
dtype: string
- name: source
dtype: string
config_name: ohsumed
splits:
- name: train
num_bytes: 60117860
num_examples: 54709
- name: test
num_bytes: 338533901
num_examples: 293855
download_size: 139454017
dataset_size: 398651761
---
# Dataset Card for ohsumed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://davis.wpi.edu/xmdv/datasets/ohsumed.html
- **Repository:** https://trec.nist.gov/data/filtering/t9.filtering.tar.gz
- **Paper:** https://link.springer.com/chapter/10.1007/978-1-4471-2099-5_20
- **Leaderboard:**
- **Point of Contact:** [William Hersh](mailto:hersh@OHSU.EDU) [Aakash Gupta](mailto:aakashg80@gmail.com)
### Dataset Summary
The OHSUMED test collection is a set of 348,566 references from
MEDLINE, the on-line medical information database, consisting of
titles and/or abstracts from 270 medical journals over a five-year
period (1987-1991). The available fields are title, abstract, MeSH
indexing terms, author, source, and publication type. The National
Library of Medicine has agreed to make the MEDLINE references in the
test database available for experimentation, restricted to the
following conditions:
1. The data will not be used in any non-experimental clinical,
library, or other setting.
2. Any human users of the data will explicitly be told that the data
is incomplete and out-of-date.
Please check this [readme](https://trec.nist.gov/data/filtering/README.t9.filtering) for more details
### Supported Tasks and Leaderboards
[Text Classification](https://paperswithcode.com/sota/text-classification-on-ohsumed)
### Languages
The text is primarily in English. The BCP 47 code is `en`
## Dataset Structure
### Data Instances
```
{'seq_id': 7770,
'medline_ui': 87120420,
'mesh_terms': 'Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH; Carotid Arteries; Case Report; Female; Human; Jugular Veins; Male; Methods; Middle Age; Neck/*BS; Vertebral Artery.',
'title': 'Arteriovenous fistulas of the large vessels of the neck: nonsurgical percutaneous occlusion.',
'publication_type': 'JOURNAL ARTICLE.',
'abstract': 'We describe the nonsurgical treatment of arteriovenous fistulas of the large vessels in the neck using three different means of endovascular occlusion of these large lesions, which are surgically difficult to approach and treat.',
'author': 'Vitek JJ; Keller FS.',
'source': 'South Med J 8705; 80(2):196-200'}
```
### Data Fields
Here are the field definitions:
- seq_id: sequential identifier
(important note: documents should be processed in this order)
- medline_ui: MEDLINE identifier (UI)
(<DOCNO> used for relevance judgements)
- mesh_terms: Human-assigned MeSH terms (MH)
- title: Title (TI)
- publication_type : Publication type (PT)
- abstract: Abstract (AB)
- author: Author (AU)
- source: Source (SO)
Note: some abstracts are truncated at 250 words and some references
have no abstracts at all (titles only). We do not have access to the
full text of the documents.
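Since `mesh_terms` is a flat semicolon-delimited string (see the instance above), downstream use typically needs to split it into individual descriptor/subheading pairs. The helper below is an illustrative sketch based on the sample record, not an official MeSH parser; the handling of the trailing period and of the '/'-separated subheadings is an assumption from the example.

```python
# Illustrative sketch: split an OHSUMED mesh_terms string into
# (descriptor, subheading) pairs. Trailing-period and '/' handling
# are assumptions based on the sample record above.
def parse_mesh(mesh_terms):
    pairs = []
    for raw in mesh_terms.rstrip(".").split(";"):
        term = raw.strip()
        if "/" in term:
            descriptor, subheading = term.split("/", 1)
        else:
            descriptor, subheading = term, None
        pairs.append((descriptor, subheading))
    return pairs

pairs = parse_mesh("Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH.")
print(pairs[2])  # ('Aneurysm', 'CO')
```

A leading `*` on a subheading (as in `*TH`) marks a major topic in MEDLINE records and is kept verbatim here.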
### Data Splits
The data is split into train and test sets: the training set contains abstracts from 1987, while the test set contains abstracts from 1988-91.
Total number of files:
Train: 54710
Test: 348567
## Dataset Creation
### Curation Rationale
The OHSUMED document collection was obtained by William Hersh
(hersh@OHSU.EDU) and colleagues for the experiments described in the
papers below. [Check citation](#citation-information)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The test collection was built as part of a study assessing the use of
MEDLINE by physicians in a clinical setting (Hersh and Hickam, above).
Novice physicians using MEDLINE generated 106 queries. Only a subset
of these queries were used in the TREC-9 Filtering Track. Before
they searched, they were asked to provide a statement of information
about their patient as well as their information need.
The data was collected by William Hersh & colleagues
### Annotations
#### Annotation process
The existing OHSUMED topics describe actual information needs, but the
relevance judgements probably do not have the same coverage provided
by the TREC pooling process. The MeSH terms do not directly represent
information needs, rather they are controlled indexing terms. However,
the assessment should be more or less complete and there are a lot of
them, so this provides an unusual opportunity to work with a very
large topic sample.
The topic statements are provided in the standard TREC format
#### Who are the annotators?
Each query was replicated by four searchers, two physicians
experienced in searching and two medical librarians. The results were
assessed for relevance by a different group of physicians, using a
three point scale: definitely, possibly, or not relevant. The list of
documents explicitly judged to be not relevant is not provided here.
Over 10% of the query-document pairs were judged in duplicate to
assess inter-observer reliability. For evaluation, all documents
judged here as either possibly or definitely relevant were
considered relevant. TREC-9 systems were allowed to distinguish
between these two categories during the learning process if desired.
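The collapse from the three-point judgment scale to the binary relevance used for evaluation, as described above, can be sketched as follows (the function name is illustrative):

```python
# Collapse the three-point judgments to the binary relevance used for
# evaluation: 'definitely' and 'possibly' both count as relevant.
def is_relevant(judgment):
    return judgment in {"definitely", "possibly"}

print([is_relevant(j) for j in ("definitely", "possibly", "not relevant")])
# [True, True, False]
```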
### Personal and Sensitive Information
No PII data is present in the train, test or query files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[Aakash Gupta](mailto:aakashg80@gmail.com)
*Th!nkEvolve Consulting* and Researcher at CoronaWhy
### Licensing Information
CC BY-NC 4.0
### Citation Information
Hersh WR, Buckley C, Leone TJ, Hickam DH, OHSUMED: An interactive
retrieval evaluation and new large test collection for research,
Proceedings of the 17th Annual ACM SIGIR Conference, 1994, 192-201.
Hersh WR, Hickam DH, Use of a multi-application computer workstation
in a clinical setting, Bulletin of the Medical Library Association,
1994, 82: 382-389.
### Contributions
Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset. | 7,951 | [
[
-0.04107666015625,
-0.0616455078125,
0.0296173095703125,
-0.01532745361328125,
-0.0248260498046875,
-0.01094818115234375,
0.002681732177734375,
-0.026580810546875,
0.041168212890625,
0.035186767578125,
-0.023773193359375,
-0.04986572265625,
-0.036712646484375,
... |
per_sent | 2023-01-25T14:42:26.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-MPQA-KBP Challenge-MediaRank",
"language:en",
"license:unknown",
"a... | null | Person SenTiment (PerSenT) is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotations for 5.3k documents and 38k paragraphs covering 3.2k unique entities.
The dataset consists of sentiment annotations on news articles about people. For each article, annotators judge what the author’s sentiment is towards the main (target) entity of the article. The annotations also include similar judgments on paragraphs within the article.
To split the dataset, entities were divided into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection. In the collection, there were four entities which were the main entity in nearly 800 articles. To avoid these entities dominating the train or test splits, we moved them to a separate test collection. We split the remaining entities into training, dev, and test sets at random. Thus our collection includes one standard test set consisting of articles drawn at random (Test Standard -- `test_random`), while the other is a test set which contains multiple articles about a small number of popular entities (Test Frequent -- `test_fixed`).
title={Author's Sentiment Prediction},
author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},
year={2020},
eprint={2011.06128},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 120 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-MPQA-KBP Challenge-MediaRank
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: persent
pretty_name: PerSenT
dataset_info:
features:
- name: DOCUMENT_INDEX
dtype: int64
- name: TITLE
dtype: string
- name: TARGET_ENTITY
dtype: string
- name: DOCUMENT
dtype: string
- name: MASKED_DOCUMENT
dtype: string
- name: TRUE_SENTIMENT
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph0
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph1
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph2
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph3
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph4
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph5
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph6
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph7
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph8
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph9
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph10
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph11
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph12
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph13
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph14
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph15
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
splits:
- name: train
num_bytes: 14595163
num_examples: 3355
- name: test_random
num_bytes: 2629500
num_examples: 579
- name: test_fixed
num_bytes: 3881800
num_examples: 827
- name: validation
num_bytes: 2322922
num_examples: 578
download_size: 23117196
dataset_size: 23429385
---
# Dataset Card for PerSenT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PerSenT](https://stonybrooknlp.github.io/PerSenT/)
- **Repository:** [https://github.com/MHDBST/PerSenT](https://github.com/MHDBST/PerSenT)
- **Paper:** [arXiv](https://arxiv.org/abs/2011.06128)
- **Leaderboard:** NA
- **Point of Contact:** [Mohaddeseh Bastan](mbastan@cs.stonybrook.edu)
### Dataset Summary
PerSenT is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotations for 5.3k documents and 38k paragraphs covering 3.2k unique entities. For each article, annotators judge what the author’s sentiment is towards the main
(target) entity of the article. The annotations also include similar judgments on paragraphs within the article.
### Supported Tasks and Leaderboards
Sentiment Classification: Each document consists of multiple paragraphs. Each paragraph is labeled separately (Positive, Neutral, Negative) and the author’s sentiment towards the whole document is included as a document-level label.
### Languages
English
## Dataset Structure
### Data Instances
```json
{'DOCUMENT': "Germany's Landesbank Baden Wuertemberg won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n The bank was several state-owned German institutions to run into trouble last year after it ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of the bank are also being investigated by German authorities for risking or damaging the bank's capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of the bank and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that the bank would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from the bank's shareholders all of them public authorities or state-owned including the state of Baden-Wuerttemberg the region's savings bank association and the city of Stuttgart.",
'DOCUMENT_INDEX': 1,
'MASKED_DOCUMENT': "[TGT] won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n [TGT] was several state-owned German institutions to run into trouble last year after [TGT] ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of [TGT] are also being investigated by German authorities for risking or damaging [TGT]'s capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of [TGT] and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that [TGT] would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from [TGT]'s shareholders all of them public authorities or state-owned including the state of Baden-Wuerttemberg the region's savings bank association and the city of Stuttgart.",
'Paragraph0': 2,
'Paragraph1': 0,
'Paragraph10': -1,
'Paragraph11': -1,
'Paragraph12': -1,
'Paragraph13': -1,
'Paragraph14': -1,
'Paragraph15': -1,
'Paragraph2': 0,
'Paragraph3': 1,
'Paragraph4': 1,
'Paragraph5': -1,
'Paragraph6': -1,
'Paragraph7': -1,
'Paragraph8': -1,
'Paragraph9': -1,
'TARGET_ENTITY': 'Landesbank Baden Wuertemberg',
'TITLE': 'German bank LBBW wins EU bailout approval',
'TRUE_SENTIMENT': 0}
```
### Data Fields
- DOCUMENT_INDEX: ID of the document per original dataset
- TITLE: Title of the article
- DOCUMENT: Text of the article
- MASKED_DOCUMENT: Text of the article with the target entity masked with `[TGT]` token
- TARGET_ENTITY: The entity that the author is expressing opinion about
- TRUE_SENTIMENT: Label for entire article
- Paragraph{0..15}: Label for each paragraph in the article
**Note**: Labels are one of `[Negative, Neutral, Positive]`. Missing labels were replaced with `-1`.
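For example, the per-paragraph labels of a record can be collected while skipping the `-1` fillers. The sketch below uses the 0/1/2 label ids from the `dataset_info` above; the helper itself is illustrative.

```python
# Map PerSenT class ids to names (per dataset_info above) and collect the
# labels of the paragraphs that are actually present (value != -1).
LABELS = {0: "Negative", 1: "Neutral", 2: "Positive"}

def paragraph_labels(example, max_paragraphs=16):
    present = []
    for i in range(max_paragraphs):
        value = example.get(f"Paragraph{i}", -1)
        if value != -1:
            present.append(LABELS[value])
    return present

example = {"Paragraph0": 2, "Paragraph1": 0, "Paragraph2": 0,
           "Paragraph3": 1, "Paragraph4": 1}  # paragraphs 5-15 absent
print(paragraph_labels(example))  # ['Positive', 'Negative', 'Negative', 'Neutral', 'Neutral']
```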
### Data Splits
To split the dataset, entities were divided into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection. In the collection, there were four entities which were the main entity in nearly 800 articles. To keep these entities from dominating the train or test splits, they were moved to a separate test collection. The remaining entities were split into training, dev, and test sets at random. Thus the collection includes one standard test set consisting of articles drawn at random (Test Standard), while the other is a test set which contains multiple articles about a small number of popular entities (Test Frequent).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Articles were selected from 3 sources:
1. MPQA (Deng and Wiebe, 2015; Wiebe et al., 2005): This dataset contains news articles manually annotated for opinions, beliefs, emotions, sentiments, speculations, etc. It also has target annotations, which are entities and events anchored to the heads of noun or verb phrases. All decisions on this dataset are made at sentence level and over short spans.
2. KBP Challenge (Ellis et al., 2014): This resource contains the TAC 2014 KBP English sentiment slot filling challenge dataset. This is a document-level sentiment slot filling dataset. In this task, given an entity and a sentiment (positive/negative) from the document, the goal is to find entities toward which the original entity holds the given sentimental view. We selected documents from this resource that had been used in similar sentiment analysis work (Choi et al., 2016).
3. Media Rank (Ye and Skiena, 2019): This dataset ranks about 50k news sources along different aspects. It is also used for classifying political ideology of news articles (Kulkarni et al., 2018).
Pre-processing steps:
- First, we find all the person entities in each article using the Stanford NER (Named Entity Recognition) tagger (Finkel et al., 2005), and all mentions of them using co-reference resolution (Clark and Manning, 2016; Co, 2017).
- We removed articles which are not likely to have a main entity of focus. We used a simple heuristic of removing articles in which the most frequent person entity is mentioned only three times or less (even when counting co-referent mentions).
- For the articles that remained, we deemed the most frequent entity to be the main entity of the article. We also filtered out extremely long and extremely short articles, keeping those with at least 3 paragraphs and at most 16 paragraphs.
Documents are randomly separated into train, dev, and two test sets. We ensure that each entity appears in only one of the sets. Our goal here is to avoid easy to learn biases over entities. To avoid the most frequent entities from dominating the training or the test sets, we remove articles that covered the most frequent entities and use them as a separate test set (referred to as frequent test set) in addition to the randomly drawn standard test set.
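The entity-disjoint split described above can be sketched as follows. The threshold, split ratios, and function names here are illustrative, not the authors' exact procedure.

```python
# Hedged sketch of an entity-disjoint split: articles about very frequent
# entities form a separate "frequent" test set; the remaining entities are
# assigned at random to train/dev/test. Threshold and ratios are made up.
import random
from collections import Counter

def split_by_entity(articles, freq_threshold=100, seed=0):
    counts = Counter(a["TARGET_ENTITY"] for a in articles)
    frequent = {e for e, c in counts.items() if c >= freq_threshold}
    rest = sorted(e for e in counts if e not in frequent)
    random.Random(seed).shuffle(rest)
    n = len(rest)
    split_of = {e: "train" for e in rest[: int(0.8 * n)]}
    split_of.update({e: "dev" for e in rest[int(0.8 * n): int(0.9 * n)]})
    split_of.update({e: "test_random" for e in rest[int(0.9 * n):]})
    split_of.update({e: "test_fixed" for e in frequent})
    splits = {"train": [], "dev": [], "test_random": [], "test_fixed": []}
    for article in articles:
        splits[split_of[article["TARGET_ENTITY"]]].append(article)
    return splits
```

Because the assignment is keyed by entity, no entity can appear in more than one split, which is the property relied on to avoid entity-specific shortcuts.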
### Annotations
#### Annotation process
We obtained document and paragraph level annotations with the help of Amazon Mechanical Turk workers. The workers first verified if the target entity we provide is indeed the main entity in the document. Then, they rated each paragraph in a document that contained a direct mention or a reference to the target
entity. Last, they rated the sentiment towards the entity based on the entire document. In both cases, the workers made assessments about the authors view based on what they said about the target entity. For both paragraph and document level sentiment, the workers chose from five rating categories: Negative,
Slightly Negative, Neutral, Slightly Positive, or Positive. We then combine the fine-grained annotations to obtain three coarse-grained classes Negative, Neutral, or Positive.
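The collapse from the five fine-grained ratings to the three coarse classes described above can be written as a simple mapping (the dict name is illustrative):

```python
# Sketch of collapsing the five fine-grained worker ratings to the three
# coarse-grained classes described above.
COARSE = {
    "Negative": "Negative",
    "Slightly Negative": "Negative",
    "Neutral": "Neutral",
    "Slightly Positive": "Positive",
    "Positive": "Positive",
}

print(COARSE["Slightly Negative"])  # Negative
```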
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{bastan2020authors,
title={Author's Sentiment Prediction},
author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},
year={2020},
eprint={2011.06128},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset. | 13,805 | [
[
-0.05078125,
-0.056549072265625,
0.02581787109375,
0.02679443359375,
-0.0286865234375,
-0.007350921630859375,
-0.0099639892578125,
-0.025238037109375,
0.0318603515625,
0.0374755859375,
-0.04656982421875,
-0.07763671875,
-0.043731689453125,
0.0038928985595703... |
NbAiLab/NPSC | 2023-04-25T09:52:08.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:2G<n<1B",
"source_datasets:original",
"language:no",
"language:nb",
"language:nn",
"license:cc0... | NbAiLab | The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models. The corpus was created by Språkbanken at the National Library of Norway.
NPSC is based on sound recordings from meetings in the Norwegian Parliament. These talks are orthographically transcribed to either Norwegian Bokmål or Norwegian Nynorsk. In addition to the data actually included in this dataset, a significant amount of metadata is included in the original corpus. Through the speaker id there is additional information about the speaker, such as gender, age, and place of birth (i.e. dialect). Through the proceedings id the corpus can be linked to the official proceedings from the meetings.
The corpus comprises sound recordings from 40 entire days of meetings. This amounts to 140 hours of speech, 65,000 sentences or 1.2 million words. | @inproceedings{johansen2019ner,
title={},
author={},
booktitle={LREC 2022},
year={2022},
url={https://arxiv.org/abs/}
} | 5 | 120 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- 'no'
- nb
- nn
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
pretty_name: NPSC
tags:
- speech-modeling
---
# Dataset Card for NbAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
- [Publish Period](#publish-periode)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)
The Norwegian Parliamentary Speech Corpus (NPSC) is a speech corpus made by the Norwegian Language Bank at the National Library of Norway in 2019-2021. The NPSC consists of recordings of speech from Stortinget, the Norwegian parliament, and corresponding orthographic transcriptions to Norwegian Bokmål and Norwegian Nynorsk. All transcriptions are done manually by trained linguists or philologists, and the manual transcriptions are subsequently proofread to ensure consistency and accuracy. Entire days of Parliamentary meetings are transcribed in the dataset.
This repository contains a version of the NPSC in the 🤗 Dataset Format. Note that the official release of the dataset, which can be found in [the repository of the Norwegian Language Bank](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/), contains more information than the version found here, including word-level metadata, metadata about the speakers, and detailed documentation.
## How to Use
```python
# Loads the 16K Bokmål corpus in streaming mode
from datasets import load_dataset
data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", streaming=True)
```
## Dataset Summary
The NPSC dataset contains JSON lines with language training data. The data loader will add audio data to this structure. Here is an example json object:
```json
{
"sentence_id": 49853,
"sentence_order": 0,
"speaker_id": 32,
"meeting_date": "20170110",
"speaker_name": "Olemic Thommessen",
"sentence_text": "Stortingets møte er lovlig satt",
"sentence_language_code": "nb-NO",
"text": "Stortingets møte er lovlig satt",
"start_time": 320246,
"end_time": 323590,
"normsentence_text": "Stortingets møte er lovlig satt",
"transsentence_text": "Stortingets møte er lovleg sett",
"translated": 1,
"audio": {"path": "audio/20170110-095504_320246_323590.wav","array": [.......]}
}
```
## Data Fields
|**Key** | **Type** | **Description** |
|:-----------|:------------|:------------|
|**sentence_id:** | Integer | Unique identifier of the sentence |
|**sentence_order** | Integer | A number indicating the order of the sentences in the meeting |
|**speaker_id** | Integer | The ID of the speaker. This can be linked to the original dataset containing thorough demographic and dialectal information about the speaker. |
|**meeting_date** | String | The date for the meeting in the format __yyyymmdd__ |
| **speaker_name** | String | Name of the speaker. All speakers were members of the Norwegian Parliament or members of the Norwegian Government at the meeting date |
| **sentence_text** | String | The sentence text. The transcribed text string of the sentence in non-normalized form. This is the text of the manual transcriptions, without any postprocessing (apart from corrections of known errors). It may contain interrupted words, non-standard words and function words with a pronunciation deviating from the written form. Detailed metadata about the words in the sentence can be found in the word-tokenized version of the corpus in the official release of the dataset. |
| **sentence_language_code** | String | The language code of the sentence. The following alternatives exist in the file: ['nb-NO', 'nn-NO', 'en-US']|
| **text** | String | The sentence text. This is a copy of "sentence_text", included to make it more convenient to interleave with other datasets.|
| **start_time** | Integer | The start time of the sentence in milliseconds. This time is relative to the start of audiofile of the entire meeting, which can be accessed in the official release |
| **end_time** | Integer | End time. See comment above. |
| **normsentence_text** | String | Normalized sentence text. In this version of the transcription, numbers and dates are written in digits on standardized formats, and common abbreviations are used. These modifications to the original transcriptions are produced automatically using normalization grammars |
| **transsentence_text** | String | Translated sentence text. Whenever the original transcription is in Bokmål (nb-NO), this field contains a machine-translated version in Nynorsk (nn-NO), and vice versa |
| **translated** | Integer | A flag indicating whether a machine-translated version has been produced or not. Sentences in en-US have not been translated |
| **audio** | Array | The dataloader will decode the associated audio files and provide them as an array containing 'path', 'array', and 'sampling_rate' |
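As the field descriptions note, 'start_time' and 'end_time' are millisecond offsets, so the duration of a clip follows directly. A minimal sketch, using the values from the example JSON object above:

```python
# Clip duration from NPSC's millisecond timing fields, using the values
# from the example JSON object above.
example = {"start_time": 320246, "end_time": 323590}
duration_sec = (example["end_time"] - example["start_time"]) / 1000
print(f"{duration_sec:.3f} s")  # 3.344 s
```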
#### Initial Data Collection
The procedure for the dataset creation is described in detail in our paper.
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140.3 hours|
| Duration, pauses not included | 125.7 hours |
| Word count | 1.2 million |
| Sentence count | 64,531 |
| Language distribution | Nynorsk: 12.8%|
| | Bokmål: 87.2%|
| Gender distribution | Female: 38.3% |
| | Male: 61.7% |
## Considerations for Using the Data
This corpus contains speech data. All recordings are of members of Parliament in a public setting, and can be distributed without any restrictions.
### Dataset Creators and Curators
The content of the dataset was created by the Norwegian Language Bank (Språkbanken) at the National Library of Norway. [Javier de la Rosa](mailto:versae@nb.no), [Freddy Wetjen](mailto:freddy.wetjen@nb.no), [Per Egil Kummervold](mailto:per.kummervold@nb.no), and [Andre Kaasen](mailto:andre.kasen@nb.no) all contributed in making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.
## License
The sound and the transcriptions are released under the [CC-ZERO-license](https://creativecommons.org/publicdomain/zero/1.0/). The curation of the HuggingFace Dataset is released under [CC-BY-SA-3-license](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
The following article gives detailed information about the corpus. Please refer to the article and this page if you are using this dataset:
```
@inproceedings{solberg2022norwegian,
title={The Norwegian Parliamentary Speech Corpus},
author={Solberg, Per Erik and Ortiz, Pablo},
booktitle={Proceedings of the 13th Language Resources and Evaluation Conference},
url={http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.106.pdf},
year={2022}
}
```
SocialGrep/one-million-reddit-jokes | 2022-07-01T18:48:46.000Z | ["annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us"] | SocialGrep | null | null | 7 | 120 | 2022-03-02T23:29:22 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for one-million-reddit-jokes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionjokes)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionjokes)
### Dataset Summary
This corpus contains a million posts from /r/jokes.
Posts are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
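Since each data point carries a 'score' field, a common use is filtering or ranking posts by it. A minimal sketch; the sample records below are hypothetical, not drawn from the dataset:

```python
# Hedged sketch: ranking records shaped like the fields above by score.
# The sample records are hypothetical; the full corpus can be loaded with
# datasets.load_dataset("SocialGrep/one-million-reddit-jokes").
records = [
    {"type": "post", "id": "abc123", "score": 2500, "title": "..."},
    {"type": "post", "id": "def456", "score": 4, "title": "..."},
]
popular = [r for r in records if r["score"] >= 100]
popular.sort(key=lambda r: r["score"], reverse=True)
print([r["id"] for r in popular])  # ['abc123']
```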
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information]
ai4bharat/IndicSentenceSummarization | 2022-10-13T06:08:31.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:5K<n<112K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
... | ai4bharat | This is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total
size of the dataset is 431K. | @inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
} | 0 | 120 | 2022-03-10T09:59:05 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicSentenceSummarization
size_categories:
- 5K<n<112K
source_datasets:
- original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-sentence-summarization
---
# Dataset Card for "IndicSentenceSummarization"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicSentenceSummarization is the sentence summarization dataset released as part of IndicNLG Suite. Each
input sentence is paired with an output summary. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 431K.
### Supported Tasks and Leaderboards
**Tasks:** Sentence Summarization
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '5',
'input': 'जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया।',
'target': 'जम्मू-कश्मीर : सुरक्षाबलों के साथ मुठभेड़ में 2 आतंकवादी ढेर',
'url': 'https://www.indiatv.in/india/national-jammu-kashmir-two-millitant-killed-in-encounter-with-security-forces-574529'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: Input sentence.
- `target (strings)`: Output summary.
- `url (string)`: Source web link of the sentence.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 10,812 | 5,232 | 5,452 |
Bengali | bn | 17,035 | 2,355 | 2,384 |
Gujarati | gu | 54,788 | 8,720 | 8,460 |
Hindi | hi | 78,876 | 16,935 | 16,835 |
Kannada | kn | 61,220 | 9,024 | 1,485 |
Malayalam | ml | 2,855 | 1,520 | 1,580 |
Marathi | mr | 27,066 | 3,249 | 3,309 |
Oriya | or | 12,065 | 1,539 | 1,440 |
Punjabi | pa | 31,630 | 4,004 | 3,967 |
Tamil | ta | 23,098 | 2,874 | 2,948 |
Telugu | te | 7,119 | 878 | 862 |
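As a sanity check, the split sizes in the table sum to the ~431K total quoted in the summary:

```python
# Train/dev/test sizes per language, copied from the table above.
train = [10812, 17035, 54788, 78876, 61220, 2855, 27066, 12065, 31630, 23098, 7119]
dev = [5232, 2355, 8720, 16935, 9024, 1520, 3249, 1539, 4004, 2874, 878]
test = [5452, 2384, 8460, 16835, 1485, 1580, 3309, 1440, 3967, 2948, 862]
total = sum(train) + sum(dev) + sum(test)
print(total)  # 431616
```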
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
It is a modified subset of [IndicHeadlineGeneration](https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration) dataset.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
pinecone/core-2020-05-10-deduplication | 2022-10-28T03:01:02.000Z | ["task_categories:other", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "annotations_creators:unknown", "language_creators:unknown", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:unknown", "language:en", "lic... | pinecone | null | null | 1 | 120 | 2022-06-18T15:43:43 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- unknown
task_categories:
- other
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
pretty_name: CORE Deduplication of Scholarly Documents
tags:
- deduplication
---
# Dataset Card for CORE Deduplication
## Dataset Description
- **Homepage:** [https://core.ac.uk/about/research-outputs](https://core.ac.uk/about/research-outputs)
- **Repository:** [https://core.ac.uk/datasets/core_2020-05-10_deduplication.zip](https://core.ac.uk/datasets/core_2020-05-10_deduplication.zip)
- **Paper:** [Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings](http://oro.open.ac.uk/id/eprint/70519)
- **Point of Contact:** [CORE Team](https://core.ac.uk/about#contact)
- **Size of downloaded dataset files:** 204 MB
### Dataset Summary
CORE 2020 Deduplication dataset (https://core.ac.uk/documentation/dataset) contains 100K scholarly documents labeled as duplicates/non-duplicates.
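The paper's pipeline combines locality-sensitive hashing with word embeddings; the underlying notion of near-duplicate similarity can be illustrated with a Jaccard-over-shingles sketch (illustrative only; this is not the paper's implementation):

```python
# Illustrative near-duplicate check: Jaccard similarity over word 3-shingles.
def shingles(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

doc1 = "deduplication of scholarly documents using locality sensitive hashing"
doc2 = "deduplication of scholarly documents via locality sensitive hashing"
sim = jaccard(shingles(doc1), shingles(doc2))
print(round(sim, 3))  # 0.333
```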
### Languages
The dataset language is English (BCP-47 `en`)
### Citation Information
```
@inproceedings{dedup2020,
title={Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings},
author={Gyawali, Bikash and Anastasiou, Lucas and Knoth, Petr},
booktitle = {Proceedings of 12th Language Resources and Evaluation Conference},
month = may,
year = 2020,
  address = {Marseille, France},
  publisher = {European Language Resources Association},
  pages = {894--903}
}
```
bigbio/blurb | 2022-12-22T15:27:48.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The BioCreative II Gene Mention task. The training corpus for the current task consists mainly of the training and testing corpora (text collections) from the BCI task, and the testing corpus for the current task consists of an additional 5,000 sentences that were held 'in reserve' from the previous task. In the current corpus, tokenization is not provided; instead participants are asked to identify a gene mention in a sentence by giving its start and end characters. As before, the training set consists of a set of sentences, and for each sentence a set of gene mentions (GENE annotations).
- Homepage: https://biocreative.bioinformatics.udel.edu/tasks/biocreative-ii/task-1a-gene-mention-tagging/
- Repository: https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/
- Paper: Overview of BioCreative II gene mention recognition
https://link.springer.com/article/10.1186/gb-2008-9-s2-s2 | @article{gu2021domain,
title = {
Domain-specific language model pretraining for biomedical natural
language processing
},
author = {
Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and
Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao,
Jianfeng and Poon, Hoifung
},
year = 2021,
journal = {ACM Transactions on Computing for Healthcare (HEALTH)},
publisher = {ACM New York, NY},
volume = 3,
number = 1,
pages = {1--23}
} | 1 | 120 | 2022-10-03T06:19:58 | ---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: MIXED
pretty_name: BLURB
homepage: https://microsoft.github.io/BLURB/tasks.html
bigbio_pubmed: true
bigbio_public: true
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for BLURB
## Dataset Description
- **Homepage:** https://microsoft.github.io/BLURB/tasks.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
BLURB is a collection of resources for biomedical natural language processing.
In general domains, such as newswire and the Web, comprehensive benchmarks and
leaderboards such as GLUE have greatly accelerated progress in open-domain NLP.
In biomedicine, however, such resources are ostensibly scarce. In the past,
there have been a plethora of shared tasks in biomedical NLP, such as
BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These
efforts have played a significant role in fueling interest and progress by the
research community, but they typically focus on individual tasks. The advent of
neural language models, such as BERT provides a unifying foundation to leverage
transfer learning from unlabeled text to support a wide range of NLP
applications. To accelerate progress in biomedical pretraining strategies and
task-specific methods, it is thus imperative to create a broad-coverage
benchmark encompassing diverse biomedical tasks.
Inspired by prior efforts toward this direction (e.g., BLUE), we have created
BLURB (short for Biomedical Language Understanding and Reasoning Benchmark).
BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP
applications, as well as a leaderboard for tracking progress by the community.
BLURB includes thirteen publicly available datasets in six diverse tasks. To
avoid placing undue emphasis on tasks with many available datasets, such as
named entity recognition (NER), BLURB reports the macro average across all tasks
as the main score. The BLURB leaderboard is model-agnostic. Any system capable
of producing the test predictions using the same training and development data
can participate. The main goal of BLURB is to lower the entry barrier in
biomedical NLP and help accelerate progress in this vitally important field for
positive societal and human impact.
This implementation contains a subset of 5 tasks as of 2022.10.06, with their original train, dev, and test splits.
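The macro-average scoring described above (mean within each task first, then mean across tasks) can be sketched as follows; the task names and scores are hypothetical:

```python
# Macro average: mean within each task, then mean across tasks, so tasks
# with many datasets (e.g. NER) do not dominate. Scores are hypothetical.
task_scores = {
    "NER": [0.85, 0.80, 0.90],
    "QA": [0.70],
    "RelationExtraction": [0.75, 0.77],
}
per_task = {t: sum(s) / len(s) for t, s in task_scores.items()}
blurb_score = sum(per_task.values()) / len(per_task)
print(round(blurb_score, 2))  # 0.77
```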
## Citation Information
```
@article{gu2021domain,
title = {
Domain-specific language model pretraining for biomedical natural
language processing
},
author = {
Gu, Yu and Tinn, Robert and Cheng, Hao and Lucas, Michael and
Usuyama, Naoto and Liu, Xiaodong and Naumann, Tristan and Gao,
Jianfeng and Poon, Hoifung
},
year = 2021,
journal = {ACM Transactions on Computing for Healthcare (HEALTH)},
publisher = {ACM New York, NY},
volume = 3,
number = 1,
pages = {1--23}
}
```
MohamedRashad/ChatGPT-prompts | 2023-01-26T22:54:31.000Z | ["region:us"] | MohamedRashad | null | null | 30 | 120 | 2023-01-26T22:32:41 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# ChatGPT-Prompts Dataset
## Description
This dataset aims to provide evaluation data for the language models to come. It has been generated using the [LearnGPT website](https://www.emergentmind.com/).