id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
seungheondoh/music-wiki | 2023-08-19T04:16:06.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"music",
"wiki",
"region:us"
] | seungheondoh | null | null | null | 2 | 25 | ---
license: mit
language:
- en
tags:
- music
- wiki
size_categories:
- 100K<n<1M
---
# Dataset Card for "music-wiki"
📚🎵 Introducing **music-wiki**
📊🎶 Our data collection process unfolds as follows:
1) Starting with a seed page from Wikipedia's music section, we navigate through a referenced page graph, employing recursive crawling up to a depth of 20 levels.
2) Simultaneously, drawing on the MusicBrainz dump, we gather 11 million unique music entities spanning 10 distinct categories. These entities serve as seeds for crawling the corresponding pages via the Wikipedia API.
These efforts yield 167k pages from the first method and a further 193k pages from the second.
Totaling 361k pages, this compilation provides substantial groundwork for establishing a Music-Text-Database. 🎵📚🔍
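The depth-limited reference crawl in step 1 can be sketched as a breadth-first traversal; the toy link graph and `get_links` callback below are stand-ins for the Wikipedia API, not the actual crawler:

```python
from collections import deque

def crawl(seed, get_links, max_depth=20):
    """Breadth-first traversal of a referenced-page graph up to max_depth levels."""
    seen = {seed}
    queue = deque([(seed, 0)])
    pages = []
    while queue:
        page, depth = queue.popleft()
        pages.append(page)
        if depth == max_depth:
            continue  # stop expanding past the depth limit
        for link in get_links(page):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return pages

# toy link graph standing in for Wikipedia's music section
graph = {"Music": ["Jazz", "Rock"], "Jazz": ["Swing"], "Rock": [], "Swing": []}
pages = crawl("Music", lambda p: graph.get(p, []), max_depth=2)
```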
- **Repository:** [music-wiki](https://github.com/seungheondoh/music-wiki)
### Splits
- wikipedia_music: 167890
- musicbrainz_genre: 1459
- musicbrainz_instrument: 872
- musicbrainz_artist: 7002
- musicbrainz_release: 163068
- musicbrainz_release_group: 15942
- musicbrainz_label: 158
- musicbrainz_work: 4282
- musicbrainz_series: 12
- musicbrainz_place: 49
- musicbrainz_event: 16
- musicbrainz_area: 360 |
neil-code/dialogsum-test | 2023-08-24T03:47:07.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"licens... | neil-code | null | null | null | 0 | 25 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids: []
pretty_name: DIALOGSum Corpus
---
# Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.
### Languages
English
## Dataset Structure
### Data Instances
DialogSum consists of 13,460 dialogues (plus 1,000 additional test examples), split into train, validation, and test.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up"}
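Each `dialogue` string encodes turns with `#PersonN#:` prefixes separated by newlines; a minimal sketch of splitting one into (speaker, utterance) pairs:

```python
import re

def parse_dialogue(dialogue: str):
    """Split a DialogSum dialogue string into (speaker, utterance) turns."""
    turns = []
    for line in dialogue.split("\n"):
        m = re.match(r"#(Person\d+)#:\s*(.*)", line)
        if m:
            turns.append((m.group(1), m.group(2)))
    return turns

sample = ("#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins.\n"
          "#Person2#: I found it would be a good idea to get a check-up.")
turns = parse_dialogue(sample)
```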
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]
## Dataset Creation
### Curation Rationale
From the paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
- Under rich real-life scenarios, including more diverse task-oriented scenarios;
- Have clear communication patterns and intents, which is valuable to serve as summarization sources;
- Have a reasonable length, which comforts the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
- Convey the most salient information;
- Be brief;
- Preserve important named entities within the conversation;
- Be written from an observer perspective;
- Be written in formal language.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
MIT License
## Citation Information
```
@inproceedings{chen-etal-2021-dialogsum,
title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
author = "Chen, Yulong and
Liu, Yang and
Chen, Liang and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.449",
doi = "10.18653/v1/2021.findings-acl.449",
    pages = "5062--5074",
}
```
## Contributions
Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset. |
mlabonne/Evol-Instruct-Python-26k | 2023-08-25T16:29:36.000Z | [
"region:us"
] | mlabonne | null | null | null | 4 | 25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 39448413.53337422
num_examples: 26588
download_size: 22381182
dataset_size: 39448413.53337422
---
# Evol-Instruct-Python-26k
Filtered version of the [`nickrosh/Evol-Instruct-Code-80k-v1`](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) dataset that only keeps Python code (26,588 samples). You can find a smaller version of it here [`mlabonne/Evol-Instruct-Python-1k`](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k).
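A hedged sketch of the kind of language filter described above; the heuristic here (Python code fences and keywords) is an assumption for illustration, not the filter actually used to build this dataset:

```python
def looks_like_python(text: str) -> bool:
    """Heuristic stand-in for the language filter: keep rows whose code
    fence or keywords suggest Python. The real filter may differ."""
    lowered = text.lower()
    return "```python" in lowered or "def " in text or "import " in text

rows = [
    {"instruction": "Reverse a list", "output": "```python\nxs[::-1]\n```"},
    {"instruction": "Reverse a list", "output": "```javascript\nxs.reverse()\n```"},
]
# keep only rows whose output looks like Python
python_rows = [r for r in rows if looks_like_python(r["output"])]
```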
Here is the distribution of the number of tokens in each row (instruction + output) using Llama's tokenizer:
 |
indiejoseph/yue-zh-translation | 2023-10-08T20:52:38.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:yue",
"language:zh",
"license:cc-by-4.0",
"region:us"
] | indiejoseph | null | null | null | 1 | 25 | ---
language:
- yue
- zh
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- translation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: translation
struct:
- name: yue
dtype: string
- name: zh
dtype: string
splits:
- name: train
num_bytes: 16446012
num_examples: 169949
- name: test
num_bytes: 4107525
num_examples: 42361
download_size: 15755469
dataset_size: 20553537
---
This dataset comprises:
1. Crawled content that is machine translated from Cantonese to Simplified Chinese.
2. Machine-translated articles from zh-yue.wikipedia.org
3. [botisan-ai/cantonese-mandarin-translations](https://huggingface.co/datasets/botisan-ai/cantonese-mandarin-translations)
4. [AlienKevin/LIHKG](https://huggingface.co/datasets/AlienKevin/LIHKG)
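Per the schema above, each row wraps the sentence pair in a `translation` struct with `yue` and `zh` keys; a small sketch of pulling out the pairs (toy rows shown in place of the real data):

```python
# toy rows mirroring the `translation` struct declared in the dataset_info
rows = [
    {"translation": {"yue": "你好嗎？", "zh": "你好吗？"}},
    {"translation": {"yue": "我哋去食飯", "zh": "我们去吃饭"}},
]

# extract (Cantonese, Mandarin) pairs, e.g. to feed a translation model
pairs = [(r["translation"]["yue"], r["translation"]["zh"]) for r in rows]
```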
|
luiseduardobrito/ptbr-quora-translated | 2023-08-28T15:56:20.000Z | [
"task_categories:text-classification",
"language:pt",
"quora",
"seamless-m4t",
"region:us"
] | luiseduardobrito | null | null | null | 0 | 25 | ---
task_categories:
- text-classification
language:
- pt
tags:
- quora
- seamless-m4t
---
### Dataset Summary
The Quora dataset is composed of question pairs, and the task is to determine if the questions are paraphrases of each
other (have the same meaning). The dataset was translated to Portuguese using the model [seamless-m4t-medium](https://huggingface.co/facebook/seamless-m4t-medium).
### Languages
Portuguese |
SinKove/synthetic_chest_xray | 2023-09-14T12:46:05.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:openrail",
"medical",
"arxiv:2306.01322",
"region:us"
] | SinKove | Chest XRay dataset with chexpert labels. | null | null | 6 | 25 | ---
task_categories:
- image-classification
tags:
- medical
pretty_name: C
size_categories:
- 10K<n<100K
license: openrail
---
# Dataset Card for Synthetic Chest Xray
## Dataset Description
This is a synthetic chest X-ray dataset created during the development of the *privacy distillation* paper. In particular, this is the $D_{filter}$ dataset described there.
- **Paper:** https://arxiv.org/abs/2306.01322
- **Point of Contact:** pedro.sanchez@ed.ac.uk
### Dataset Summary
A synthetic chest X-ray dataset with CheXpert labels, generated by a diffusion model for the privacy distillation work linked above.
### Supported Tasks
Chexpert classification.
https://stanfordmlgroup.github.io/competitions/chexpert/
## Dataset Structure
- Images
- Chexpert Labels
### Data Splits
We did not define data splits. In the paper, all the images were used as training data and real data samples were used as validation and testing data.
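Since the dataset ships without predefined splits, users who need a held-out set must carve one themselves; one possible seeded split (illustrative only, not the protocol from the paper):

```python
import random

def make_split(ids, test_frac=0.1, seed=0):
    """Carve a reproducible held-out set from an unsplit dataset."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)  # seeded shuffle for reproducibility
    cut = int(len(ids) * (1 - test_frac))
    return ids[:cut], ids[cut:]

train_ids, test_ids = make_split(range(100))
```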
## Dataset Creation
We generated the synthetic data samples using the diffusion model finetuned on the [Mimic-CXR dataset](https://physionet.org/content/mimic-cxr/2.0.0/).
### Personal and Sensitive Information
Following the GDPR, "personal data is any information that relates to an identified or identifiable living individual."
We make sure that there is no "personal data" (re-identifiable information) by filtering with a deep learning model trained to identify patients.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset can be used to enhance the training of AI models for pathology classification in chest X-rays.
### Discussion of Biases
There are biases towards specific pathologies; for example, the "No Findings" label is far more frequent than less common pathologies.
## Additional Information
### Dataset Curators
We used deep learning to filter the dataset.
We filter for re-identification, making sure that none of the images used in the training can be re-identified using samples from this synthetic dataset.
### Licensing Information
We generated the synthetic data samples based on generative model trained on the [Mimic-CXR dataset](https://physionet.org/content/mimic-cxr/2.0.0/). Mimic-CXR uses the [PhysioNet Credentialed Health](https://physionet.org/content/mimic-cxr/view-license/2.0.0/) data license.
The real data license explicitly requires that "The LICENSEE will not share access to PhysioNet restricted data with anyone else". Here, we ensure that none of the synthetic images can be used to re-identify real Mimic-CXR images. Therefore, we do not consider this synthetic dataset to be "PhysioNet restricted data".
This dataset is released under the [Open & Responsible AI license ("OpenRAIL")](https://huggingface.co/blog/open_rail)
### Citation Information
Fernandez, V., Sanchez, P., Pinaya, W. H. L., Jacenków, G., Tsaftaris, S. A., & Cardoso, J. (2023). Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models. arXiv preprint arXiv:2306.01322.
https://arxiv.org/abs/2306.01322
### Contributions
Pedro P. Sanchez and Walter Pinaya uploaded the dataset to Hugging Face. All other co-authors of the paper contributed to creating the dataset. |
sam2ai/oscar-odia-mini | 2023-09-02T17:15:47.000Z | [
"license:apache-2.0",
"region:us"
] | sam2ai | null | null | null | 0 | 25 | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 60710175
num_examples: 58826
download_size: 23304188
dataset_size: 60710175
---
|
taaredikahan23/medical-llama2-5k | 2023-09-04T12:34:50.000Z | [
"region:us"
] | taaredikahan23 | null | null | null | 2 | 25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2165103
num_examples: 5452
download_size: 869829
dataset_size: 2165103
---
# Dataset Card for "medical-llama2-5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OneFly7/llama2-politosphere-fine-tuning-supp-anti-oth | 2023-09-11T09:07:30.000Z | [
"region:us"
] | OneFly7 | null | null | null | 0 | 25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label_text
dtype: string
splits:
- name: train
num_bytes: 112065
num_examples: 113
- name: validation
num_bytes: 110701
num_examples: 113
download_size: 47433
dataset_size: 222766
---
# Dataset Card for "llama2-politosphere-fine-tuning-supp-anti-oth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
argilla/squad | 2023-09-10T20:48:49.000Z | [
"size_categories:10K<n<100K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | null | 0 | 25 | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for squad
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/squad")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/squad")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| question | Question | TextField | True | False |
| context | Context | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| answer | Answer | TextQuestion | True | N/A | N/A |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"fields": {
"context": "Architecturally, the school has a Catholic character. Atop the Main Building\u0027s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.",
"question": "To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?"
},
"metadata": {
"split": "train"
},
"responses": [
{
"status": "submitted",
"values": {
"answer": {
"value": "Saint Bernadette Soubirous"
}
}
}
],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"answer": [
{
"status": "submitted",
"user_id": null,
"value": "Saint Bernadette Soubirous"
}
],
"answer-suggestion": null,
"answer-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"context": "Architecturally, the school has a Catholic character. Atop the Main Building\u0027s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.",
"external_id": null,
"metadata": "{\"split\": \"train\"}",
"question": "To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?"
}
```
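When loaded via `datasets`, nested fields such as `metadata` arrive as JSON strings, as the record above shows; a small sketch of unpacking one record (the literal below mirrors that example):

```python
import json

# a record in HuggingFace `datasets` form, as shown above
record = {
    "metadata": "{\"split\": \"train\"}",
    "answer": [
        {"status": "submitted", "user_id": None,
         "value": "Saint Bernadette Soubirous"},
    ],
}

meta = json.loads(record["metadata"])  # nested metadata is a JSON string
submitted = [a["value"] for a in record["answer"] if a["status"] == "submitted"]
```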
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **question** is of type `TextField`.
* **context** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **answer** is of type `TextQuestion`.
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **answer-suggestion** is of type `text`.
Additionally, there is one more optional field:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
michaelnetbiz/Kendex | 2023-10-09T19:57:39.000Z | [
"task_categories:text-to-speech",
"size_categories:n<1K",
"language:en",
"license:mit",
"region:us"
] | michaelnetbiz | null | null | null | 0 | 25 | ---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- text-to-speech
pretty_name: Kendex
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: file
dtype: string
- name: text
dtype: string
- name: duration
dtype: float64
- name: normalized_text
dtype: string
splits:
- name: train
num_bytes: 1221051208.913
num_examples: 6921
- name: test
num_bytes: 138274209.0
num_examples: 783
download_size: 1327307782
dataset_size: 1359325417.913
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ShuaKang/keyframes_d_d_gripper | 2023-09-13T07:13:18.000Z | [
"region:us"
] | ShuaKang | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: keyframes_image
dtype: image
- name: text
dtype: string
- name: gripper_image
dtype: image
splits:
- name: train
num_bytes: 711583897.5
num_examples: 14638
download_size: 700376995
dataset_size: 711583897.5
---
# Dataset Card for "keyframes_d_d_gripper"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
johannes-garstenauer/l_cls_labelled_from_distilbert_seqclass_pretrain_pad_3 | 2023-09-14T09:57:36.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: last_cls
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1542000
num_examples: 500
download_size: 2136798
dataset_size: 1542000
---
# Dataset Card for "l_cls_labelled_from_distilbert_seqclass_pretrain_pad_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mindchain/demo_25 | 2023-09-24T11:59:44.000Z | [
"region:us"
] | mindchain | null | null | null | 0 | 25 | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
HydraLM/SkunkData-002-2 | 2023-09-15T02:11:13.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_conversation_id
dtype: string
- name: embedding
sequence: float64
- name: cluster
dtype: int32
splits:
- name: train
num_bytes: 14849700907
num_examples: 1472917
download_size: 11160683261
dataset_size: 14849700907
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SkunkData-002-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Arrivedercis/finreport-llama2-smallfull | 2023-09-16T02:52:29.000Z | [
"region:us"
] | Arrivedercis | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 42295794
num_examples: 184327
download_size: 21073062
dataset_size: 42295794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "finreport-llama2-smallfull"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bhavnicksm/PokemonCardsPlus | 2023-09-17T15:22:03.000Z | [
"region:us"
] | bhavnicksm | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: id
dtype: string
- name: name
dtype: string
- name: card_image
dtype: string
- name: pokemon_image
dtype: string
- name: caption
dtype: string
- name: pokemon_intro
dtype: string
- name: pokedex_text
dtype: string
- name: hp
dtype: int64
- name: set_name
dtype: string
- name: blip_caption
dtype: string
splits:
- name: train
num_bytes: 39075923
num_examples: 13139
download_size: 8210056
dataset_size: 39075923
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "PokemonCardsPlus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mychen76/ds_receipts_v2_test | 2023-09-20T21:38:24.000Z | [
"region:us"
] | mychen76 | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 51155438.0
num_examples: 472
download_size: 50770089
dataset_size: 51155438.0
---
# Dataset Card for "ds_receipts_v2_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thanhduycao/data_for_synthesis_with_entities_align_v3 | 2023-09-21T04:46:26.000Z | [
"region:us"
] | thanhduycao | null | null | null | 0 | 25 | ---
dataset_info:
config_name: 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
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: intent
dtype: string
- name: sentence_annotation
dtype: string
- name: entities
list:
- name: type
dtype: string
- name: filler
dtype: string
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: origin_transcription
dtype: string
- name: sentence_norm
dtype: string
- name: w2v2_large_transcription
dtype: string
- name: wer
dtype: int64
- name: entities_norm
list:
- name: filler
dtype: string
- name: type
dtype: string
- name: entities_align
dtype: string
splits:
- name: train
num_bytes: 2667449542.4493446
num_examples: 5029
download_size: 632908060
dataset_size: 2667449542.4493446
configs:
- config_name: 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
data_files:
- split: train
path: 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/train-*
---
# Dataset Card for "data_for_synthesis_with_entities_align_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
longface/prontoqa-train | 2023-10-01T16:01:34.000Z | [
"region:us"
] | longface | null | null | null | 0 | 25 | Entry not found |
mattlc/pxcorpus | 2023-09-21T11:04:47.000Z | [
"region:us"
] | mattlc | null | null | null | 1 | 25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 489493761.823
num_examples: 1981
download_size: 464827686
dataset_size: 489493761.823
---
# Dataset Card for "pxcorpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dongyoung4091/hh-generated_flan_t5_large_flan_t5_base_zeroshot | 2023-09-23T00:41:25.000Z | [
"region:us"
] | dongyoung4091 | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: zeroshot_helpfulness
dtype: float64
- name: zeroshot_specificity
dtype: float64
- name: zeroshot_intent
dtype: float64
- name: zeroshot_factuality
dtype: float64
- name: zeroshot_easy-to-understand
dtype: float64
- name: zeroshot_relevance
dtype: float64
- name: zeroshot_readability
dtype: float64
- name: zeroshot_enough-detail
dtype: float64
- name: 'zeroshot_biased:'
dtype: float64
- name: zeroshot_fail-to-consider-individual-preferences
dtype: float64
- name: zeroshot_repetetive
dtype: float64
- name: zeroshot_fail-to-consider-context
dtype: float64
- name: zeroshot_too-long
dtype: float64
splits:
- name: train
num_bytes: 6336357
num_examples: 25600
download_size: 0
dataset_size: 6336357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hh-generated_flan_t5_large_flan_t5_base_zeroshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thanhduycao/data_soict_train_synthesis_entity | 2023-09-22T02:39:25.000Z | [
"region:us"
] | thanhduycao | null | null | null | 0 | 25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: sentence_norm
dtype: string
splits:
- name: train
num_bytes: 6498333095
num_examples: 18312
- name: test
num_bytes: 389981876
num_examples: 748
download_size: 1639149838
dataset_size: 6888314971
---
# Dataset Card for "data_soict_train_synthesis_entity"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
minoruskore/numbers | 2023-09-22T13:30:23.000Z | [
"license:other",
"region:us"
] | minoruskore | null | null | null | 0 | 25 | ---
license: other
configs:
- config_name: default
data_files:
- split: train1kk
path: data/train1kk-*
- split: test1kk
path: data/test1kk-*
- split: train10kk
path: data/train10kk-*
- split: test10kk
path: data/test10kk-*
- split: train100k
path: data/train100k-*
- split: test100k
path: data/test100k-*
dataset_info:
features:
- name: number
dtype: int64
- name: text
dtype: string
splits:
- name: train1kk
num_bytes: 51110729
num_examples: 800000
- name: test1kk
num_bytes: 12780276
num_examples: 200000
- name: train10kk
num_bytes: 604734899
num_examples: 8000000
- name: test10kk
num_bytes: 151175106
num_examples: 2000000
- name: train100k
num_bytes: 4170428
num_examples: 80000
- name: test100k
num_bytes: 1040577
num_examples: 20000
download_size: 193519290
dataset_size: 825012015
---
|
EdBerg/baha | 2023-09-24T19:06:38.000Z | [
"license:openrail",
"region:us"
] | EdBerg | null | null | null | 0 | 25 | ---
license: openrail
---
|
tyzhu/eval_tag_nq_test_v0.5 | 2023-09-25T06:07:50.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: question
dtype: string
- name: title
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1972
num_examples: 10
- name: validation
num_bytes: 787384
num_examples: 3610
download_size: 488101
dataset_size: 789356
---
# Dataset Card for "eval_tag_nq_test_v0.5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mmnga/wikipedia-ja-20230720-2k | 2023-09-25T08:20:29.000Z | [
"region:us"
] | mmnga | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: curid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5492016.948562663
num_examples: 2048
download_size: 3161030
dataset_size: 5492016.948562663
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia-ja-20230720-2k"
This is data extracted randomly from [izumi-lab/wikipedia-ja-20230720](https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720), consisting of 2,048 records.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hyungkwonko/chart-llm | 2023-09-26T15:03:30.000Z | [
"size_categories:1K<n<10K",
"language:en",
"license:bsd-2-clause",
"Vega-Lite",
"Chart",
"Visualization",
"region:us"
] | hyungkwonko | null | null | null | 2 | 25 | ---
license: bsd-2-clause
language:
- en
tags:
- Vega-Lite
- Chart
- Visualization
size_categories:
- 1K<n<10K
--- |
cestwc/SG-subzone-poi-sentiment_ | 2023-10-06T08:25:10.000Z | [
"region:us"
] | cestwc | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: local_created_at
dtype: string
- name: id
dtype: int64
- name: text
dtype: string
- name: source
dtype: string
- name: truncated
dtype: bool
- name: in_reply_to_status_id
dtype: float64
- name: in_reply_to_user_id
dtype: float64
- name: user_id
dtype: int64
- name: user_name
dtype: string
- name: user_screen_name
dtype: string
- name: user_location
dtype: string
- name: user_url
dtype: string
- name: user_verified
dtype: bool
- name: user_default_profile
dtype: bool
- name: user_description
dtype: string
- name: user_followers_count
dtype: int64
- name: user_friends_count
dtype: int64
- name: user_listed_count
dtype: int64
- name: user_favourites_count
dtype: int64
- name: user_statuses_count
dtype: int64
- name: local_user_created_at
dtype: string
- name: place_id
dtype: string
- name: place_url
dtype: string
- name: place_place_type
dtype: string
- name: place_name
dtype: string
- name: place_country_code
dtype: string
- name: place_bounding_box_type
dtype: string
- name: place_bounding_box_coordinates
dtype: string
- name: is_quote_status
dtype: bool
- name: retweet_count
dtype: int64
- name: favorite_count
dtype: int64
- name: entities_hashtags
dtype: string
- name: entities_urls
dtype: string
- name: entities_symbols
dtype: string
- name: entities_user_mentions
dtype: string
- name: favorited
dtype: bool
- name: retweeted
dtype: bool
- name: possibly_sensitive
dtype: bool
- name: lang
dtype: string
- name: latitude
dtype: float64
- name: longitude
dtype: float64
- name: year_created_at
dtype: int64
- name: month_created_at
dtype: int64
- name: day_created_at
dtype: int64
- name: weekday_created_at
dtype: int64
- name: hour_created_at
dtype: int64
- name: minute_created_at
dtype: int64
- name: year_user_created_at
dtype: int64
- name: month_user_created_at
dtype: int64
- name: day_user_created_at
dtype: int64
- name: weekday_user_created_at
dtype: int64
- name: hour_user_created_at
dtype: int64
- name: minute_user_created_at
dtype: int64
- name: subzone
dtype: string
- name: planning_area
dtype: string
- name: poi_flag
dtype: float64
- name: poi_id
dtype: string
- name: poi_dist
dtype: float64
- name: poi_latitude
dtype: float64
- name: poi_longitude
dtype: float64
- name: poi_name
dtype: string
- name: poi_type
dtype: string
- name: poi_cate2
dtype: string
- name: poi_cate3
dtype: string
- name: clean_text
dtype: string
- name: joy_score
dtype: float64
- name: trust_score
dtype: float64
- name: positive_score
dtype: float64
- name: sadness_score
dtype: float64
- name: disgust_score
dtype: float64
- name: anger_score
dtype: float64
- name: anticipation_score
dtype: float64
- name: negative_score
dtype: float64
- name: fear_score
dtype: float64
- name: surprise_score
dtype: float64
- name: words
dtype: string
- name: polarity_score
dtype: float64
- name: labels
dtype: int64
- name: related_0
dtype: string
splits:
- name: '0203'
num_bytes: 1532270629
num_examples: 1025135
download_size: 415982826
dataset_size: 1532270629
configs:
- config_name: default
data_files:
- split: '0203'
path: data/0203-*
---
# Dataset Card for "SG-subzone-poi-sentiment_"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mmathys/profanity | 2023-09-27T09:01:04.000Z | [
"license:mit",
"region:us"
] | mmathys | null | null | null | 0 | 25 | ---
license: mit
---
# The Obscenity List
*by [Surge AI, the world's most powerful NLP data labeling platform and workforce](https://www.surgehq.ai)*
Ever wish you had a ready-made list of profanity? Maybe you want to remove NSFW comments, filter offensive usernames, or build content moderation tools, and you can't dream up enough obscenities on your own.
At Surge AI, we help companies build human-powered datasets to train stunning AI and NLP, and we're creating the world's largest profanity list in 20+ languages.
## Dataset
This repo contains 1600+ popular English profanities and their variations.
**Columns**
* `text`: the profanity
* `canonical_form_1`: the profanity's canonical form
* `canonical_form_2`: an additional canonical form, if applicable
* `canonical_form_3`: an additional canonical form, if applicable
* `category_1`: the profanity's primary category (see below for list of categories)
* `category_2`: the profanity's secondary category, if applicable
* `category_3`: the profanity's tertiary category, if applicable
* `severity_rating`: We asked 5 [Surge AI](https://www.surgehq.ai) data labelers to rate how severe they believed each profanity to be, on a 1-3 point scale. This is the mean of those 5 ratings.
* `severity_description`: We rounded `severity_rating` to the nearest integer. `Mild` corresponds to a rounded mean rating of `1`, `Strong` to `2`, and `Severe` to `3`.
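To illustrate how `severity_rating` and `severity_description` relate (the function below is a sketch for illustration, not part of the dataset's own tooling):

```python
# Map a mean severity rating on the 1-3 scale to its description,
# mirroring the rounding rule described above:
# 1 -> Mild, 2 -> Strong, 3 -> Severe.
def severity_description(severity_rating: float) -> str:
    labels = {1: "Mild", 2: "Strong", 3: "Severe"}
    return labels[round(severity_rating)]

# Example: 5 labelers rate a profanity 2, 2, 3, 2, 2.
ratings = [2, 2, 3, 2, 2]
mean_rating = sum(ratings) / len(ratings)  # 2.2
print(severity_description(mean_rating))   # Strong
```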
## Categories
We organized the profanity into the following categories:
- sexual anatomy / sexual acts (ass kisser, dick, pigfucker)
- bodily fluids / excrement (shit, cum)
- sexual orientation / gender (faggot, tranny, bitch, whore)
- racial / ethnic (chink, n3gro)
- mental disability (retard, dumbass)
- physical disability (quadriplegic bitch)
- physical attributes (fatass, ugly whore)
- animal references (pigfucker, jackass)
- religious offense (goddamn)
- political (China virus)
## Future
We'll be adding more languages and profanity annotations (e.g., augmenting each profanity with its severity level, type, and other variations) over time.
Check out our other [free datasets](https://www.surgehq.ai/datasets).
Sign up [here](https://forms.gle/u1SKL4zySK2wMp1r7) to receive updates on this dataset and be the first to learn about new datasets we release!
## Contact
Need a larger set of expletives and slurs, or a list of swear words in other languages (Spanish, French, German, Japanese, Portuguese, etc)? We work with top AI and content moderation companies around the world, and we love feedback. Post an issue or reach out to team@surgehq.ai!

Follow us on Twitter at [@HelloSurgeAI](https://www.twitter.com/@HelloSurgeAI).
## Original Repo
You can find the original repository here: https://github.com/surge-ai/profanity/ |
DopeorNope/combined | 2023-09-28T03:32:25.000Z | [
"region:us"
] | DopeorNope | null | null | null | 0 | 25 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 36438102
num_examples: 27085
download_size: 19659282
dataset_size: 36438102
---
# Dataset Card for "combined"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TokenBender/glaive_coder_raw_text | 2023-09-30T11:56:48.000Z | [
"license:apache-2.0",
"region:us"
] | TokenBender | null | null | null | 0 | 25 | ---
license: apache-2.0
---
|
berkouille/guanaco_golf | 2023-10-03T07:01:38.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | berkouille | null | null | null | 0 | 25 | ---
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- 1K<n<10K
--- |
mychen76/color_terms_tinyllama2 | 2023-10-01T21:50:22.000Z | [
"region:us"
] | mychen76 | null | null | null | 0 | 25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5073062.918552837
num_examples: 27109
- name: test
num_bytes: 1268406.0814471627
num_examples: 6778
- name: validation
num_bytes: 253756.07058754095
num_examples: 1356
download_size: 2950539
dataset_size: 6595225.070587541
---
# Dataset Card for "color_terms_tinyllama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rbel/jobtitles | 2023-10-09T15:53:31.000Z | [
"license:apache-2.0",
"region:us"
] | rbel | null | null | null | 0 | 25 | ---
license: apache-2.0
---
|
darcycao/finaldataset | 2023-10-09T10:19:10.000Z | [
"region:us"
] | darcycao | null | null | null | 0 | 25 | Entry not found |
euronews | 2023-01-25T14:30:08.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:de",
"language:fr",
"language:nl",
"license:cc0-1.... | null | The corpora consist of files per data provider that are encoded in the IOB format (Ramshaw & Marcus, 1995). The IOB format is a simple text chunking format that divides texts into one token per line, each followed by a whitespace-separated tag that marks named entities. The most commonly used categories for tags are PER (person), LOC (location) and ORG (organization). To mark named entities that span multiple tokens, the tags have a prefix of either B- (beginning of named entity) or I- (inside of named entity). O (outside of named entity) tags are used to mark tokens that are not a named entity. | @InProceedings{NEUDECKER16.110,
author = {Clemens Neudecker},
title = {An Open Corpus for Named Entity Recognition in Historic Newspapers},
booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
year = {2016},
month = {may},
date = {23-28},
location = {Portorož, Slovenia},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Sara Goggi and Marko Grobelnik and Bente Maegaard and Joseph Mariani and Helene Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
address = {Paris, France},
isbn = {978-2-9517408-9-1},
language = {english}
} | null | 3 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- de
- fr
- nl
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: europeana-newspapers
pretty_name: Europeana Newspapers
dataset_info:
- config_name: fr-bnf
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 3340299
num_examples: 1
download_size: 1542418
dataset_size: 3340299
- config_name: nl-kb
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 3104213
num_examples: 1
download_size: 1502162
dataset_size: 3104213
- config_name: de-sbb
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 817295
num_examples: 1
download_size: 407756
dataset_size: 817295
- config_name: de-onb
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 502369
num_examples: 1
download_size: 271252
dataset_size: 502369
- config_name: de-lft
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: train
num_bytes: 1263429
num_examples: 1
download_size: 677779
dataset_size: 1263429
---
# Dataset Card for Europeana Newspapers
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/EuropeanaNewspapers/ner-corpora)
- **Repository:** [Github](https://github.com/EuropeanaNewspapers/ner-corpora)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/L16-1689/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jplu](https://github.com/jplu) for adding this dataset. |
imdb_urdu_reviews | 2023-01-25T14:32:49.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ur",
"license:odbl",
"region:us"
] | null | Large Movie Reviews Dataset translated to Urdu.
This is a dataset for binary sentiment classification containing substantially more data than previous
benchmark datasets. We provide a set of 40,000 highly polar movie reviews for training, and 10,000 for testing.
To increase the availability of sentiment analysis datasets for a low-resource language like Urdu,
we opted to use the already available IMDB dataset, which we translated using Google Translate.
This is a binary classification dataset with two classes, positive and negative.
The reason for using this dataset is the high polarity of each class.
It contains 50k samples equally divided in two classes. | @InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y...},
title = {Learning Word Vectors for Sentiment Analysis},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
} | null | 0 | 24 | ---
annotations_creators:
- found
language_creators:
- machine-generated
language:
- ur
license:
- odbl
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ImDB Urdu Reviews
dataset_info:
features:
- name: sentence
dtype: string
- name: sentiment
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 114670811
num_examples: 50000
download_size: 31510992
dataset_size: 114670811
---
# Dataset Card for ImDB Urdu Reviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/mirfan899/Urdu)
- **Repository:** [Github](https://github.com/mirfan899/Urdu)
- **Paper:** [Aclweb](http://www.aclweb.org/anthology/P11-1015)
- **Leaderboard:**
- **Point of Contact:** [Ikram Ali](https://github.com/akkefa)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- sentence: The movie review which was translated into Urdu.
- sentiment: The sentiment exhibited in the review, either positive or negative.
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset. |
PereLluis13/spanish_speech_text | 2022-02-04T17:32:37.000Z | [
"region:us"
] | PereLluis13 | null | null | null | 1 | 24 | Entry not found |
PlanTL-GOB-ES/pharmaconer | 2022-11-18T12:06:36.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"biomedical",
"clinical",
"spanish",
"region:us"
] | PlanTL-GOB-ES | PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje (Plan TL).
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an
open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts
and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets.
The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
In terms of training examples, this translates to a total of 8074, 3764 and 3931 annotated sentences in each set.
The original dataset was distributed in Brat format (https://brat.nlplab.org/standoff.html).
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to encargo-pln-life@bsc.es | @inproceedings{,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
abstract = "",
} | null | 4 | 24 | ---
annotations_creators:
- expert-generated
language:
- es
tags:
- biomedical
- clinical
- spanish
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
license:
- cc-by-4.0
---
# PharmaCoNER
## Dataset Description
Manually classified collection of Spanish clinical case studies.
- **Homepage:** [zenodo](https://zenodo.org/record/4270158)
- **Paper:** [PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track](https://aclanthology.org/D19-5701/)
- **Point of Contact:** encargo-pln-life@bsc.es
### Dataset Summary
Manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from [SciELO](https://scielo.org/).
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets.
The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
In terms of training examples, this translates to a total of 8129, 3787 and 3952 annotated sentences in each set.
The original dataset is distributed in [Brat](https://brat.nlplab.org/standoff.html) format.
The annotation of the entire set of entity mentions was carried out by domain experts.
It includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
This dataset was designed for the PharmaCoNER task, sponsored by [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For further information, please visit [the official website](https://temu.bsc.es/pharmaconer/).
### Supported Tasks
Named Entity Recognition (NER)
### Languages
- Spanish (es)
### Directory Structure
* README.md
* pharmaconer.py
* dev-set_1.1.conll
* test-set_1.1.conll
* train-set_1.1.conll
## Dataset Structure
### Data Instances
Three four-column files, one for each split.
### Data Fields
Every file has four columns:
* 1st column: Word form or punctuation symbol
* 2nd column: Original BRAT file name
* 3rd column: Spans
* 4th column: IOB tag
#### Example
<pre>
La S0004-06142006000900008-1 123_125 O
paciente S0004-06142006000900008-1 126_134 O
tenía S0004-06142006000900008-1 135_140 O
antecedentes S0004-06142006000900008-1 141_153 O
de S0004-06142006000900008-1 154_156 O
hipotiroidismo S0004-06142006000900008-1 157_171 O
, S0004-06142006000900008-1 171_172 O
hipertensión S0004-06142006000900008-1 173_185 O
arterial S0004-06142006000900008-1 186_194 O
en S0004-06142006000900008-1 195_197 O
tratamiento S0004-06142006000900008-1 198_209 O
habitual S0004-06142006000900008-1 210_218 O
con S0004-06142006000900008-1 219_222 O
atenolol S0004-06142006000900008-1 223_231 B-NORMALIZABLES
y S0004-06142006000900008-1 232_233 O
enalapril S0004-06142006000900008-1 234_243 B-NORMALIZABLES
</pre>
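The four-column layout above can be parsed with a few lines of Python (a minimal sketch: the whitespace-separated layout is assumed from the example, and blank lines are treated as sentence separators):

```python
# Parse the four-column format shown above into
# (token, brat_file, span, iob_tag) tuples.
def parse_pharmaconer(lines):
    rows = []
    for line in lines:
        line = line.strip()
        if not line:  # blank lines separate sentences
            continue
        token, brat_file, span, tag = line.split()
        rows.append((token, brat_file, span, tag))
    return rows

example = """\
atenolol S0004-06142006000900008-1 223_231 B-NORMALIZABLES
y S0004-06142006000900008-1 232_233 O
enalapril S0004-06142006000900008-1 234_243 B-NORMALIZABLES
"""
rows = parse_pharmaconer(example.splitlines())
print(rows[0])  # ('atenolol', 'S0004-06142006000900008-1', '223_231', 'B-NORMALIZABLES')
```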
### Data Splits
| Split | Size |
| ------------- | ------------- |
| `train` | 8,129 |
| `dev` | 3,787 |
| `test` | 3,952 |
## Dataset Creation
### Curation Rationale
For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
### Source Data
#### Initial Data Collection and Normalization
Manually classified collection of clinical case report sections. The clinical cases were not restricted to a single medical discipline, covering a variety of disciplines including oncology, urology, cardiology, pneumology and infectious diseases. This is key to cover a diverse set of chemicals and drugs.
#### Who are the source language producers?
Humans, there is no machine generated data.
### Annotations
#### Annotation process
The annotation process of the PharmaCoNER corpus was inspired by previous annotation schemes and corpora used for the BioCreative CHEMDNER and GPRO tracks, translating the guidelines used for these tracks into Spanish and adapting them to the characteristics and needs of clinically oriented documents by modifying the annotation criteria and rules to cover medical information needs. This adaptation was carried out in collaboration with practicing physicians and medicinal chemistry experts. The adaptation, translation and refinement of the guidelines was done on a sample set of the SPACCC corpus and linked to an iterative process of annotation consistency analysis through interannotator agreement (IAA) studies until a high annotation quality in terms of IAA was reached.
#### Who are the annotators?
Practicing physicians and medicinal chemistry experts.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This corpus contributes to the development of medical language models in Spanish.
### Discussion of Biases
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Citation Information
```bibtex
@inproceedings{,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
}
```
### Contributions
[N/A]
|
laion/laion2B-multi | 2023-05-24T22:53:57.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | null | 31 | 24 | ---
license: cc-by-4.0
---
|
blinoff/kinopoisk | 2022-10-23T16:51:58.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] | blinoff | null | @article{blinov2013research,
title={Research of lexical approach and machine learning methods for sentiment analysis},
author={Blinov, PD and Klekovkina, Maria and Kotelnikov, Eugeny and Pestov, Oleg},
journal={Computational Linguistics and Intellectual Technologies},
volume={2},
number={12},
pages={48--58},
year={2013}
} | null | 3 | 24 | ---
language:
- ru
multilinguality:
- monolingual
pretty_name: Kinopoisk
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
### Dataset Summary
Kinopoisk movie reviews dataset (TOP250 & BOTTOM100 rank lists).
In total it contains 36,591 reviews from July 2004 to November 2012, with the following distribution along the 3-point sentiment scale:
- Good: 27,264;
- Bad: 4,751;
- Neutral: 4,576.
### Data Fields
Each sample contains the following fields:
- **part**: rank list top250 or bottom100;
- **movie_name**;
- **review_id**;
- **author**: review author;
- **date**: date of a review;
- **title**: review title;
- **grade3**: sentiment score Good, Bad or Neutral;
- **grade10**: sentiment score on a 10-point scale parsed from text;
- **content**: review text.
### Python
```python
import pandas as pd
df = pd.read_json('kinopoisk.jsonl', lines=True)
df.sample(5)
```
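The class distribution quoted above can be recomputed from the `grade3` field. A stdlib-only sketch, with three inline rows standing in for the full `kinopoisk.jsonl`:

```python
import json
from collections import Counter

# Three inline rows standing in for kinopoisk.jsonl.
rows = """
{"movie_name": "A", "grade3": "Good"}
{"movie_name": "B", "grade3": "Bad"}
{"movie_name": "C", "grade3": "Good"}
""".strip().splitlines()

counts = Counter(json.loads(line)["grade3"] for line in rows)
print(counts)  # Counter({'Good': 2, 'Bad': 1})
```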
### Citation
```
@article{blinov2013research,
title={Research of lexical approach and machine learning methods for sentiment analysis},
author={Blinov, PD and Klekovkina, Maria and Kotelnikov, Eugeny and Pestov, Oleg},
journal={Computational Linguistics and Intellectual Technologies},
volume={2},
number={12},
pages={48--58},
year={2013}
}
```
|
mathigatti/spanish_imdb_synopsis | 2022-10-25T10:12:53.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"language:es",
"license:apache-2.0",
"region:us"
] | mathigatti | null | null | null | 1 | 24 | ---
annotations_creators:
- no-annotation
language:
- es
license:
- apache-2.0
multilinguality:
- monolingual
task_categories:
- summarization
- text-generation
- text2text-generation
---
# Dataset Card for Spanish IMDb Synopsis
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
## Dataset Description
4,969 movie synopses from IMDb in Spanish.
### Dataset Summary
[N/A]
### Languages
All descriptions are in Spanish; the other fields contain a mix of Spanish and English.
## Dataset Structure
[N/A]
### Data Fields
- `description`: IMDb description of the movie (string); should be in Spanish
- `keywords`: IMDb keywords for the movie (string); mix of Spanish and English
- `genre`: the genres of the movie (string); mix of Spanish and English
- `year`: the year the movie was published (float)
- `name`: the name of the movie (string); mix of Spanish and English
- `director`: the name of the main director of the movie; can be empty (string)
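For illustration, a record following this schema can be represented as a plain dictionary — all field values below are invented, not taken from the dataset:

```python
# A hypothetical record matching the schema above.
movie = {
    "description": "Un grupo de exploradores viaja a través de un agujero de gusano.",
    "keywords": "espacio|viaje en el tiempo",
    "genre": "Aventura|Ciencia ficción",
    "year": 2014.0,                   # float, per the schema
    "name": "Interstellar",
    "director": "Christopher Nolan",  # may be empty for some records
}
print(sorted(movie))  # the six field names of the schema
```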
## Dataset Creation
[This kaggle dataset](https://www.kaggle.com/datasets/komalkhetlani/imdb-dataset) was used as a starting point. IMDb was then scraped to download the synopses of movies with more than 5,000 votes/reviews; movies without a synopsis available in Spanish were discarded. |
strombergnlp/twitter_pos_vcb | 2022-10-25T21:42:56.000Z | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | strombergnlp | Part-of-speech information is a basic NLP task. However, Twitter text
is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
This data is the vote-constrained bootstrapped data generated to support state-of-the-art results.
The data is about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.
The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged
jointly using the CMU ARK tagger and Ritter's T-POS tagger. Only when both these taggers' outputs
are completely compatible over a whole tweet, is that tweet added to the dataset.
This data is recommended for use as training data **only**, and not as evaluation data.
For more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf | @inproceedings{derczynski2013twitter,
title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},
pages={198--206},
year={2013}
} | null | 2 | 24 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- part-of-speech
paperswithcode_id: twitter-pos-vcb
pretty_name: Twitter PoS VCB
---
# Dataset Card for "twitter-pos-vcb"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://gate.ac.uk/wiki/twitter-postagger.html](https://gate.ac.uk/wiki/twitter-postagger.html)
- **Repository:** [https://github.com/GateNLP/gateplugin-Twitter](https://github.com/GateNLP/gateplugin-Twitter)
- **Paper:** [https://aclanthology.org/R13-1026.pdf](https://aclanthology.org/R13-1026.pdf)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 4.51 MiB
- **Size of the generated dataset:** 26.88 MB
- **Total amount of disk used:** 31.39 MB
### Dataset Summary
Part-of-speech information is a basic NLP task. However, Twitter text
is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style.
This data is the vote-constrained bootstrapped data generated to support state-of-the-art results.
The data is about 1.5 million English tweets annotated for part-of-speech using Ritter's extension of the PTB tagset.
The tweets are from 2012 and 2013, tokenized using the GATE tokenizer and tagged
jointly using the CMU ARK tagger and Ritter's T-POS tagger. Only when both these taggers' outputs
are completely compatible over a whole tweet, is that tweet added to the dataset.
This data is recommended for use as training data **only**, and not as evaluation data.
For more details see https://gate.ac.uk/wiki/twitter-postagger.html and https://aclanthology.org/R13-1026.pdf
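The vote-constrained filtering step can be sketched as follows — a simplified illustration, not the original pipeline; in particular, treating "completely compatible" as exact tag equality is an assumption (the real procedure also reconciles the ARK and T-POS tagsets):

```python
from typing import List, Optional

def vote_constrained(ark_tags: List[str],
                     tpos_tags: List[str]) -> Optional[List[str]]:
    """Keep a tweet only if both taggers agree on every single token.

    Returns the agreed tag sequence, or None when the outputs are not
    completely compatible over the whole tweet.
    """
    if len(ark_tags) == len(tpos_tags) and all(
        a == b for a, b in zip(ark_tags, tpos_tags)
    ):
        return ark_tags
    return None

print(vote_constrained(["RB", "JJ"], ["RB", "JJ"]))  # ['RB', 'JJ'] -> kept
print(vote_constrained(["RB", "JJ"], ["RB", "NN"]))  # None -> dropped
```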
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English, non-region-specific. `bcp47:en`
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### twitter_pos_vcb
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
```
### Data Splits
| name |tokens|sentences|
|---------|----:|---------:|
|twitter-pos-vcb|1,543,126|159,492|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Creative Commons Attribution 4.0 (CC-BY)
### Citation Information
```
@inproceedings{derczynski2013twitter,
title={Twitter part-of-speech tagging for all: Overcoming sparse and noisy data},
author={Derczynski, Leon and Ritter, Alan and Clark, Sam and Bontcheva, Kalina},
booktitle={Proceedings of the international conference recent advances in natural language processing ranlp 2013},
pages={198--206},
year={2013}
}
```
### Contributions
Author uploaded ([@leondz](https://github.com/leondz)) |
CEBaB/CEBaB | 2022-08-16T21:54:47.000Z | [
"region:us"
] | CEBaB | null | null | null | 5 | 24 | Entry not found |
strombergnlp/offenseval_2020 | 2022-05-12T10:04:57.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"arxiv:2006.07235",
"arxiv:2004.02192",
"arxiv:1908.04531",
"arxi... | strombergnlp | OffensEval 2020 features a multilingual dataset with five languages. The languages included in OffensEval 2020 are:
* Arabic
* Danish
* English
* Greek
* Turkish
The annotation follows the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019.
In this taxonomy we break down offensive content into the following three sub-tasks taking the type and target of offensive content into account.
The following sub-tasks were organized:
* Sub-task A - Offensive language identification;
* Sub-task B - Automatic categorization of offense types;
* Sub-task C - Offense target identification.
The English training data isn't included here (the text isn't available and needs rehydration of 9 million tweets;
see [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp)) | @inproceedings{zampieri-etal-2020-semeval,
title = "{S}em{E}val-2020 Task 12: Multilingual Offensive Language Identification in Social Media ({O}ffens{E}val 2020)",
author = {Zampieri, Marcos and
Nakov, Preslav and
Rosenthal, Sara and
Atanasova, Pepa and
Karadzhov, Georgi and
Mubarak, Hamdy and
Derczynski, Leon and
Pitenis, Zeses and
Coltekin, Cagri},
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://aclanthology.org/2020.semeval-1.188",
doi = "10.18653/v1/2020.semeval-1.188",
pages = "1425--1447",
} | null | 1 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- ar
- da
- en
- gr
- tr
licenses:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: OffensEval 2020
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
- text-classification-other-hate-speech-detection
extra_gated_prompt: "Warning: this repository contains harmful content (abusive language, hate speech)."
paperswithcode_id:
- dkhate
- ogtd
---
# Dataset Card for "offenseval_2020"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission](https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission)
- **Repository:**
- **Paper:** [https://aclanthology.org/2020.semeval-1.188/](https://aclanthology.org/2020.semeval-1.188/), [https://arxiv.org/abs/2006.07235](https://arxiv.org/abs/2006.07235)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
OffensEval 2020 features a multilingual dataset with five languages. The languages included in OffensEval 2020 are:
* Arabic
* Danish
* English
* Greek
* Turkish
The annotation follows the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019.
In this taxonomy we break down offensive content into the following three sub-tasks taking the type and target of offensive content into account.
The following sub-tasks were organized:
* Sub-task A - Offensive language identification;
* Sub-task B - Automatic categorization of offense types;
* Sub-task C - Offense target identification.
English training data is omitted and needs to be obtained separately (see [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp))
The source datasets come from:
* Arabic [https://arxiv.org/pdf/2004.02192.pdf](https://arxiv.org/pdf/2004.02192.pdf), [https://aclanthology.org/2021.wanlp-1.13/](https://aclanthology.org/2021.wanlp-1.13/)
* Danish [https://arxiv.org/pdf/1908.04531.pdf](https://arxiv.org/pdf/1908.04531.pdf), [https://aclanthology.org/2020.lrec-1.430/?ref=https://githubhelp.com](https://aclanthology.org/2020.lrec-1.430/)
* English [https://arxiv.org/pdf/2004.14454.pdf](https://arxiv.org/pdf/2004.14454.pdf), [https://aclanthology.org/2021.findings-acl.80.pdf](https://aclanthology.org/2021.findings-acl.80.pdf)
* Greek [https://arxiv.org/pdf/2003.07459.pdf](https://arxiv.org/pdf/2003.07459.pdf), [https://aclanthology.org/2020.lrec-1.629/](https://aclanthology.org/2020.lrec-1.629/)
* Turkish [https://aclanthology.org/2020.lrec-1.758/](https://aclanthology.org/2020.lrec-1.758/)
### Supported Tasks and Leaderboards
* [OffensEval 2020](https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission)
### Languages
Five languages are covered: bcp47 `ar;da;en;gr;tr`
## Dataset Structure
There are five named configs, one per language:
* `ar` Arabic
* `da` Danish
* `en` English
* `gr` Greek
* `tr` Turkish
The training data for English is absent - this is 9M tweets that need to be rehydrated on their own. See [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp)
### Data Instances
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'PLACEHOLDER TEXT',
'subtask_a': 1,
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string`.
- `subtask_a`: whether or not the instance is offensive; `0: NOT, 1: OFF`
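The integer `subtask_a` values map back to the OLID label names — a small helper over instances shaped like the example above:

```python
LABELS = {0: "NOT", 1: "OFF"}

def label_name(instance: dict) -> str:
    """Map the integer sub-task A label back to its OLID name."""
    return LABELS[instance["subtask_a"]]

examples = [
    {"id": "0", "text": "PLACEHOLDER TEXT", "subtask_a": 1},
    {"id": "1", "text": "PLACEHOLDER TEXT", "subtask_a": 0},
]
print([label_name(ex) for ex in examples])  # ['OFF', 'NOT']
```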
### Data Splits
| name |train|test|
|---------|----:|---:|
|ar|7839|1827|
|da|2961|329|
|en|0|3887|
|gr|8743|1544|
|tr|31277|3515|
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification. The rationale differs for each source dataset.
### Source Data
#### Initial Data Collection and Normalization
Varies per language dataset
#### Who are the source language producers?
Social media users
### Annotations
#### Annotation process
Varies per language dataset
#### Who are the annotators?
Varies per language dataset; native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, e.g. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated by each sub-part's paper authors.
### Licensing Information
This data is available and distributed under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{zampieri-etal-2020-semeval,
title = "{S}em{E}val-2020 Task 12: Multilingual Offensive Language Identification in Social Media ({O}ffens{E}val 2020)",
author = {Zampieri, Marcos and
Nakov, Preslav and
Rosenthal, Sara and
Atanasova, Pepa and
Karadzhov, Georgi and
Mubarak, Hamdy and
Derczynski, Leon and
Pitenis, Zeses and
{\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}},
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://aclanthology.org/2020.semeval-1.188",
doi = "10.18653/v1/2020.semeval-1.188",
pages = "1425--1447",
abstract = "We present the results and the main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval-2020). The task included three subtasks corresponding to the hierarchical taxonomy of the OLID schema from OffensEval-2019, and it was offered in five languages: Arabic, Danish, English, Greek, and Turkish. OffensEval-2020 was one of the most popular tasks at SemEval-2020, attracting a large number of participants across all subtasks and languages: a total of 528 teams signed up to participate in the task, 145 teams submitted official runs on the test data, and 70 teams submitted system description papers.",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
|
embedding-data/WikiAnswers | 2022-08-02T03:33:01.000Z | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"region:us"
] | embedding-data | null | null | null | 1 | 24 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/WikiAnswers
pretty_name: WikiAnswers
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "WikiAnswers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/afader/oqa#wikianswers-corpus](https://github.com/afader/oqa#wikianswers-corpus)
- **Repository:** [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
- **Paper:** [More Information Needed](https://doi.org/10.1145/2623330.2623677)
- **Point of Contact:** [Anthony Fader](https://dl.acm.org/profile/81324489111), [Luke Zettlemoyer](https://dl.acm.org/profile/81100527621), [Oren Etzioni](https://dl.acm.org/profile/99658633129)
### Dataset Summary
The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases.
Each cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with the key "set" and a list of the sentences as the value.
```
{"set": [sentence_1, sentence_2, ..., sentence_25]}
{"set": [sentence_1, sentence_2, ..., sentence_25]}
...
{"set": [sentence_1, sentence_2, ..., sentence_25]}
```
This dataset is useful for training Sentence Transformers models. Refer to the Sentence Transformers documentation for how to train models using sets of similar sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/WikiAnswers")
```
The dataset is loaded as a `DatasetDict` and has the format for `N` examples:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: N
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
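For paraphrase mining or Sentence Transformers training, each cluster can be expanded into positive sentence pairs — a minimal sketch over one invented cluster:

```python
from itertools import combinations

def cluster_to_pairs(cluster: dict) -> list:
    """Expand a {"set": [...]} cluster into all positive sentence pairs."""
    return list(combinations(cluster["set"], 2))

cluster = {"set": [
    "How tall is the Eiffel Tower?",
    "What is the height of the Eiffel Tower?",
    "Eiffel Tower height?",
]}
pairs = cluster_to_pairs(cluster)
print(len(pairs))  # 3; a full 25-sentence cluster yields 300 pairs
```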
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the source language producers?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the annotators?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Personal and Sensitive Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Discussion of Biases
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Other Known Limitations
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Licensing Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Citation Information
```
@inproceedings{Fader14,
author = {Anthony Fader and Luke Zettlemoyer and Oren Etzioni},
title = {{Open Question Answering Over Curated and Extracted
Knowledge Bases}},
booktitle = {KDD},
year = {2014}
}
```
### Contributions
|
demelin/wino_x | 2022-07-15T22:28:18.000Z | [
"task_categories:translation",
"task_ids:multiple-choice-qa",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:1K<n<10K",
... | demelin | Wino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English
counterparts, used to examine whether neural machine translation models can perform coreference resolution that
requires commonsense knowledge and whether multilingual language models are capable of commonsense reasoning across
multiple languages. | @inproceedings{Emelin2021WinoXMW,
title={Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution},
author={Denis Emelin and Rico Sennrich},
booktitle={EMNLP},
year={2021}
} | null | 1 | 24 | ---
annotations_creators:
- no-annotation
language:
- en
- de
- fr
- ru
language_creators:
- machine-generated
- expert-generated
license:
- mit
multilinguality:
- multilingual
- translation
pretty_name: Wino-X
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
- coreference resolution
- commonsense reasoning
task_ids:
- multiple-choice-qa
- language-modeling
---
# Dataset Card for Wino-X
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Wino-X repository](https://github.com/demelin/Wino-X)
- **Repository:** [Wino-X repository](https://github.com/demelin/Wino-X)
- **Paper:** [Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution](https://aclanthology.org/2021.emnlp-main.670/)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](demelin.github.io)
### Dataset Summary
Wino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English
counterparts, used to examine whether neural machine translation models can perform coreference resolution that
requires commonsense knowledge, and whether multilingual language models are capable of commonsense reasoning across
multiple languages.
### Supported Tasks and Leaderboards
- translation: The dataset can be used to evaluate translations of ambiguous source sentences, as produced by translation models. A [pretrained transformer-based NMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) can be used for this purpose.
- coreference-resolution: The dataset can be used to rank alternative translations of an ambiguous source sentence that differ in the chosen referent of an ambiguous source pronoun. A [pretrained transformer-based NMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) can be used for this purpose.
- commonsense-reasoning: The dataset can also be used to evaluate whether pretrained multilingual language models can perform commonsense reasoning in (or across) multiple languages by identifying the correct filler in a cloze completion task. An [XLM-based model](https://huggingface.co/xlm-roberta-base) can be used for this purpose.
### Languages
The dataset (both its MT and LM portions) is available in the following translation pairs: English-German, English-French, English-Russian. All English sentences included in *Wino-X* were extracted from publicly available parallel corpora, as detailed in the accompanying paper, and represent the dataset-specific language varieties. All non-English sentences were obtained through machine translation and may, as such, exhibit features of translationese.
## Dataset Structure
### Data Instances
The following represents a typical *MT-Wino-X* instance (for the English-German translation pair):
{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"translation1": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil sie zu klein war.",
"translation2": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil er zu klein war.",
"answer": 1,
"pronoun1": "sie",
"pronoun2": "er",
"referent1_en": "vase",
"referent2_en": "bouquet",
"true_translation_referent_of_pronoun1_de": "Vase",
"true_translation_referent_of_pronoun2_de": "Blumenstrauß",
"false_translation_referent_of_pronoun1_de": "Vase",
"false_translation_referent_of_pronoun2_de": "Blumenstrauß"}
The following represents a typical *LM-Wino-X* instance (for the English-French translation pair):
{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"context_en": "The woman looked for a different vase for the bouquet because _ was too small.",
"context_fr": "La femme a cherché un vase différent pour le bouquet car _ était trop petit.",
"option1_en": "the bouquet",
"option2_en": "the vase",
"option1_fr": "le bouquet",
"option2_fr": "le vase",
"answer": 2,
"context_referent_of_option1_fr": "bouquet",
"context_referent_of_option2_fr": "vase"}
### Data Fields
For *MT-Wino-X*:
- "qID": Unique identifier ID for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "translation1": First translation candidate.
- "translation2": Second translation candidate.
- "answer": ID of the correct translation.
- "pronoun1": Translation of the ambiguous source pronoun in translation1.
- "pronoun2": Translation of the ambiguous source pronoun in translation2.
- "referent1_en": English referent of the translation of the ambiguous source pronoun in translation1.
- "referent2_en": English referent of the translation of the ambiguous source pronoun in translation2.
- "true_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the correct translation.
- "true_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the correct translation.
- "false_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the incorrect translation.
- "false_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the incorrect translation.
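Contrastive evaluation over these fields amounts to asking which candidate a model scores higher and comparing the choice with `answer` — a sketch in which the scoring function is a toy stand-in (a real evaluation would use an NMT model's per-token log-probabilities):

```python
def pick_translation(instance: dict, score) -> int:
    """Return 1 or 2 for the candidate the scorer prefers."""
    s1 = score(instance["sentence"], instance["translation1"])
    s2 = score(instance["sentence"], instance["translation2"])
    return 1 if s1 >= s2 else 2

def accuracy(instances, score) -> float:
    hits = sum(pick_translation(x, score) == x["answer"] for x in instances)
    return hits / len(instances)

# Toy stand-in scorer: prefers the candidate containing "sie".
toy_score = lambda src, hyp: 1.0 if " sie " in f" {hyp} " else 0.0

instances = [{
    "sentence": "The woman looked for a different vase for the bouquet because it was too small.",
    "translation1": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil sie zu klein war.",
    "translation2": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil er zu klein war.",
    "answer": 1,
}]
print(accuracy(instances, toy_score))  # 1.0
```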
For *LM-Wino-X*:
- "qID": Unique identifier ID for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "context_en": Same English sentence, where 'it' is replaced by a gap.
- "context_fr": Target language translation of the English sentence, where the translation of 'it' is replaced by a gap.
- "option1_en": First filler option for the English sentence.
- "option2_en": Second filler option for the English sentence.
- "option1_[TGT-LANG]": First filler option for the target language sentence.
- "option2_[TGT-LANG]": Second filler option for the target language sentence.
- "answer": ID of the correct gap filler.
- "context_referent_of_option1_[TGT-LANG]": English translation of option1_[TGT-LANG].
- "context_referent_of_option2_[TGT-LANG]": English translation of option2_[TGT-LANG]
### Data Splits
*Wino-X* was designed as an evaluation-only benchmark and is therefore intended for zero-shot testing only. However, users are very welcome to split the data as they wish :)
## Dataset Creation
### Curation Rationale
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Source Data
#### Initial Data Collection and Normalization
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
#### Who are the source language producers?
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Annotations
#### Annotation process
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
#### Who are the annotators?
Annotations were generated automatically and verified by the dataset author / curator for correctness.
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Discussion of Biases
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Other Known Limitations
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
## Additional Information
### Dataset Curators
[Denis Emelin](https://demelin.github.io)
### Licensing Information
MIT
### Citation Information
@inproceedings{Emelin2021WinoXMW,
title={Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution},
author={Denis Emelin and Rico Sennrich},
booktitle={EMNLP},
year={2021}
} |
allenai/ms2_sparse_mean | 2022-11-24T16:29:28.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==17`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4333 | 0.2163 | 0.2051 | 0.2197 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.3780 | 0.1827 | 0.1815 | 0.1792 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.3928 | 0.1898 | 0.1951 | 0.1820 | |
allenai/multinews_sparse_mean | 2022-11-24T21:37:31.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | null | 2 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==3`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8793 | 0.7460 | 0.6403 | 0.7417 |
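For reference, the set-based metrics reported in these tables can be computed as in the following minimal sketch (toy document ids, not the real corpus):

```python
def precision_recall_at_k(retrieved, relevant, k):
    """Set-based Precision@k and Recall@k for a single query."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / k, hits / len(relevant)

def rprec(retrieved, relevant):
    """R-precision: Precision@R, where R is the number of relevant docs."""
    r = len(relevant)
    return len(set(retrieved[:r]) & set(relevant)) / r

ranked = ["d7", "d2", "d9", "d5"]  # toy ranking for one query
gold = ["d2", "d5"]                # toy relevance judgements
p_at_3, r_at_3 = precision_recall_at_k(ranked, gold, k=3)  # (1/3, 1/2)
rp = rprec(ranked, gold)           # ranked[:2] hits only "d2" -> 0.5
```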
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8748 | 0.7453 | 0.6361 | 0.7442 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8775 | 0.7480 | 0.6370 | 0.7443 | |
allenai/cochrane_sparse_max | 2022-11-24T14:50:26.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7014 | 0.3841 | 0.1698 | 0.5471 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7226 | 0.4023 | 0.1729 | 0.5676 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. |
allenai/cochrane_sparse_mean | 2022-11-24T15:04:01.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==9`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7014 | 0.3841 | 0.2976 | 0.4157 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7226 | 0.4023 | 0.3095 | 0.4443 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. |
allenai/cochrane_sparse_oracle | 2022-11-24T14:54:01.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7014 | 0.3841 | 0.3841 | 0.3841 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7226 | 0.4023 | 0.4023 | 0.4023 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. |
allenai/wcep_sparse_oracle | 2022-11-24T15:58:43.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8753 | 0.6443 | 0.6443 | 0.6443 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8706 | 0.6280 | 0.6280 | 0.6280 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8836 | 0.6658 | 0.6658 | 0.6658 | |
osanseviero/dummy_ja_audio | 2022-10-07T14:23:30.000Z | [
"region:us"
] | osanseviero | null | null | null | 0 | 24 | Entry not found |
allenai/multixscience_dense_max | 2022-11-18T19:56:15.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | null | 1 | 24 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==20`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.0573 | 0.3785 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.059 | 0.3831 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.058 | 0.3794 | |
allenai/multixscience_dense_mean | 2022-11-18T19:58:51.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==4`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5270 | 0.2005 | 0.1551 | 0.2357 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5310 | 0.2026 | 0.1603 | 0.2432 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.5229 | 0.2081 | 0.1612 | 0.2440 | |
allenai/cochrane_dense_mean | 2022-11-18T19:44:03.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==9`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.3438 | 0.4800 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.3534 | 0.4913 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. |
allenai/cochrane_dense_oracle | 2022-11-18T19:46:14.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.4487 | 0.4487 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.4424 | 0.4424 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. |
allenai/ms2_dense_oracle | 2022-11-18T19:48:14.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-MS^2",
"source_datasets:extended|other-Cochrane",
"lang... | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of the `train`, `validation`, and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4764 | 0.2395 | 0.2395 | 0.2395 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4364 | 0.2125 | 0.2125 | 0.2125 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.4481 | 0.2224 | 0.2224 | 0.2224 | |
allenai/wcep_dense_max | 2022-11-18T20:00:07.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8590 | 0.6490 | 0.5967 | 0.6631 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8578 | 0.6326 | 0.6040 | 0.6401 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8678 | 0.6631 | 0.6301 | 0.6740 | |
allenai/multinews_dense_max | 2022-11-11T01:29:44.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8661 | 0.6867 | 0.2118 | 0.7966 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8626 | 0.6859 | 0.2083 | 0.7949 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8625 | 0.6927 | 0.2096 | 0.7971 | |
allenai/multinews_dense_mean | 2022-11-19T04:38:47.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | allenai | null | null | null | 0 | 24 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
---
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==3`
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8661 | 0.6867 | 0.5936 | 0.6917 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8626 | 0.6859 | 0.5874 | 0.6925 |
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.8625 | 0.6927 | 0.5938 | 0.6993 | |
bigbio/jnlpba | 2022-12-22T15:44:48.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-3.0",
"region:us"
] | bigbio | NER For Bio-Entities | @inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop
on Natural Language Processing in Biomedicine and its Applications
({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th", year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
} | null | 0 | 24 |
---
language:
- en
bigbio_language:
- English
license: cc-by-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_3p0
pretty_name: JNLPBA
homepage: http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for JNLPBA
## Dataset Description
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
NER For Bio-Entities
## Citation Information
```
@inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop
on Natural Language Processing in Biomedicine and its Applications
({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th", year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
}
```
|
NeelNanda/code-tokenized | 2022-11-14T00:05:01.000Z | [
"region:us"
] | NeelNanda | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 2436318372
num_examples: 297257
download_size: 501062424
dataset_size: 2436318372
---
# Dataset Card for "code-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NeelNanda/c4-tokenized-2b | 2022-11-14T00:26:59.000Z | [
"region:us"
] | NeelNanda | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 11145289620
num_examples: 1359845
download_size: 2530851147
dataset_size: 11145289620
---
# Dataset Card for "c4-tokenized-2b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
plphuc017/vocal_dataset | 2022-11-26T05:41:49.000Z | [
"region:us"
] | plphuc017 | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 679382539.464
num_examples: 1057
- name: test
num_bytes: 167054773.0
num_examples: 264
download_size: 832476390
dataset_size: 846437312.464
---
# Dataset Card for "vocal_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZIZOU/arabicSquadSplitted | 2022-11-28T12:52:56.000Z | [
"license:unknown",
"region:us"
] | ZIZOU | null | null | null | 0 | 24 | ---
license: unknown
---
|
bongsoo/news_talk_ko_en | 2023-01-17T01:31:55.000Z | [
"license:apache-2.0",
"region:us"
] | bongsoo | null | null | null | 0 | 24 | ---
license: apache-2.0
---
|
pszemraj/govreport-summarization-8192 | 2023-04-21T22:17:46.000Z | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"source_datasets:ccdv/govreport-summarization",
"language:en",
"license:apache-2.0",
"govreport",
"long document",
"region:us"
] | pszemraj | null | null | null | 1 | 24 | ---
task_categories:
- summarization
language:
- en
pretty_name: GovReport Summarization - 8192 tokens
size_categories:
- 1K<n<10K
source_datasets: ccdv/govreport-summarization
license: apache-2.0
tags:
- govreport
- long document
---
# GovReport Summarization - 8192 tokens
- `ccdv/govreport-summarization` with the following changes:
- data cleaned with the [clean-text python package](https://pypi.org/project/clean-text/)
- total tokens for each column computed and added in new columns according to the `long-t5` tokenizer (_done **after** cleaning_)
---
## train info
```python
RangeIndex: 8200 entries, 0 to 8199
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 report 8200 non-null string
1 summary 8200 non-null string
2 input_token_len 8200 non-null Int64
3 summary_token_len 8200 non-null Int64
dtypes: Int64(2), string(2)
memory usage: 272.4 KB
```
## token length distribution (long-t5)

--- |
BuroIdentidadDigital/recibos_cfe | 2023-10-03T00:22:04.000Z | [
"license:c-uda",
"region:us"
] | BuroIdentidadDigital | null | null | null | 1 | 24 | ---
license: c-uda
---
|
GBaker/MedQA-USMLE-4-options-hf-MiniLM-IR-cs | 2023-02-11T23:26:10.000Z | [
"region:us"
] | GBaker | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sent1
dtype: string
- name: sent2
dtype: string
- name: ending0
dtype: string
- name: ending1
dtype: string
- name: ending2
dtype: string
- name: ending3
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 1933180
num_examples: 1273
- name: validation
num_bytes: 1905261
num_examples: 1272
- name: train
num_bytes: 15360790
num_examples: 10178
download_size: 11125239
dataset_size: 19199231
---
# Dataset Card for "MedQA-USMLE-4-options-hf-MiniLM-IR-cs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceH4/helpful-self-instruct-raw | 2023-02-15T16:04:31.000Z | [
"license:apache-2.0",
"human-feedback",
"region:us"
] | HuggingFaceH4 | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: demonstration
dtype: string
splits:
- name: train
num_bytes: 20412870
num_examples: 82612
download_size: 12532431
dataset_size: 20412870
license: apache-2.0
tags:
- human-feedback
---
# Dataset Card for "helpful-self-instruct-raw"
This dataset is derived from the `finetuning` subset of [Self-Instruct](https://github.com/yizhongw/self-instruct), with some light formatting to remove trailing spaces and `<|endoftext|>` tokens.
|
Supermaxman/esa-hubble | 2023-02-26T13:20:26.000Z | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"space",
"region:us"
] | Supermaxman | null | null | null | 8 | 24 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: credits
dtype: string
- name: url
dtype: string
- name: Id
dtype: string
- name: Type
dtype: string
- name: Release date
dtype: string
- name: Related releases
dtype: string
- name: Size
dtype: string
- name: Name
dtype: string
- name: Distance
dtype: string
- name: Constellation
dtype: string
- name: Category
dtype: string
- name: Position (RA)
dtype: string
- name: Position (Dec)
dtype: string
- name: Field of view
dtype: string
- name: Orientation
dtype: string
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_size
dtype: int64
- name: crop_w
dtype: int64
- name: crop_h
dtype: int64
- name: cropped
dtype: bool
- name: Related science announcements
dtype: string
- name: Related announcements
dtype: string
splits:
- name: train
num_bytes: 94474695584.124
num_examples: 2706
download_size: 61236366105
dataset_size: 94474695584.124
license: cc-by-4.0
task_categories:
- text-to-image
language:
- en
tags:
- space
pretty_name: ESA Hubble Deep Space Images & Captions
size_categories:
- 1K<n<10K
---
# Dataset Card for ESA Hubble Deep Space Images & Captions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Examples](#examples)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ESA Hubble](https://esahubble.org/)
- **Repository:** [Hubble Diffusion repository](https://github.com/Supermaxman/hubble-diffusion)
- **Point of Contact:** [Maxwell Weinzierl](mailto:maxwell.weinzierl@utdallas.edu)
### Dataset Summary
The ESA Hubble Deep Space Images & Captions dataset is composed primarily of Hubble deep space scans as high-resolution images,
along with textual descriptions written by ESA/Hubble. Metadata is also included, which enables more detailed filtering and understanding of massive space scans.
The purpose of this dataset is to enable text-to-image generation methods for generating high-quality deep space scans from prompts.
Check out [Hubble Diffusion v2](https://huggingface.co/Supermaxman/hubble-diffusion-2) for an example of a model trained on this dataset!
### Examples
#### A grazing encounter between two spiral galaxies
> In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.
>
>
> Credit: NASA/ESA and The Hubble Heritage Team (STScI)
#### The magnificent starburst galaxy Messier 82
> This mosaic image of the magnificent starburst galaxy, Messier 82 (M82) is the sharpest wide-angle view ever obtained of M82. It is a galaxy remarkable for its webs of shredded clouds and flame-like plumes of glowing hydrogen blasting out from its central regions where young stars are being born 10 times faster than they are inside our Milky Way Galaxy.
>
>
> Credit: NASA, ESA and the Hubble Heritage Team (STScI/AURA). Acknowledgment: J. Gallagher (University of Wisconsin), M. Mountain (STScI) and P. Puxley (NSF).
#### Extreme star cluster bursts into life in new Hubble image
> The star-forming region NGC 3603 - seen here in the latest Hubble Space Telescope image - contains one of the most impressive massive young star clusters in the Milky Way. Bathed in gas and dust the cluster formed in a huge rush of star formation thought to have occurred around a million years ago. The hot blue stars at the core are responsible for carving out a huge cavity in the gas seen to the right of the star cluster in NGC 3603's centre.
>
>
> Credit: NASA, ESA and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration
#### Statistics
- There are a total of 2,706 deep space images
- The complete uncompressed size of the dataset is 120 GB, so definitely make use of [Streaming](https://huggingface.co/docs/datasets/stream)
- The average image is 44 MB, while the max image size is 432 MB
- The average image has a height of 2,881 pixels, and an average width of 3,267 pixels
### Supported Tasks and Leaderboards
- `text-to-image`: The dataset can be used to train a model for conditional image generation from text. A conditional text-to-image generation model is presented with a text prompt, and is asked to generate an image which aligns with that text prompt. Model performance is typically measured by human judgement, as it is difficult to automatically measure the quality of generated images and how closely they match the text prompt. An example of a text-to-image model is [Stable Diffusion v2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1). An example of a text-to-image model trained on this dataset is [Hubble Diffusion v2](https://huggingface.co/Supermaxman/hubble-diffusion-2).
### Languages
The text describing the images in the dataset is in English, as written by the writers from ESA/Hubble at [https://esahubble.org/](https://esahubble.org/). The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point comprises a high-quality deep space scan as an image, along with a textual description of that image produced by ESA/Hubble.
The textual description was derived by combining the `title` and the `description` of the image from the ESA/Hubble website.
Additionally, each data point also contains significant metadata about the image, such as the type of image, credits, the URL, the release date, and more.
An example looks as follows:
```json
{
"image": "<encoded image>",
"text":"A grazing encounter between two spiral galaxies: In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.",
"id":"opo9941a",
"title":"A grazing encounter between two spiral galaxies",
"description":"In the direction of the constellation Canis Major, two spiral galaxies pass by each other like majestic ships in the night. The near-collision has been caught in images taken by the NASA/ESA Hubble Space Telescope and its Wide Field Planetary Camera 2.",
"credits":"NASA/ESA and The Hubble Heritage Team (STScI)",
"url":"https://esahubble.org/images/opo9941a/",
"Id":"opo9941a",
"Type":"Local Universe : Galaxy : Type : Interacting",
"Release date":"4 November 1999, 07:00",
"Size":"2907 x 1486 px",
"Name":"IC 2163, NGC 2207",
"Distance":"110 million light years",
"Constellation":"Canis Major",
"Category":"Galaxies",
"Position (RA)":"6 16 25.10",
"Position (Dec)":"-21° 22' 34.62\"",
"Field of view":"4.82 x 2.47 arcminutes",
"Orientation":"North is 191.2\u00b0 right of vertical",
"width":2907,
"height":1486,
"file_size":12959406,
"crop_w":0,
"crop_h":0,
"cropped":false
}
```
### Data Fields
- `image`: encoded RGB `.png` image of the deep space scan
- `text`: text description of image, a combination of `title` + ': ' + `description`
- `id`: id of the image from ESA/Hubble
- `title`: textual title of image from ESA/Hubble URL
- `description`: textual description of image from ESA/Hubble URL
- `credits`: required credits for each image from ESA/Hubble URL
- `url`: ESA/Hubble URL
- `Id`: id of the image from ESA/Hubble (from website metadata)
- `Type`: type of deep space scan
- `Release date`: release date of deep space scan
- `Size`: size of original image
- `Name`: name of celestial entities present in image
- `Distance`: distance from celestial entities present in image
- `Constellation`: constellation of celestial entities present in image
- `Category`: category of celestial entities present in image
- `Position (RA)`: coordinates for deep space scan used by Hubble telescope
- `Position (Dec)`: coordinates for deep space scan used by Hubble telescope
- `Field of view`: coordinates for deep space scan used by Hubble telescope
- `Orientation`: coordinates for deep space scan used by Hubble telescope
- `width`: width of image, same if the image did not need to be cropped, but otherwise could differ from `Size`
- `height`: height of image, same if the image did not need to be cropped, but otherwise could differ from `Size`
- `file_size`: `width` x `height` x 3 bytes, used to estimate size of raw images
- `crop_w`: width starting point of image if cropped, otherwise 0
- `crop_h`: height starting point of image if cropped, otherwise 0
- `cropped`: whether this image needed to be cropped or not
### Data Splits
The data is only provided in a single training split, as the purpose of the dataset is additional fine-tuning for the task of `text-to-image` generation.
## Dataset Creation
### Curation Rationale
The ESA Hubble Deep Space Images & Captions dataset was built to provide ease of access to extremely high-quality Hubble deep space scans.
Images from the Hubble telescope have already inspired millions, and the hope is that this dataset can be used to create inspiring models and approaches to further push interest in space & cosmology.
### Source Data
#### Initial Data Collection
All images were collected from [https://esahubble.org/](https://esahubble.org/).
Fullsize Original images & metadata were crawled from the ESA Hubble website using [Scrapy](https://scrapy.org/).
Images were downloaded as `.tiff` files, while
additional metadata was later collected for each image using the following [code](https://github.com/Supermaxman/hubble-diffusion).
As the ESA Hubble website collects images from a wide variety of sources, images were filtered to try to avoid any non-space scan images as follows:
The ESA Hubble [Advanced Image Search](http://esahubble.org/images/archive/search) enables the following filtering parameters:
- images with Minimum size greater than or equal to 400x300
- Ranking greater than or equal to Fair or better
- Type containing 'Observation'
This significantly reduced the number of images that had nothing to do with Hubble deep space scans.
A total of around 3,000 space images were collected with this method.
#### Filtering
Further automatic and manual filtering was performed to remove the following:
- improperly classified images
- space renders
- diagrams with text
- images of celestial bodies within our solar system
- images with too low a resolution
This brought the total number of deep space images down to 2,593.
This process was not perfect, and there likely remain some images in the dataset that should be removed in the future.
#### Preprocessing
Some of the deep space scans were as large as 34,372x19,345, with a bit depth of 24 (nearly 2 GB).
Unfortunately, these images were too large to upload easily.
Therefore, images were automatically subdivided in half if they were above 12,000 pixels in either height or width.
Subdivided images were also tagged with additional metadata, such that users can reconstruct the original images if they would prefer.
Otherwise, metadata was copied across subdivided images.
Additionally, images were converted from RGB/RGBX `.tiff` to RGB `.png` files to avoid encoding issues.
This process resulted in 2,706 final deep space images.
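The subdivision step above can be sketched as follows. This is a minimal, stdlib-only illustration of a single halving along the longer side when either dimension exceeds 12,000 pixels; the dataset's actual preprocessing code is not reproduced here, and the real pipeline may have applied the step differently (e.g. repeatedly for the very largest scans).

```python
def split_boxes(width, height, limit=12000):
    """Return (left, upper, right, lower) crop boxes, halving the image
    along its longer side when either dimension exceeds `limit`."""
    if width <= limit and height <= limit:
        return [(0, 0, width, height)]  # small enough: keep whole image
    if width >= height:
        half = width // 2
        return [(0, 0, half, height), (half, 0, width, height)]
    half = height // 2
    return [(0, 0, width, half), (0, half, width, height)]
```

Boxes in this form can be passed directly to `PIL.Image.crop`; the dataset's `crop_w`/`crop_h` fields record the corresponding offsets for subdivided images.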
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help inspire people to be interested in astronomy.
A system that succeeds at text-to-image generation would be able to generate inspiring deep space scans, providing interesting and inspiring art for those interested in space. This dataset provides a starting-point for building such a system by providing text and image pairs for Hubble deep space scans.
### Discussion of Biases
It is unfortunate that we currently only have English captions for these deep space scans.
In the future, expanding these captions to more languages could help spread interest in astronomy far and wide.
Additionally, these captions may be too technical for the average person to effectively utilize for a text-to-image model.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The dataset was initially created by all the wonderful researchers, engineers, scientists, and more behind the Hubble Telescope, NASA, and the ESA.
Maxwell Weinzierl collected, filtered, and preprocessed this data for ease of use.
### Licensing Information
ESA/Hubble images, videos and web texts are released under the [Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/)
and may on a non-exclusive basis be reproduced without fee provided they are clearly and visibly credited.
See [https://esahubble.org/copyright/](https://esahubble.org/copyright/) for additional conditions for reproduction and copyright.
### Citation Information
If you use this dataset, please cite it as:
```bibtex
@misc{weinzierl2023hubble,
author = {Weinzierl, Maxwell A.},
title = {ESA Hubble Deep Space Images & Captions},
year={2023},
howpublished= {\url{https://huggingface.co/datasets/Supermaxman/esa-hubble}}
}
```
### Contributions
Thanks to [@supermaxman](https://github.com/supermaxman) for adding this dataset.
|
amydeng2000/strategy-qa | 2023-02-23T01:57:00.000Z | [
"region:us"
] | amydeng2000 | null | null | null | 0 | 24 | Entry not found |
dctanner/oa_recipes | 2023-02-24T13:42:50.000Z | [
"region:us"
] | dctanner | null | null | null | 3 | 24 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 7600684
num_examples: 4747
download_size: 3325663
dataset_size: 7600684
---
# Dataset Card for Recipes dialogue
Derived from the Kaggle dataset [Recipes from Tasty](https://www.kaggle.com/datasets/zeeenb/recipes-from-tasty), we turn the recipe ingredients and instructions into chat dialogue using a preset list of user prompt templates.
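A minimal sketch of this templating step might look like the following; the prompt templates and helper names are hypothetical, as the actual preset list is not published in this card.

```python
import random

# Hypothetical prompt templates; the dataset's actual preset list is not shown here.
TEMPLATES = [
    "How do I make {name}?",
    "Can you give me the recipe for {name}?",
    "What are the steps to prepare {name}?",
]

def to_dialogue(name, ingredients, steps, rng=random):
    instruction = rng.choice(TEMPLATES).format(name=name)
    response = (
        "Ingredients:\n"
        + "\n".join(f"- {item}" for item in ingredients)
        + "\n\nInstructions:\n"
        + "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    )
    return {"INSTRUCTION": instruction, "RESPONSE": response, "SOURCE": "tasty_recipes"}
```

Each recipe row would thus yield one `INSTRUCTION`/`RESPONSE`/`SOURCE` record matching the schema above.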
Dataset license: CC0: Public Domain. |
Isotonic/human_assistant_conversation | 2023-08-31T07:31:15.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"language:es",
"language:zh",
"license:afl-3.0",
"region:us"
] | Isotonic | null | null | null | 3 | 24 | ---
license: afl-3.0
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2724591096.91667
num_examples: 1494223
- name: test
num_bytes: 681148230.08333
num_examples: 373556
download_size: 1996990227
dataset_size: 3405739327.0
task_categories:
- text-generation
- conversational
language:
- en
- es
- zh
size_categories:
- 100K<n<1M
--- |
DominusTea/GreekLegalSum | 2023-03-19T17:40:22.000Z | [
"task_categories:summarization",
"size_categories:100M<n<1B",
"language:el",
"license:cc-by-nc-4.0",
"region:us"
] | DominusTea | null | null | null | 1 | 24 | ---
license: cc-by-nc-4.0
task_categories:
- summarization
language:
- el
pretty_name: Greek Court Summarization Dataset
size_categories:
- 100M<n<1B
--- |
reginaboateng/cleaned_pubmedqa | 2023-07-21T14:19:05.000Z | [
"language:en",
"region:us"
] | reginaboateng | null | null | null | 1 | 24 | ---
language: en
dataset_info:
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 443501057
num_examples: 211269
- name: validation
num_bytes: 2052168
num_examples: 1000
download_size: 234483083
dataset_size: 445553225
---
# Dataset Card for "cleaned_pubmedqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kuleshov/alpaca-data | 2023-04-13T23:05:16.000Z | [
"region:us"
] | kuleshov | null | null | null | 0 | 24 | Entry not found |
mstz/yeast | 2023-04-25T09:22:12.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"yeast",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_yeast_110,
author = {Nakai,Kenta},
title = {{Yeast}},
year = {1996},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5KG68}}
} | null | 0 | 24 | ---
language:
- en
tags:
- yeast
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Yeast
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- yeast
- yeast_0
- yeast_1
- yeast_2
- yeast_3
- yeast_4
- yeast_5
- yeast_6
- yeast_7
- yeast_8
- yeast_9
license: cc
---
# Yeast
The [Yeast dataset](https://archive-beta.ics.uci.edu/dataset/110/yeast) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/yeast")["train"]
```
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| yeast | Multiclass classification.| |
| yeast_0 | Binary classification. | Is the instance of class 0? |
| yeast_1 | Binary classification. | Is the instance of class 1? |
| yeast_2 | Binary classification. | Is the instance of class 2? |
| yeast_3 | Binary classification. | Is the instance of class 3? |
| yeast_4 | Binary classification. | Is the instance of class 4? |
| yeast_5 | Binary classification. | Is the instance of class 5? |
| yeast_6 | Binary classification. | Is the instance of class 6? |
| yeast_7 | Binary classification. | Is the instance of class 7? |
| yeast_8 | Binary classification. | Is the instance of class 8? |
| yeast_9 | Binary classification. | Is the instance of class 9? | |
supremezxc/nlpcc_2017 | 2023-04-20T07:07:50.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:zh",
"license:openrail",
"region:us"
] | supremezxc | null | null | null | 1 | 24 | ---
license: openrail
task_categories:
- summarization
language:
- zh
pretty_name: NLPCC2017中文新闻数据集
size_categories:
- 10K<n<100K
--- |
Thaweewat/onet-m6-social | 2023-05-11T00:42:33.000Z | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:th",
"license:cc-by-sa-3.0",
"social",
"instruction-finetuning",
"region:us"
] | Thaweewat | null | null | null | 0 | 24 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- th
tags:
- social
- instruction-finetuning
pretty_name: onet-m6
size_categories:
- n<1K
---
# Summary
This is a question-answer dataset for the Grade 12 (M6) Social subject of the Thailand Ordinary National Educational Test (ONET).
The dataset was human-extracted by my team from the publicly available exams officially released by the [National Institute of Educational Testing Service](https://www.niets.or.th/th/catalog/view/630) during the years 2016-2022.
The exam consists of 510 multiple-choice questions with corresponding answer keys.
It is important to note that only two questions, Q71 and Q85, from the year 2018, require image interpretation, which is not available in this dataset's format.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
--- |
minoosh/IEMOCAP_Speech_dataset | 2023-05-16T11:58:34.000Z | [
"region:us"
] | minoosh | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: TURN_NAME
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: emotion
dtype:
class_label:
names:
'0': ang
'1': hap
'2': neu
'3': sad
splits:
- name: Session1
num_bytes: 165158903.64
num_examples: 1085
- name: Session2
num_bytes: 154202695.13
num_examples: 1023
- name: Session3
num_bytes: 158294386.59
num_examples: 1151
- name: Session4
num_bytes: 147780976.55
num_examples: 1031
- name: Session5
num_bytes: 170101711.098
num_examples: 1241
download_size: 788474562
dataset_size: 795538673.0080001
---
# Dataset Card for "IEMOCAP_Speech_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
skeskinen/TinyStories-hf | 2023-05-17T18:13:44.000Z | [
"arxiv:2305.07759",
"region:us"
] | skeskinen | null | null | null | 15 | 24 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1911420483
num_examples: 2119719
- name: validation
num_bytes: 19306310
num_examples: 21990
download_size: 1000775442
dataset_size: 1930726793
---
A description of this dataset can be found at https://arxiv.org/abs/2305.07759
Copied from roneneldan/TinyStories
Modified with:
```
import ftfy.bad_codecs
from datasets import Dataset, DatasetDict
train = open('./TinyStories-train.txt', 'r', encoding='sloppy-windows-1252').read()
train = train.split('<|endoftext|>')
train = [l.strip() for l in train]
valid = open('./TinyStories-valid.txt', 'r', encoding='sloppy-windows-1252').read()
valid = valid.split('<|endoftext|>')
valid = [l.strip() for l in valid]
dataset = DatasetDict({
'train': Dataset.from_dict({'text': train }),
'validation': Dataset.from_dict({'text': valid}),
})
dataset.save_to_disk('./TinyStories')
``` |
CVdatasets/food27 | 2023-05-18T20:53:43.000Z | [
"region:us"
] | CVdatasets | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple_pie
'1': beef_tartare
'2': beignets
'3': carrot_cake
'4': cheesecake
'5': cheese_plate
'6': chicken_wings
'7': chocolate_cake
'8': chocolate_mousse
'9': dumplings
'10': edamame
'11': filet_mignon
'12': french_fries
'13': fried_calamari
'14': guacamole
'15': ice_cream
'16': macarons
'17': miso_soup
'18': nachos
'19': onion_rings
'20': pizza
'21': poutine
'22': red_velvet_cake
'23': steak
'24': strawberry_shortcake
'25': tiramisu
'26': waffles
splits:
- name: train
num_bytes: 1010337492.0
num_examples: 20250
- name: validation
num_bytes: 334516930.25
num_examples: 6750
download_size: 1327834336
dataset_size: 1344854422.25
---
# Dataset Card for "food27"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Joemgu/sumstew | 2023-06-21T13:07:18.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"language:de",
"language:fr",
"language:it",
"language:es",
"license:apache-2.0",
"chemistry",
"biology",
"region:us"
] | Joemgu | null | null | null | 5 | 24 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: target
dtype: string
- name: input_tokens
dtype: int64
- name: target_tokens
dtype: int64
- name: subset
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 3338029493
num_examples: 187221
- name: validation
num_bytes: 218403099
num_examples: 14542
- name: test
num_bytes: 201638368
num_examples: 12467
download_size: 1982559322
dataset_size: 3758070960
task_categories:
- summarization
language:
- en
- de
- fr
- it
- es
size_categories:
- 100K<n<1M
license: apache-2.0
tags:
- chemistry
- biology
---
# Dataset Card for "sumstew"
## TL;DR:
Sumstew is an abstractive, multilingual dataset with a balanced number of samples drawn from a diverse set of summarization datasets. Input sizes range up to 16384 tokens.
It is filtered using a diverse set of heuristics to encourage high coverage, accuracy, and factual consistency. Code to reproduce the dataset is available at *TODO*
## Dataset Description
- **Dataset Identifier**: sumstew
- **Dataset Summary**: "SumStew" is a rich multilingual dataset for text summarization. It incorporates diverse data sources such as cnn_dailymail, samsum, mlsum (de, fr, es, it), klexikon, xlsum (fr, en, es), govreport, sciqa, piqa, pubmed_qa, multinews, laysum, booksum, dialogsum, fanpage (it), ilpost (it). This data has been curated by filtering based on n-gram overlap between the source and target documents and normalized to prevent undue bias. Every instance in this dataset is prefixed by an instruction (title, summary, or qa).
## Task Information
- **Task Categories**: The tasks covered by this dataset are primarily summarization tasks.
- **Languages**: This dataset supports multiple languages including English (en), German (de), French (fr), Italian (it), and Spanish (es).
## Dataset Structure
- **Data Instances**: Each data instance in the dataset comprises four fields - 'prompt', 'target', 'subset', and 'language'.
- 'prompt': The input text for the task. (dtype: string)
- 'target': The expected output for the task. (dtype: string)
- 'subset': The subset of the dataset the instance belongs to. (dtype: string)
- 'language': The language of the instance. (dtype: string)
- **Data Splits**: The dataset is split into three subsets:
- 'train' set: 187221 examples
- 'validation' set: 14542 examples
- 'test' set: 12467 examples
## Dataset Statistics
- **Max Document Length**: The maximum document length is 16384 mlong-t5 tokens.
- **Max Output Length**: The maximum output length is 1024 mlong-t5 tokens.
## Additional Information
- **Data Collection**: The data has been collected from a variety of sources spanning different languages and domains, ensuring a diverse and comprehensive dataset.
- **Data Cleaning**: The dataset has been filtered by checking the ngram overlap between the source and target document and dropping samples which have too much or too little overlap, and also through normalization.
- **Known Limitations**: As the dataset is generated from diverse sources, the inherent biases or limitations of those sources may persist in this dataset as well.
- **Usage Scenarios**: This dataset can be used for training and evaluating models on tasks like summarization and question-answering, in a multilingual context.
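The overlap-based filtering described under Data Cleaning can be sketched in a few lines; the thresholds, n-gram size and whitespace tokenization below are assumptions for illustration, not the exact settings used for this dataset:

```python
def ngrams(tokens, n):
    """Return the set of n-grams in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(source, target, n=3):
    """Fraction of target n-grams that also appear in the source."""
    src = ngrams(source.split(), n)
    tgt = ngrams(target.split(), n)
    if not tgt:
        return 0.0
    return len(tgt & src) / len(tgt)

def keep_sample(source, target, lo=0.1, hi=0.9, n=3):
    """Drop pairs whose overlap is suspiciously low (hallucination risk)
    or suspiciously high (near-extractive copying)."""
    r = overlap_ratio(source, target, n)
    return lo <= r <= hi
```

Pairs below the lower bound are likely hallucinated or off-topic summaries, while pairs above the upper bound are near-verbatim copies; neither teaches useful abstraction.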
## Credits
At this point I want to thank every creator of the underlying datasets (there are too many for me to count). If there are any issues concerning licensing or you want your data removed from the dataset, feel free to DM over Twitter (link in profile).
Special thanks to @pszemraj [https://huggingface.co/pszemraj] for the inspiration.
If interested in collaboration or consulting for your project, feel free to DM https://twitter.com/StutterBuddy |
PatrickHaller/wikitext-18-de | 2023-06-27T20:29:39.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:de",
"license:cc-by-sa-3.0",
"region:us"
] | PatrickHaller | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 138186439
num_examples: 2759
download_size: 79585645
dataset_size: 138186439
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- de
pretty_name: wikitext german
size_categories:
- 1K<n<10K
---
# Dataset Card for "wikitext-18-de"
## Dataset Summary
The dataset is a German variation of the [wikitext](https://huggingface.co/datasets/wikitext) dataset and is a collection of
ca. 18 million tokens. It follows the same approach by extracting from the "Good and Featured" articles on Wikipedia, but
for [German articles](https://en.wikipedia.org/wiki/Wikipedia:Featured_articles_in_other_languages/German). The dataset is
available under the Creative Commons Attribution-ShareAlike License.
The stated German version contains 2759 articles (visited: 27.06.23). Despite the smaller number of articles compared to wikitext,
the dataset contains 18 million whitespace-separated tokens, probably due to longer article lengths and the nature of the language.
The dataset retains the original case, punctuation, numbers and newlines, excluding images, tables and other data.
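The whitespace token count quoted above is straightforward to reproduce; a minimal sketch, where the miniature corpus is a stand-in for the `text` field of the 2759 real articles:

```python
def count_whitespace_tokens(articles):
    """Count whitespace-separated tokens across a list of article texts."""
    return sum(len(text.split()) for text in articles)

# Hypothetical miniature corpus standing in for the real articles.
corpus = ["Der schnelle braune Fuchs.", "Ein zweiter Artikel mit mehr Text."]
total = count_whitespace_tokens(corpus)
```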
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
knowrohit07/GPTscience_maths_csml | 2023-06-28T08:07:28.000Z | [
"license:other",
"region:us"
] | knowrohit07 | null | null | null | 1 | 24 | ---
license: other
---
|
iceberg-nlp/climabench | 2023-09-10T22:05:20.000Z | [
"task_categories:text-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"arxiv:2301.04253",
"region:us"
] | iceberg-nlp | The topic of Climate Change (CC) has received limited attention in NLP despite its real world urgency.
Activists and policy-makers need NLP tools in order to effectively process the vast and rapidly growing textual data produced on CC.
Their utility, however, primarily depends on whether the current state-of-the-art models can generalize across various tasks in the CC domain.
In order to address this gap, we introduce Climate Change Benchmark (Climabench), a benchmark collection of existing disparate datasets for evaluating model performance across a diverse set of CC NLU tasks systematically.
Further, we enhance the benchmark by releasing two large-scale labelled text classification and question-answering datasets curated from publicly available environmental disclosures.
Lastly, we provide an analysis of several generic and CC-oriented models answering whether fine-tuning on domain text offers any improvements across these tasks. We hope this work provides a standard assessment tool for research on CC text data. | @misc{laud2023Climabench,
title={ClimaBench: A Benchmark Dataset For Climate Change Text Understanding in English},
author={Tanmay Laud and Daniel Spokoyny and Tom Corringham and Taylor Berg-Kirkpatrick},
year={2023},
eprint={2301.04253},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 24 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
paperswithcode_id: climabench
pretty_name: "ClimaBench: A Benchmark Dataset For Climate Change Text Understanding in English"
config_names:
- climate_stance
- climate_eng
- climate_fever
- climatext
- clima_insurance
- clima_insurance_plus
- clima_cdp
- clima_qa
---
### Citation Information
```
@misc{spokoyny2023answering,
title={Towards Answering Climate Questionnaires from Unstructured Climate Reports},
author={Daniel Spokoyny and Tanmay Laud and Tom Corringham and Taylor Berg-Kirkpatrick},
year={2023},
eprint={2301.04253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Den4ikAI/russian_dialogues_2 | 2023-07-16T12:09:36.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:ru",
"license:mit",
"region:us"
] | Den4ikAI | Russian dialogues dataset | null | null | 0 | 24 | ---
license: mit
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- ru
size_categories:
- 1M<n<10M
---
### Den4ikAI/russian_dialogues_2
A dataset of Russian dialogues for training conversational models.
Number of dialogues: 1.6 million.
Dataset format:
```
{
'sample': ['Привет', 'Привет', 'Как дела?']
}
```
### Citation:
```
@MISC{russian_instructions,
author = {Denis Petrov},
title = {Russian context dialogues dataset for conversational agents},
url = {https://huggingface.co/datasets/Den4ikAI/russian_dialogues_2},
year = 2023
}
``` |
DynamicSuperb/EnvironmentalSoundClassification_ESC50-Animals | 2023-07-12T06:06:28.000Z | [
"region:us"
] | DynamicSuperb | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 176489932.0
num_examples: 400
download_size: 153702542
dataset_size: 176489932.0
---
# Dataset Card for "environmental_sound_classification_animals_ESC50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
katielink/healthsearchqa | 2023-08-24T21:40:08.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"medical",
"arxiv:2212.13138",
"region:us"
] | katielink | null | null | null | 1 | 24 | ---
license: unknown
task_categories:
- question-answering
language:
- en
tags:
- medical
configs:
- config_name: all_data
data_files: all.csv
- config_name: 140_question_subset
data_files: multimedqa140_subset.csv
size_categories:
- 1K<n<10K
---
# HealthSearchQA
Dataset of consumer health questions released by Google for the Med-PaLM paper ([arXiv preprint](https://arxiv.org/abs/2212.13138)).
From the [paper](https://www.nature.com/articles/s41586-023-06291-2):
We curated our own additional dataset consisting of 3,173 commonly searched consumer questions,
referred to as HealthSearchQA. The dataset was curated using seed medical conditions and their
associated symptoms. We used the seed data to retrieve publicly-available commonly searched questions
generated by a search engine, which were displayed to all users entering the seed terms. We publish the
dataset as an open benchmark for answering medical questions from consumers and hope this will be a useful
resource for the community, as a dataset reflecting real-world consumer concerns.
**Format:** Question only, free text response, open domain.
**Size:** 3,173.
**Example question:** How serious is atrial fibrillation?
**Example question:** What kind of cough comes with Covid?
**Example question:** Is blood in phlegm serious? |
bhuvi/bcorp_web | 2023-08-10T09:53:01.000Z | [
"language:en",
"region:us"
] | bhuvi | null | null | null | 0 | 24 | ---
language:
- en
pretty_name: BCorp Web Data
---
### Dataset Summary
This dataset contains web text crawled using [Hyphe](https://github.com/medialab/hyphe) on the [B Corp](https://www.bcorporation.net/en-us/) website. Hyphe found more than 1000
outlinks from the B Corp website, among which many entities were B Corp certified organisations. This dataset contains the web text for those organisations. The list of B Corp
certified organisations is dynamic, so we only selected around 600 organisations for this dataset. There are no specific criteria for this selection.
### Languages
Primarily English, but contains web data in French and Spanish as well.
## Dataset Structure
### Data Instances
An instance of data contains an organisation name certified by B Corp, its web text, a list of other B Corp certified organisations it is connected with, and the sector
it belongs to.
### Data Fields
- 'name': name of the organisation
- 'text': web text
- 'rel': list of certified B Corp organisations mentioned in the web text of the parent organisation
- 'shape': working sector the organisation belongs to
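Because the 'rel' field lists connected organisations, the dataset doubles as a graph. A minimal sketch of extracting an edge list (the sample records are hypothetical, not real entries):

```python
def build_edges(records):
    """Collect directed edges (org -> mentioned certified org) from 'rel' lists."""
    edges = []
    for rec in records:
        for other in rec.get("rel", []):
            edges.append((rec["name"], other))
    return edges

records = [
    {"name": "OrgA", "text": "...", "rel": ["OrgB", "OrgC"], "shape": "Retail"},
    {"name": "OrgB", "text": "...", "rel": [], "shape": "Finance"},
]
edges = build_edges(records)
```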
### Data Splits
There is only one data split: 'train'.
|
disham993/alpaca-train-validation-test-split | 2023-08-11T22:30:09.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"region:us"
] | disham993 | null | null | null | 0 | 24 | ---
language:
- en
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: Alpaca
tags:
- instruction-finetuning
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33409057
num_examples: 36401
- name: validation
num_bytes: 7159137
num_examples: 7801
- name: test
num_bytes: 7196544
num_examples: 7800
download_size: 24523957
dataset_size: 47764738
---
# Dataset Card for Alpaca
I have performed a train, validation and test split on the original dataset. A repository to reproduce this will be shared here soon. I am including the original dataset card below.
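A split like the one above can be reproduced with a seeded shuffle; this is a rough sketch with assumed 70/15/15 proportions and an assumed seed — the exact recipe for this dataset has not been published yet:

```python
import random

def train_val_test_split(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Deterministically shuffle and partition a list of examples."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_val = int(n * val_frac)
    n_test = int(n * test_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(100))
```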
## Dataset Description
- **Homepage:** https://crfm.stanford.edu/2023/03/13/alpaca.html
- **Repository:** https://github.com/tatsu-lab/stanford_alpaca
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Rohan Taori
### Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make the language model follow instructions better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Alpaca are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
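The `text` field in the example above is just the other three fields rendered through the prompt template. A sketch of that rendering, reconstructed from the example rather than copied from the official repository (examples without an `input` use a shorter template variant, omitted here):

```python
TEMPLATE_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def render(example):
    """Render the 'text' field from instruction/input/output."""
    return TEMPLATE_WITH_INPUT.format(**example)
```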
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
### Data Splits
| | train |
|---------------|------:|
| alpaca | 52002 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpted from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. 
> Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed] |
nikchar/retrieved_claims_test | 2023-08-31T14:58:13.000Z | [
"region:us"
] | nikchar | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_wiki_url
dtype: string
- name: retrieved_evidence
sequence: string
- name: retrieval_score
sequence: float64
- name: id
dtype: string
- name: text
dtype: string
- name: lines
dtype: string
splits:
- name: train
num_bytes: 6050543
num_examples: 1500
download_size: 2972631
dataset_size: 6050543
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "retrieved_claims_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Arabic-Clip/mscoco_jsonl_full | 2023-09-05T10:21:54.000Z | [
"region:us"
] | Arabic-Clip | null | null | null | 0 | 24 |
To load the dataset on your local device:
```py
# Local loading
from datasets import load_dataset
dataset = load_dataset("/home/think3/Desktop/1. MSCOCO_captions_dataset_edited/dataset_test_jsonl/ImageCaptions.py", split='train[{}:]'.format(1),cache_dir="dataset_test_jsonl/caching")
```
To load the dataset from Huggingface:
```py
# Test the remote repo:
from datasets import load_dataset
dataset = load_dataset("Arabic-Clip/mscoco_jsonl_full", split='train[:]', cache_dir="cache/remote")
``` |
p1atdev/oiocha | 2023-09-18T05:59:05.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:ja",
"license:mit",
"haiku",
"region:us"
] | p1atdev | null | null | null | 0 | 24 | ---
license: mit
size_categories:
- n<1K
task_categories:
- text-generation
language:
- ja
tags:
- haiku
---
Dataset of award-winning works from the Oi Ocha New Haiku Grand Prize (お~いお茶新俳句大賞).
- Contains 221 haiku, of which roughly 200 come with comments from the author and the judges.
Data obtained from https://itoen-shinhaiku.jp/.
### Data Structure
- `title`: name of the competition edition (e.g. `第三回`, "the third")
- `ordinal`: ordinal number of the competition edition the work won (e.g. `3` for `第三回`)
- `award`: the award received
- `haiku`: the text of the haiku
- `translation`: Japanese translation, for haiku whose text is in English
- `language`: language of the haiku; `ja` for Japanese, `en` for English
- `comment`: the author's commentary on the haiku
- `review`: the judges' appraisal of the haiku
- `image_pc`: URL of the larger, PC-sized image, if an image is attached
- `image_sp`: URL of the smaller, smartphone-sized image, if an image is attached
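Given the fields above, a small sketch of selecting records that carry both an author comment and a judge review (the sample records are placeholders, not real entries):

```python
def with_commentary(records):
    """Keep records that carry both an author comment and a judge review."""
    return [r for r in records if r.get("comment") and r.get("review")]

records = [
    {"haiku": "...", "language": "ja", "comment": "...", "review": "..."},
    {"haiku": "...", "language": "en", "comment": None, "review": None},
]
annotated = with_commentary(records)
```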
|
Nikhil-trustt/apicode | 2023-09-12T11:01:54.000Z | [
"region:us"
] | Nikhil-trustt | null | null | null | 0 | 24 | Entry not found |
Varun1808/llama_cobol | 2023-09-13T18:38:10.000Z | [
"region:us"
] | Varun1808 | null | null | null | 0 | 24 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: cobol
dtype: string
- name: rule
dtype: string
- name: train_column
dtype: string
splits:
- name: train
num_bytes: 7207806
num_examples: 410
download_size: 1648533
dataset_size: 7207806
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama_cobol"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |