id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
alexandrainst/ddisco | 2023-02-08T18:12:26.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:da",
"license:afl-3.0",
"discourse",
"coherence",
"region:us"
] | alexandrainst | null | null | 1 | 120 | 2023-02-08T18:05:24 | ---
annotations_creators:
- expert-generated
language:
- da
language_creators:
- expert-generated
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: DDisco
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- discourse
- coherence
task_categories:
- text-classification
task_ids: []
dataset_info:
features:
- name: text
dtype: string
- name: domain
dtype: string
- name: rating
dtype: int64
splits:
- name: train
num_bytes: 815571
num_examples: 801
- name: test
num_bytes: 209297
num_examples: 201
download_size: 672202
dataset_size: 1024868
---
# Dataset Card for DDisco
## Dataset Description
The DDisco dataset is a dataset which can be used to train models to classify levels of coherence in _danish_ discourse. Each entry in the dataset is annotated with a discourse coherence label (rating from 1 to 3):
1: low coherence (difficult to understand, unorganized, contained unnecessary details and can not be summarized briefly and easily)
2: medium coherence
3: high coherence (easy to understand, well organized, only contain details that support the main point and can be summarized briefly and easily).
Grammatical and typing errors are ignored (i.e. they do not affect the coherency score) and the coherence of a text is considered within its own domain.
### Additional Information
[DDisCo: A Discourse Coherence Dataset for Danish](https://aclanthology.org/2022.lrec-1.260.pdf)
### Contributions
[@ajders](https://github.com/ajders) | 1,509 | [
[
-0.039581298828125,
-0.04229736328125,
0.028717041015625,
0.00472259521484375,
-0.026031494140625,
0.0130462646484375,
0.00467681884765625,
-0.02471923828125,
0.00878143310546875,
0.02484130859375,
-0.02178955078125,
-0.077392578125,
-0.0518798828125,
0.0210... |
SaylorTwift/the_pile_books3_minus_gutenberg | 2023-03-03T19:46:43.000Z | [
"region:us"
] | SaylorTwift | null | null | 4 | 120 | 2023-03-03T18:44:35 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: first_name
dtype: string
- name: last_name
dtype: string
splits:
- name: train
num_bytes: 106199627990.47722
num_examples: 192661
download_size: 63006723975
dataset_size: 106199627990.47722
---
# Dataset Card for "the_pile_books3_minus_gutenberg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 512 | [
[
-0.054107666015625,
-0.0066680908203125,
0.006793975830078125,
0.0022678375244140625,
-0.0182952880859375,
-0.014862060546875,
0.024658203125,
-0.0082855224609375,
0.048919677734375,
0.0548095703125,
-0.0457763671875,
-0.056732177734375,
-0.04827880859375,
-... |
d0rj/samsum-ru | 2023-05-13T06:44:23.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:translated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:samsum",
"language:ru",
"license:cc-by-nc-nd-4.0",
"conversations-summarization",
"arxiv:1911.12237",
"region:us... | d0rj | null | null | 3 | 120 | 2023-05-08T08:57:36 | ---
annotations_creators:
- expert-generated
language_creators:
- translated
language:
- ru
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- samsum
task_categories:
- summarization
task_ids: []
pretty_name: SAMSum Corpus (ru)
tags:
- conversations-summarization
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 8598724
num_examples: 14731
- name: validation
num_bytes: 471632
num_examples: 818
- name: test
num_bytes: 483686
num_examples: 819
dataset_size: 9554042
train-eval-index:
- config: samsum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
dialogue: text
summary: target
---
# Dataset Card for SAMSum Corpus (ru)
## Dataset Description
Translated [samsum](https://huggingface.co/datasets/samsum) dataset to russian language.
### Notes
> Row with ID **13828807** was deleted.
### Links
- **Homepage:** hhttps://arxiv.org/abs/1911.12237v2
- **Repository:** https://arxiv.org/abs/1911.12237v2
- **Paper:** https://arxiv.org/abs/1911.12237v2
### Languages
Russian (translated from English [samsum](https://huggingface.co/datasets/samsum) using Google Translator)
## Dataset Structure
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 14731
- val: 818
- test: 819
## Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
## Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
``` | 2,209 | [
[
-0.0106048583984375,
-0.033935546875,
0.00870513916015625,
0.008941650390625,
-0.04254150390625,
-0.0030689239501953125,
-0.01959228515625,
-0.00998687744140625,
0.037445068359375,
0.028289794921875,
-0.046539306640625,
-0.06671142578125,
-0.034027099609375,
... |
alzoubi36/policy_ie_a | 2023-06-24T07:20:44.000Z | [
"region:us"
] | alzoubi36 | null | null | 0 | 120 | 2023-06-24T07:16:05 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 592707
num_examples: 4109
- name: validation
num_bytes: 16114
num_examples: 100
- name: test
num_bytes: 163819
num_examples: 1041
download_size: 364376
dataset_size: 772640
---
# Dataset for the PolicyIE-A task in the [PrivacyGLUE](https://github.com/infsys-lab/privacy-glue) dataset
| 449 | [
[
-0.0098876953125,
-0.0178985595703125,
0.0046844482421875,
0.006702423095703125,
0.035797119140625,
0.0106658935546875,
0.0160369873046875,
0.005786895751953125,
0.032958984375,
0.0528564453125,
-0.07391357421875,
-0.0584716796875,
-0.0200653076171875,
-0.03... |
argilla/emotion | 2023-08-23T06:37:14.000Z | [
"size_categories:10K<n<100K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | 0 | 120 | 2023-08-23T06:33:42 | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for emotion
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/emotion")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/emotion")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/data_model.html) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | TextField | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, single choice, or multiple choice.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | Label | LabelQuestion | True | N/A | ['0', '1', '2', '3', '4', '5'] |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions, and so on, named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"fields": {
"text": "i didnt feel humiliated"
},
"metadata": {
"split": "train"
},
"responses": [
{
"status": "submitted",
"values": {
"label": {
"value": "0"
}
}
}
],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"label": [
{
"status": "submitted",
"user_id": null,
"value": "0"
}
],
"label-suggestion": null,
"label-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"metadata": "{\"split\": \"train\"}",
"text": "i didnt feel humiliated"
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are suppported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `TextField`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **label** is of type `LabelQuestion` with the following allowed values ['0', '1', '2', '3', '4', '5'].
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **label-suggestion** is of type `label_selection` with the following allowed values ['0', '1', '2', '3', '4', '5'].
Additionally, we also have one more field which is optional and is the following:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Argilla port of [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion).
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 6,935 | [
[
-0.059234619140625,
-0.06298828125,
0.0202484130859375,
0.027099609375,
-0.021148681640625,
-0.032135009765625,
-0.0024280548095703125,
-0.042083740234375,
0.053863525390625,
0.049102783203125,
-0.060760498046875,
-0.06683349609375,
-0.049713134765625,
0.024... |
liyucheng/arc_test | 2023-10-17T16:22:00.000Z | [
"region:us"
] | liyucheng | null | null | 0 | 120 | 2023-10-17T16:21:57 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: test
num_bytes: 375511
num_examples: 1172
download_size: 203808
dataset_size: 375511
---
# Dataset Card for "arc_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 526 | [
[
-0.0555419921875,
-0.0257568359375,
-0.0029773712158203125,
0.0163726806640625,
-0.0015611648559570312,
0.00429534912109375,
0.023712158203125,
-0.01102447509765625,
0.05120849609375,
0.03564453125,
-0.05419921875,
-0.046539306640625,
-0.0274200439453125,
-0... |
thainer | 2023-01-25T14:45:41.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:... | null | ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence
[unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by
[Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/).
It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp).
The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)]((http://pioneer.chula.ac.th/~awirote/publications/))
for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/).
The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`.
[@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset. | @misc{Wannaphong Phatthiyaphaibun_2019,
title={wannaphongcom/thai-ner: ThaiNER 1.3},
url={https://zenodo.org/record/3550546},
DOI={10.5281/ZENODO.3550546},
abstractNote={Thai Named Entity Recognition},
publisher={Zenodo},
author={Wannaphong Phatthiyaphaibun},
year={2019},
month={Nov}
} | 1 | 119 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- found
- expert-generated
language:
- th
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-tirasaroj-aroonmanakun
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
pretty_name: thainer
dataset_info:
features:
- name: id
dtype: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': NOUN
'7': NUM
'8': PART
'9': PRON
'10': PROPN
'11': PUNCT
'12': SCONJ
'13': VERB
- name: ner_tags
sequence:
class_label:
names:
'0': B-DATE
'1': B-EMAIL
'2': B-LAW
'3': B-LEN
'4': B-LOCATION
'5': B-MONEY
'6': B-ORGANIZATION
'7': B-PERCENT
'8': B-PERSON
'9': B-PHONE
'10': B-TIME
'11': B-URL
'12': B-ZIP
'13': B-ไม่ยืนยัน
'14': I-DATE
'15': I-EMAIL
'16': I-LAW
'17': I-LEN
'18': I-LOCATION
'19': I-MONEY
'20': I-ORGANIZATION
'21': I-PERCENT
'22': I-PERSON
'23': I-PHONE
'24': I-TIME
'25': I-URL
'26': I-ไม่ยืนยัน
'27': O
config_name: thainer
splits:
- name: train
num_bytes: 8117902
num_examples: 6348
download_size: 5456461
dataset_size: 8117902
---
# Dataset Card for `thainer`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/wannaphong/thai-ner
- **Repository:** https://github.com/wannaphong/thai-ner
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://github.com/wannaphong/
### Dataset Summary
ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)]((http://pioneer.chula.ac.th/~awirote/publications/)) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.
### Supported Tasks and Leaderboards
- named entity recognition
- pos tagging
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'id': 100, 'ner_tags': [27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27], 'pos_tags': [6, 12, 13, 1, 6, 5, 11, 7, 11, 6, 5, 13, 6, 6, 6, 11, 6, 6, 11, 6, 6, 11, 6, 6, 13, 6, 11, 11, 6, 11, 6, 11, 6, 11, 6, 11, 11, 6, 6, 11, 12, 6, 13, 5, 11, 7, 11, 6, 3, 11, 12, 3, 13, 6, 1, 6, 12, 13, 1, 6, 6, 5, 11, 3, 11, 5, 4, 6, 13, 6, 13, 6, 10, 3, 13, 13, 12, 13, 12, 0, 1, 10, 11, 6, 6, 11, 6, 11, 6, 12, 13, 5, 12, 3, 13, 13, 1, 6, 1, 6, 13], 'tokens': ['เชื้อโรค', 'ที่', 'ปรากฏ', 'ใน', 'สัตว์', 'ทั้ง', ' ', '4', ' ', 'ชนิด', 'นี้', 'เป็น', 'เชื้อ', 'โรคไข้หวัด', 'นก', ' ', 'เอช', 'พี', ' ', 'เอ', 'เวียน', ' ', 'อิน', 'ฟลู', 'เอน', 'ซา', ' ', '(', 'Hight', ' ', 'Polygenic', ' ', 'Avain', ' ', 'Influenza', ')', ' ', 'ชนิด', 'รุนแรง', ' ', 'ซึ่ง', 'การ', 'ตั้งชื่อ', 'ทั้ง', ' ', '4', ' ', 'ขึ้น', 'มา', ' ', 'เพื่อที่จะ', 'สามารถ', 'ระบุ', 'เชื้อ', 'ของ', 'ไวรัส', 'ที่', 'ทำอันตราย', 'ตาม', 'สิ่งมีชีวิต', 'ประเภท', 'ต่างๆ', ' ', 'ได้', ' ', 'อีก', 'ทั้ง', 'การ', 'ระบุ', 'สถานที่', 'คือ', 'ประเทศ', 'ไทย', 'จะ', 'ทำให้', 'รู้', 'ว่า', 'พบ', 'ที่', 'แรก', 'ใน', 'ไทย', ' ', 'ส่วน', 'วัน', ' ', 'เดือน', ' ', 'ปี', 'ที่', 'พบ', 'นั้น', 'ก็', 'จะ', 'ทำให้', 'ทราบ', 'ถึง', 'ครั้งแรก', 'ของ', 'การ', 'ค้นพบ']}
{'id': 107, 'ner_tags': [27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27], 'pos_tags': [0, 1, 6, 5, 11, 12, 3, 3, 13, 6, 13, 12, 0, 2, 12, 11, 6, 5, 13, 6, 5, 1, 6, 6, 1, 10, 11, 4, 13, 6, 11, 12, 6, 6, 10, 11, 13, 6, 1, 6, 4, 6, 1, 6, 6, 11, 4, 6, 1, 5, 6, 12, 2, 13, 6, 6, 5, 1, 11, 12, 13, 1, 6, 6, 11, 13, 11, 6, 6, 6, 11, 11, 6, 11, 11, 4, 10, 11, 11, 6, 11], 'tokens': ['ล่าสุด', 'ใน', 'เรื่อง', 'นี้', ' ', 'ทั้งนี้', 'คง', 'ต้อง', 'มี', 'การ', 'ตรวจสอบ', 'ให้', 'ชัดเจน', 'อีกครั้ง', 'ว่า', ' ', 'ไวรัส', 'นี้', 'เป็น', 'ชนิด', 'เดียว', 'กับ', 'ไข้หวัด', 'นก', 'ใน', 'ไทย', ' ', 'หรือ', 'เป็น', 'การกลายพันธุ์', ' ', 'โดยที่', 'คณะ', 'สัตวแพทย์', 'มหาวิทยาลัยเกษตรศาสตร์', ' ', 'จัด', 'ระดมสมอง', 'จาก', 'คณบดี', 'และ', 'ผู้เชี่ยวชาญ', 'จาก', 'คณะ', 'สัตวแพทย์', ' ', 'และ', 'ปศุสัตว์', 'ของ', 'หลาย', 'มหาวิทยาลัย', 'เพื่อ', 'ร่วมกัน', 'หา', 'ข้อมูล', 'เรื่อง', 'นี้', 'ด้วย', ' ', 'โดย', 'ประสาน', 'กับ', 'เจ้าหน้าที่', 'ระหว่างประเทศ', ' ', 'คือ', ' ', 'องค์การ', 'สุขภาพ', 'สัตว์โลก', ' ', '(', 'OIE', ')', ' ', 'และ', 'องค์การอนามัยโลก', ' ', '(', 'WHO', ')']}
```
### Data Fields
- `id`: sentence id
- `tokens`: word tokens by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s dictionary-based tokenizer `newmm`
- `pos_tags`: POS tags tagged by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`
- `ner_tags`: NER tags tagged by humans
### Data Splits
No explicit split is given
## Dataset Creation
### Curation Rationale
ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp).
### Source Data
#### Initial Data Collection and Normalization
The earlier part of the dataset is all news articles, whereas the part added by [@wannaphong](https://github.com/wannaphong/) includes news articles, public announcements and [@wannaphong](https://github.com/wannaphong/)'s own chat messages with personal and sensitive information removed.
#### Who are the source language producers?
News articles and public announcements are created by their respective authors. Chat messages are created by [@wannaphong](https://github.com/wannaphong/).
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for the earlier 2,258 sentences and [@wannaphong](https://github.com/wannaphong/) for the rest
### Personal and Sensitive Information
News articles and public announcements are not expected to include personal and sensitive information. [@wannaphong](https://github.com/wannaphong/) has removed such information from his own chat messages.
## Considerations for Using the Data
### Social Impact of Dataset
- named entity recognition in Thai
### Discussion of Biases
Since almost all of collection and annotation is done by [@wannaphong](https://github.com/wannaphong/), his biases are expected to be reflected in the dataset.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for the earlier 2,258 sentences and [@wannaphong](https://github.com/wannaphong/) for the rest
### Licensing Information
CC-BY 3.0
### Citation Information
```
@misc{Wannaphong Phatthiyaphaibun_2019,
title={wannaphongcom/thai-ner: ThaiNER 1.3},
url={https://zenodo.org/record/3550546},
DOI={10.5281/ZENODO.3550546},
abstractNote={Thai Named Entity Recognition},
publisher={Zenodo},
author={Wannaphong Phatthiyaphaibun},
year={2019},
month={Nov}
}
```
Work extended from:
[Tirasaroj, N. and Aroonmanakun, W. 2012. Thai NER using CRF model based on surface features. In Proceedings of SNLP-AOS 2011, 9-10 February, 2012, Bangkok, pages 176-180.](http://pioneer.chula.ac.th/~awirote/publications/)
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. | 10,052 | [
[
-0.04888916015625,
-0.034088134765625,
0.005123138427734375,
0.01544952392578125,
-0.024810791015625,
0.00020813941955566406,
-0.010986328125,
-0.022552490234375,
0.050445556640625,
0.0272064208984375,
-0.03375244140625,
-0.0556640625,
-0.0305328369140625,
0... |
LeoCordoba/CC-NEWS-ES-titles | 2023-02-23T21:53:46.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:cc-news",
"language:es",
"license:mit",
"conditional-text-generation",
"region:us"
] | LeoCordoba | null | 2 | 119 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- es
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- cc-news
task_categories:
- summarization
- text-generation
task_ids: []
tags:
- conditional-text-generation
---
# Dataset Card for CC-NEWS-ES-titles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [CC-NEWS-ES-titles dataset repository](https://huggingface.co/datasets/LeoCordoba/CC-NEWS-ES-titles)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/)
### Dataset Summary
CC-NEWS-ES-titles is a Spanish-language dataset for news titles generation. The text and titles comes from 2019 and 2020 CC-NEWS data (which is part of Common Crawl).
It contains 402.310 pairs of news title and body, splitted in :
- Train: 370.125
- Eval: 16.092
- Test: 16.092
### Supported Tasks and Leaderboards
- `text-classification`, `sentiment-classification`: The dataset can be used to train a model for news title generation which can be considered a subset of abstractive summarization.
### Languages
The text is in Spanish. The BCP-47 code for Spanish is es.
## Dataset Structure
### Data Instances
Each data instance contains the following features: _text_ and _output_text_.
- _text_ is the body of the news.
- _output_text_ is the title of the news.
An example from the CC-NEWS-ES-titles train set looks like the following:
```
{'text': 'Hoy en el Boletín Oficial también se publicó la disposición para universidades, institutos universitarios y de educación superior de todas las jurisdicciones, a las que recomienda que "adecúen las condiciones en que se desarrolla la actividad académica presencial en el marco de la emergencia conforme con las recomendaciones del Ministerio de Salud", según lo publicado por la agencia ',
'output_text': 'Coronavirus: "Seguimos educando", la plataforma online para que los chicos estudien en cuarentena'}
```
### Data Fields
- 'text': a string containing the body of the news.
- 'output_text': a string containing the title of the news.
### Data Splits
The CC-NEWS-ES-titles dataset has 3 splits: _train_, _validation_, and _test_. The splits contain disjoint sets of news.
| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train | 370.125 |
| Eval | 16.092 |
| Test | 16.092 |
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
#### Initial Data Collection and Normalization
TODO
#### Who are the source language producers?
Common Crawl: https://commoncrawl.org/
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Abstractive summarization is a complex task and Spanish is a underrepresented language in the NLP domain. As a consequence, adding a Spanish resource may help others to improve their research and educational activities.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
This dataset is maintained by [Leonardo Ignacio Córdoba](https://www.linkedin.com/in/leonardo-ignacio-c%C3%B3rdoba/) and was built with the help of [María Gaska](https://www.linkedin.com/in/mfgaska/).
### Licensing Information
[N/A]
### Citation Information
TODO
### Contributions
[N/A] | 4,700 | [
[
-0.029937744140625,
-0.035491943359375,
0.017120361328125,
0.0297088623046875,
-0.037506103515625,
0.02227783203125,
-0.0306549072265625,
-0.0199127197265625,
0.0400390625,
0.0294189453125,
-0.0555419921875,
-0.0841064453125,
-0.0404052734375,
0.017486572265... | |
saattrupdan/doc-nli | 2022-04-26T18:44:14.000Z | [
"region:us"
] | saattrupdan | null | null | 2 | 119 | 2022-04-26T18:32:39 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
AlekseyKorshuk/fiction-books | 2022-06-12T05:29:38.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 3 | 119 | 2022-06-12T05:29:30 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
fabiochiu/medium-articles | 2022-07-17T15:17:09.000Z | [
"license:mit",
"region:us"
] | fabiochiu | null | null | 5 | 119 | 2022-07-16T15:34:11 | ---
license: mit
---
# Data source
This data has been collected through a standard scraping process from the [Medium website](https://medium.com/), looking for published articles.
# Data description
Each row in the data is a different article published on Medium. For each article, you have the following features:
- **title** *[string]*: The title of the article.
- **text** *[string]*: The text content of the article.
- **url** *[string]*: The URL associated to the article.
- **authors** *[list of strings]*: The article authors.
- **timestamp** *[string]*: The publication datetime of the article.
- **tags** *[list of strings]*: List of tags associated to the article.
# Data analysis
You can find a very quick data analysis in this [notebook](https://www.kaggle.com/code/fabiochiusano/medium-articles-simple-data-analysis).
# What can I do with this data?
- A multilabel classification model that assigns tags to articles.
- A seq2seq model that generates article titles.
- Text analysis.
- Finetune text generation models on the general domain of Medium, or on specific domains by filtering articles by the appropriate tags.
# Collection methodology
Scraping has been done with Python and the requests library. Starting from a random article on Medium, the next articles to scrape are selected by visiting:
1. The author archive pages.
2. The publication archive pages (if present).
3. The tags archives (if present).
The article HTML pages have been parsed with the [newspaper Python library](https://github.com/codelucas/newspaper).
Published articles have been filtered for English articles only, using the Python [langdetect library](https://pypi.org/project/langdetect/).
As a consequence of the collection methodology, the scraped articles are coming from a not uniform publication date distribution. This means that there are articles published in 2016 and in 2022, but the number of articles in this dataset published in 2016 is not the same as the number of articles published in 2022. In particular, there is a strong prevalence of articles published in 2020. Have a look at the [accompanying notebook](https://www.kaggle.com/code/fabiochiusano/medium-articles-simple-data-analysis) to see the distribution of the publication dates. | 2,258 | [
[
-0.02227783203125,
-0.05755615234375,
0.0390625,
0.016693115234375,
-0.019500732421875,
0.0023632049560546875,
-0.01009368896484375,
-0.047271728515625,
0.0457763671875,
0.0240325927734375,
-0.029449462890625,
-0.040557861328125,
-0.030670166015625,
0.024993... |
lucadiliello/wikipedia_512_pretraining | 2023-03-24T08:03:19.000Z | [
"size_categories:1M<n<10M",
"language:en",
"region:us"
] | lucadiliello | null | null | 1 | 119 | 2023-02-24T18:40:57 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9828026640.785877
num_examples: 6699666
- name: dev
num_bytes: 146694277.60706097
num_examples: 100000
- name: test
num_bytes: 146694277.60706097
num_examples: 100000
download_size: 6454536577
dataset_size: 10121415196
language:
- en
pretty_name: Wikipedia preprocessed for 512 tokens pretraining.
size_categories:
- 1M<n<10M
---
# Dataset Card for "wikipedia_512_pretraining"
Wikipedia preprocessed for pretraining of models. Each sample in the dataset has an average tokenized length of 512 `RoBERTa-Base` tokens. | 648 | [
[
-0.04864501953125,
-0.022613525390625,
0.0032825469970703125,
-0.0095977783203125,
-0.055511474609375,
-0.01317596435546875,
-0.0157470703125,
-0.0110626220703125,
0.0355224609375,
0.0194549560546875,
-0.0447998046875,
-0.050262451171875,
-0.03009033203125,
... |
bertin-project/alpaca-spanish | 2023-03-24T11:38:19.000Z | [
"task_categories:text-generation",
"language:es",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] | bertin-project | null | null | 19 | 119 | 2023-03-20T11:51:06 | ---
license: cc-by-4.0
language:
- es
tags:
- instruction-finetuning
pretty_name: BERTIN Alpaca Spanish
task_categories:
- text-generation
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 21439975
num_examples: 51942
download_size: 13178075
dataset_size: 21439975
---
# BERTIN Alpaca Spanish
This dataset is a translation to Spanish of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json), a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca).
An [earlier version](https://huggingface.co/datasets/bertin-project/alpaca-spanish/blob/main/nllb/spa_train.json.gz) used [Facebook's NLLB 1.3B model](https://huggingface.co/facebook/nllb-200-1.3B), but the current version uses OpenAI's `gpt-3.5-turbo`, hence this dataset cannot be used to create models that compete in any way against OpenAI. | 1,028 | [
[
-0.036163330078125,
-0.048492431640625,
0.0062103271484375,
0.04364013671875,
-0.030426025390625,
-0.01493072509765625,
-0.0013217926025390625,
-0.060302734375,
0.055633544921875,
0.039764404296875,
-0.06256103515625,
-0.040130615234375,
-0.037750244140625,
... |
gigant/tib | 2023-09-25T12:05:25.000Z | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | gigant | null | null | 0 | 119 | 2023-04-11T15:22:07 | ---
dataset_info:
features:
- name: doi
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: video_url
dtype: string
- name: license
dtype: string
- name: subject
dtype: string
- name: genre
dtype: string
- name: release_year
dtype: string
- name: author
dtype: string
- name: contributors
dtype: string
- name: abstract
dtype: string
- name: transcript
dtype: string
- name: transcript_segments
sequence:
- name: id
dtype: int32
- name: seek
dtype: int32
- name: start
dtype: float32
- name: end
dtype: float32
- name: text
dtype: string
- name: tokens
sequence: int32
- name: temperature
dtype: float32
- name: avg_logprob
dtype: float32
- name: compression_ratio
dtype: float32
- name: no_speech_prob
dtype: float32
- name: keyframes
sequence:
- name: slide
dtype: string
- name: frames
sequence: int32
- name: timestamp
sequence: float32
- name: language
dtype: string
splits:
- name: valid
num_bytes: 101380279
num_examples: 910
- name: train
num_bytes: 827555875
num_examples: 7282
- name: test
num_bytes: 102396941
num_examples: 911
download_size: 502166165
dataset_size: 1031333095
task_categories:
- summarization
language:
- en
pretty_name: "TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records"
size_categories:
- 1K<n<10K
pinned: True
---
# Dataset Card for "TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Description
- **Homepage:** [Dataset page](https://huggingface.co/datasets/gigant/tib)
- **Repository:** [Dataset page](https://huggingface.co/datasets/gigant/tib)
- **Paper:** [TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records
](https://hal.science/hal-04168911)
- **Point of Contact:** [Théo Gigant](mailto:theo.gigant@l2s.centralesupelec.fr)
## Dataset Summary
TIB is an English dataset for abstractive summarization of multimodal presentations, introduced in [*TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records*
](https://hal.science/hal-04168911).
It is a collection of 9,103 videoconference records extracted from the German National Library of Science and Technology (TIB) archive, along with their metadata, an abstract and automatically processed transcripts and key frames.
### Supported Tasks and Leaderboards
- `summarization`
### Languages
The text in the dataset is in English, both for the transcripted audios and the abstracts.
## Usage
To use within the [`datasets`](https://github.com/huggingface/datasets) library:
```python
from datasets import load_dataset
dataset = load_dataset("gigant/tib")
```
## Dataset Structure
### Data Instances
A typical data point represents a videoconference record, the `transcript` and `keyframes` are textual and visual modalities, processed from the video found at `video_url`, and the `abstract` is used as a target abstractive summary.
### Data Fields
Each record consist of the following attributes:
* `doi`: digital object identifier (DOI) of the record or the associated paper
* `title`: title of the presentation
* `url`: URL of the record in the TIB archive
* `video_url`: URL of the video file
* `license`: license of the record
* `subject`: academic field (*eg* Computer Science, Mathematics, ...)
* `genre`: type of presentation (*eg* Lecture, Conference, ...)
* `release_year`: year the record was released
* `author`: name of the author
* `contributors`: name of the contributors
* `abstract`: the abstract of the presentation, that serve as a target summary
* `transcript`: the automatically extracted transcript
* `transcript_segments`: the automatically extracted transcript with time codes, output of the speech recognition system
* `keyframes`: the automatically extracted key frames time codes
`doi`, `title`, `url`, `video_url`, `license`, `subject`, `genre`, `release_year`, `author`, `contributors` and `abstract` are provided as found in the TIB archive. The length, style, quality and content of the abstract can differ from video to video as it was likely provided by each author. For instance, some abstracts can provide very short title-like summaries, introduction of the conference, the lecture or the speaker, or longer descriptions of the content. We provide examples of transcripts and summaries in the paper's Appendix.
### Data Splits
The data is split into a training, validation and test set.
* Train: 7,282 (80%)
* Validation: 910 (10%)
* Test: 911 (10%)
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The dataset was first assembled by crawling the [TIB-AV portal](https://av.tib.eu/) which is a large archive of videos, developed by the German National Library of Science and Technology: *Technische Informationsbibliothek* (TIB).
Entries with missing abstracts or abstracts that were too short (less than 30 characters) were filtered out.
We also filtered out records for which the abstract or the transcript is in another language than English.
In order to keep the abstracts that are relevant to the associated record, we removed documents if the abstract is the same as the abstract for another video. This allowed to get rid of all the abstracts that were written for a set of records such as conferences, instead of specifically written for a single presentation.
More information about the dataset collection and filtering can be found in [TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records
](https://hal.science/hal-04168911).
### Dataset Curators
The dataset was initially created by Théo Gigant, Frédéric Dufaux, Camille Guinaudeau and Marc Decombas.
### Citation Information
```
@inproceedings{gigant:hal-04168911,
TITLE = {{TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records}},
AUTHOR = {GIGANT, Th{\'e}o and Dufaux, Fr{\'e}d{\'e}ric and Guinaudeau, Camille and Decombas, Marc},
URL = {https://hal.science/hal-04168911},
BOOKTITLE = {{Proc. 20th International Conference on Content-based Multimedia Indexing (CBMI 2023)}},
ADDRESS = {Orl{\'e}ans, France},
ORGANIZATION = {{ACM}},
YEAR = {2023},
MONTH = Sep,
KEYWORDS = {multimedia dataset, multimodal documents, automatic summarization},
HAL_ID = {hal-04168911},
HAL_VERSION = {v1},
}
``` | 6,700 | [
[
-0.046478271484375,
-0.04827880859375,
0.01113128662109375,
0.01297760009765625,
-0.034088134765625,
0.0006546974182128906,
-0.026519775390625,
-0.0178070068359375,
0.04632568359375,
-0.0030040740966796875,
-0.028594970703125,
-0.054901123046875,
-0.054901123046... |
checkai/instruction-poems | 2023-04-19T03:02:09.000Z | [
"license:cc-by-4.0",
"region:us"
] | checkai | null | null | 5 | 119 | 2023-04-19T00:36:02 | ---
license: cc-by-4.0
---
Poem dataset to be used with instruction fine tuning | 80 | [
[
-0.00021159648895263672,
-0.038970947265625,
0.00673675537109375,
0.030517578125,
-0.02508544921875,
-0.039031982421875,
-0.032012939453125,
-0.0163726806640625,
-0.01380157470703125,
0.048797607421875,
-0.04412841796875,
-0.052154541015625,
-0.046966552734375,
... |
Thaweewat/alpaca-finance-43k-th | 2023-05-09T19:05:48.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:th",
"license:cc-by-sa-3.0",
"instruction-finetuning",
"region:us"
] | Thaweewat | null | null | 2 | 119 | 2023-05-09T19:01:32 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- th
tags:
- instruction-finetuning
size_categories:
- 10K<n<100K
---
# Summary
🇹🇭 Thai-instructed dataset translated from [gbharti/wealth-alpaca_lora](https://huggingface.co/datasets/gbharti/wealth-alpaca_lora) using Google Cloud Translation.
This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with another 1.3k pairs custom generated using GPT3.5
Script for tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRa: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
---
| 804 | [
[
-0.026763916015625,
-0.05828857421875,
0.005474090576171875,
0.0311279296875,
-0.048431396484375,
0.002544403076171875,
-0.01288604736328125,
-0.04486083984375,
0.038970947265625,
0.054351806640625,
-0.04541015625,
-0.052734375,
-0.040557861328125,
-0.002346... |
psymon/namuwiki_alpaca_dataset | 2023-06-29T07:29:01.000Z | [
"language:ko",
"license:cc-by-nc-sa-2.0",
"region:us"
] | psymon | null | null | 8 | 119 | 2023-06-29T07:18:44 | ---
license: cc-by-nc-sa-2.0
language:
- ko
---
## namuwiki for Stanford Alpaca
나무위키 덤프 파일을 Stanford Alpaca 학습에 맞게 수정한 데이터셋입니다.
데이터 형식은 Stanford Alpaca와 동일합니다. instruction은 '나무위키 문서 제목' + '에 대해 설명해줘.' 형태이고,<br>
output은 문서 == 개요 == 에 해당하는 내용입니다. 개요가 없는 항목, 개요가 너무 짧은 항목은 제외하였습니다.
| 287 | [
[
-0.0433349609375,
-0.043975830078125,
0.0167694091796875,
0.047882080078125,
-0.06207275390625,
-0.0379638671875,
0.014190673828125,
-0.0228118896484375,
0.056610107421875,
0.03472900390625,
-0.034393310546875,
-0.044769287109375,
-0.0523681640625,
0.0260009... |
loremipsum3658/pet | 2023-08-24T21:28:06.000Z | [
"region:us"
] | loremipsum3658 | null | null | 0 | 119 | 2023-08-24T21:27:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: fname
dtype: string
- name: raw_text
dtype: string
- name: aviso_previo
dtype: bool
- name: saldo_de_salario
dtype: bool
- name: ferias
dtype: bool
- name: decimo_terceiro
dtype: bool
- name: fgts
dtype: bool
- name: multa_do_477
dtype: bool
- name: multa_do_467
dtype: bool
- name: horas_extras
dtype: bool
- name: intervalo_intrajornada
dtype: bool
- name: intervalo_interjornada
dtype: bool
- name: adicional_noturno
dtype: bool
- name: adicional_de_insalubridade
dtype: bool
- name: adicional_de_periculosidade
dtype: bool
- name: diferencas_salariais_ou_equiparacao_salarial
dtype: bool
- name: dano_moral
dtype: bool
- name: contribuicao_assistencial
dtype: bool
- name: indenizacao_por_lucros_cessantes
dtype: bool
- name: indenizacao_por_dano_emergente
dtype: bool
- name: multa_normativa
dtype: bool
- name: honorarios_advocaticios
dtype: bool
- name: justica_gratuita
dtype: bool
- name: reconhecimento_de_vinculo
dtype: bool
- name: reflexos_das_parcelas_salariais
dtype: bool
- name: reflexos_de_salarios_oficiosos_e_informais
dtype: bool
- name: outros
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1654516
num_examples: 1705
- name: test
num_bytes: 351964
num_examples: 366
- name: validation
num_bytes: 332831
num_examples: 366
download_size: 1391885
dataset_size: 2339311
---
# Dataset Card for "pet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,903 | [
[
-0.046356201171875,
-0.0065460205078125,
0.01324462890625,
0.01454925537109375,
-0.0208587646484375,
-0.0052947998046875,
0.0157470703125,
-0.0229339599609375,
0.05859375,
0.03607177734375,
-0.046661376953125,
-0.042510986328125,
-0.02825927734375,
-0.003187... |
aloobun/mini-math23k-v1 | 2023-10-10T12:40:42.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | aloobun | null | null | 5 | 119 | 2023-10-10T08:54:00 | ---
license: mit
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: math
---
The mini-math23k-v1 dataset is composed of ~ 23,000 entries of data, from open datasets across the AI landscape, including:
- [TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- [Birchlabs/openai-prm800k-solutions-only](https://huggingface.co/datasets/Birchlabs/openai-prm800k-solutions-only)
Credits:
```
Birchlabs
```
```
@article{yue2023mammoth,
title={MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning},
author={Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
``` | 750 | [
[
-0.043304443359375,
-0.049896240234375,
0.010345458984375,
-0.014678955078125,
-0.00246429443359375,
-0.01125335693359375,
0.00045037269592285156,
0.003147125244140625,
0.01308441162109375,
0.036285400390625,
-0.075927734375,
-0.0309906005859375,
-0.013198852539... |
argilla/text-descriptives-metadata | 2023-10-30T13:42:51.000Z | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | argilla | null | null | 0 | 119 | 2023-10-20T13:28:10 | ---
size_categories: 1K<n<10K
tags:
- rlfh
- argilla
- human-feedback
dataset_info:
features:
- name: prompt
dtype: string
id: field
- name: context
dtype: string
id: field
- name: response
list:
- name: user_id
dtype: string
id: question
- name: value
dtype: string
id: suggestion
- name: status
dtype: string
id: question
- name: response-suggestion
dtype: string
id: suggestion
- name: response-suggestion-metadata
struct:
- name: type
dtype: string
id: suggestion-metadata
- name: score
dtype: float32
id: suggestion-metadata
- name: agent
dtype: string
id: suggestion-metadata
- name: external_id
dtype: string
id: external_id
- name: metadata
dtype: string
id: metadata
splits:
- name: train
num_bytes: 3008948
num_examples: 1030
download_size: 1725693
dataset_size: 3008948
---
# Dataset Card for text-descriptives-metadata
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/text-descriptives-metadata")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/text-descriptives-metadata")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| prompt | Prompt | FieldTypes.text | True | True |
| context | Context | FieldTypes.text | False | True |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| response | Response | QuestionTypes.text | True | N/A | N/A |
The **suggestions** are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
**✨ NEW** The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
The **guidelines**, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"context": null,
"prompt": "Can brain cells move? By movement I mean long distance migration (preferably within the brain only)."
},
"metadata": {
"entropy": 0.4352176404374839,
"flesch_reading_ease": 82.39000000000001,
"n_characters": 85,
"passed_quality_check": "True"
},
"responses": [],
"suggestions": [
{
"agent": null,
"question_name": "response",
"score": null,
"type": null,
"value": "The question is relatively broad and one should take into account that the brain not only consists of neurons, but also glial cells (supportive cells) and pre-mitotic neuronal stem cells. Furthermore, as critical fellow-scientists have indicated, developmental stage is very important, as the developing embryonic brain is very different from the adult brain.\nHowever, after sifting through various publications, the answer to the question is actually remarkably simple: Yes, brain cells migrate.\nIn the adult brain glial cells migrate in the brain (Kl\u00e4mbt, 2009). Glial cells are involved in a myriad of functions, but a notable example of migrating glial cells are the oligodendrocytes that migrate relative long distances to find their target axons onto which they wrap themselves to form the insulating myelin sheath (Tsai and Miller, 2002).\nNeuronal stem cells migrate over long distances in response to injury (Imitola et al., 2004) and they migrate from specific stem-cell locations (e.g., hippocampus and subventricular zone) to other regions (Clarke, 2003).\nPost-mitotic, but non-differentiated neurons have been shown to migrate in the adult brain in fish (Scott et al., 2012), and in mammals and non-human primates as well (Sawada et al., 2011).\nNot surprisingly, glial cells, stem cells and neurons also migrate during embryonic development. Most notably, post-mitotic neurons destined to fulfill peripheral functions have to migrate over relatively long distances from the neural crest to their target locations (Neuroscience, 2nd ed, Neuronal Migration)."
}
]
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"context": null,
"external_id": null,
"metadata": "{\"n_characters\": 85, \"passed_quality_check\": \"True\", \"flesch_reading_ease\": 82.39000000000001, \"entropy\": 0.4352176404374839}",
"prompt": "Can brain cells move? By movement I mean long distance migration (preferably within the brain only).",
"response": [],
"response-suggestion": "The question is relatively broad and one should take into account that the brain not only consists of neurons, but also glial cells (supportive cells) and pre-mitotic neuronal stem cells. Furthermore, as critical fellow-scientists have indicated, developmental stage is very important, as the developing embryonic brain is very different from the adult brain.\nHowever, after sifting through various publications, the answer to the question is actually remarkably simple: Yes, brain cells migrate.\nIn the adult brain glial cells migrate in the brain (Kl\u00e4mbt, 2009). Glial cells are involved in a myriad of functions, but a notable example of migrating glial cells are the oligodendrocytes that migrate relative long distances to find their target axons onto which they wrap themselves to form the insulating myelin sheath (Tsai and Miller, 2002).\nNeuronal stem cells migrate over long distances in response to injury (Imitola et al., 2004) and they migrate from specific stem-cell locations (e.g., hippocampus and subventricular zone) to other regions (Clarke, 2003).\nPost-mitotic, but non-differentiated neurons have been shown to migrate in the adult brain in fish (Scott et al., 2012), and in mammals and non-human primates as well (Sawada et al., 2011).\nNot surprisingly, glial cells, stem cells and neurons also migrate during embryonic development. Most notably, post-mitotic neurons destined to fulfill peripheral functions have to migrate over relatively long distances from the neural crest to their target locations (Neuroscience, 2nd ed, Neuronal Migration).",
"response-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
}
}
```
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **prompt** is of type `FieldTypes.text`.
* (optional) **context** is of type `FieldTypes.text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **response** is of type `QuestionTypes.text`.
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **response-suggestion** is of type `QuestionTypes.text`.
Additionally, we also have two more fields that are optional and are the following:
* **✨ NEW** **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
This is a supervised fine-tuning dataset that contains instructions. Please write the response to the instruction in the response field. Take the context into account when writing the response.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 12,996 | [
[
-0.042327880859375,
-0.06494140625,
0.03216552734375,
0.0230255126953125,
-0.01033782958984375,
-0.036224365234375,
-0.00211334228515625,
-0.042022705078125,
0.047027587890625,
0.05084228515625,
-0.064208984375,
-0.058013916015625,
-0.034942626953125,
0.0167... |
Cartinoe5930/KoRAE_original | 2023-10-29T09:17:03.000Z | [
"region:us"
] | Cartinoe5930 | null | null | 0 | 119 | 2023-10-29T09:16:51 | ---
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 95068407
num_examples: 63724
download_size: 48931987
dataset_size: 95068407
---
# Dataset Card for "KoRAE_original_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 508 | [
[
-0.044586181640625,
-0.0205078125,
0.0217742919921875,
0.01132965087890625,
-0.0312347412109375,
-0.016510009765625,
0.033355712890625,
-0.001667022705078125,
0.06756591796875,
0.056854248046875,
-0.054840087890625,
-0.0517578125,
-0.042144775390625,
-0.0205... |
Fraser/mnist-text-small | 2021-02-22T10:21:37.000Z | [
"region:us"
] | Fraser | MNIST dataset adapted to a text-based representation.
*Modified images to be ~1/4 the original area.*
Done by taking a max pool.
This allows testing interpolation quality for Transformer-VAEs.
System is heavily inspired by Matthew Rayfield's work https://youtu.be/Z9K3cwSL6uM
Works by quantising each MNIST pixel into one of 64 characters.
Every sample has an up & down version to encourage the model to learn rotation invarient features.
Use `.array_to_text(` and `.text_to_array(` methods to test your generated data.
Data format:
- text: (16 x 14 tokens, 224 tokens total): Textual representation of MNIST digit, for example:
```
00 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
01 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
02 down ! ! ! ! ! ! % % C L a ^ ! !
03 down ! ! ! - ` ` ` ` ` Y ` Q ! !
04 down ! ! ! % ` ` ` R ^ ! ! ! ! !
05 down ! ! ! ! $ G ` ! ! ! ! ! ! !
06 down ! ! ! ! ! # ` Y < ! ! ! ! !
07 down ! ! ! ! ! ! 5 ` ` F ! ! ! !
08 down ! ! ! ! ! ! ! % ` ` 1 ! ! !
09 down ! ! ! ! ! ! F ` ` ` ! ! ! !
10 down ! ! ! ! 1 ` ` ` ` 4 ! ! ! !
11 down ! ! L ` ` ` ` 5 ! ! ! ! ! !
12 down ! ! ` ` V B ! ! ! ! ! ! ! !
13 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
```
- label: Just a number with the texts matching label. | @dataset{dataset,
author = {Fraser Greenlee},
year = {2021},
month = {1},
pages = {},
title = {MNIST small text dataset.},
doi = {}
} | 0 | 118 | 2022-03-02T23:29:22 | MNIST dataset adapted to a text-based representation.
Modified images to be ~1/4 the original area.
Done by taking a max pool.
This allows testing interpolation quality for Transformer-VAEs.
System is heavily inspired by Matthew Rayfield's work https://youtu.be/Z9K3cwSL6uM
Works by quantising each MNIST pixel into one of 64 characters.
Every sample has an up & down version to encourage the model to learn rotation invarient features.
Use `.array_to_text(` and `.text_to_array(` methods to test your generated data.
Data format:
- text: (16 x 14 tokens, 224 tokens total): Textual representation of MNIST digit, for example:
```
00 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
01 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
02 down ! ! ! ! ! ! % % C L a ^ ! !
03 down ! ! ! - ` ` ` ` ` Y ` Q ! !
04 down ! ! ! % ` ` ` R ^ ! ! ! ! !
05 down ! ! ! ! $ G ` ! ! ! ! ! ! !
06 down ! ! ! ! ! # ` Y < ! ! ! ! !
07 down ! ! ! ! ! ! 5 ` ` F ! ! ! !
08 down ! ! ! ! ! ! ! % ` ` 1 ! ! !
09 down ! ! ! ! ! ! F ` ` ` ! ! ! !
10 down ! ! ! ! 1 ` ` ` ` 4 ! ! ! !
11 down ! ! L ` ` ` ` 5 ! ! ! ! ! !
12 down ! ! ` ` V B ! ! ! ! ! ! ! !
13 down ! ! ! ! ! ! ! ! ! ! ! ! ! !
```
- label: Just a number with the texts matching label. | 1,198 | [
[
-0.044921875,
-0.0225982666015625,
0.0213775634765625,
0.029449462890625,
-0.026458740234375,
0.0014162063598632812,
0.0012884140014648438,
0.0048675537109375,
0.050445556640625,
0.053924560546875,
-0.05859375,
-0.06005859375,
-0.043609619140625,
0.016311645... |
clarin-pl/kpwr-ner | 2023-01-30T22:54:02.000Z | [
"task_categories:other",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:18K",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-3.0",
"structure-predict... | clarin-pl | KPWR-NER tagging dataset. | null | 6 | 118 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pl
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 18K
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
task_ids:
- named-entity-recognition
pretty_name: KPWr-NER
tags:
- structure-prediction
---
# KPWR-NER
## Description
KPWR-NER is a part the Polish Corpus of Wrocław University of Technology (*Korpus Języka Polskiego Politechniki Wrocławskiej*). Its objective is named entity recognition for fine-grained categories of entities. It is the ‘n82’ version of the KPWr, which means that number of classes is restricted to 82 (originally 120). During corpus creation, texts were annotated by humans from various sources, covering many domains and genres.
## Tasks (input, output and metrics)
Named entity recognition (NER) - tagging entities in text with their corresponding type.
**Input** ('*tokens'* column): sequence of tokens
**Output** ('*ner'* column): sequence of predicted tokens’ classes in BIO notation (82 possible classes, described in detail in the annotation guidelines)
**Measurements**: F1-score (seqeval)
**Example**:
Input: `[‘Roboty’, ‘mają’, ‘kilkanaście’, ‘lat’, ‘i’, ‘pochodzą’, ‘z’, ‘USA’, ‘,’, ‘Wysokie’, ‘napięcie’, ‘jest’, ‘dużo’, ‘młodsze’, ‘,’, ‘powstało’, ‘w’, ‘Niemczech’, ‘.’]`
Input (translated by DeepL): `Robots are more than a dozen years old and come from the US, High Voltage is much younger, having been developed in Germany.`
Output: `[‘B-nam_pro_title’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘B-nam_loc_gpe_country’, ‘O’, ‘B-nam_pro_title’, ‘I-nam_pro_title’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘O’, ‘B-nam_loc_gpe_country’, ‘O’]`
## Data splits
| Subset | Cardinality (sentences) |
|--------|------------------------:|
| train | 13959 |
| dev | 0 |
| test | 4323 |
## Class distribution (without "O" and "I-*")
| Class | train | validation | test |
|:----------------------------|--------:|-------------:|----------:|
| B-nam_liv_person | 0.21910 | - | 0.21422 |
| B-nam_loc_gpe_city | 0.10101 | - | 0.09865 |
| B-nam_loc_gpe_country | 0.07467 | - | 0.08059 |
| B-nam_org_institution | 0.05893 | - | 0.06005 |
| B-nam_org_organization | 0.04448 | - | 0.05553 |
| B-nam_org_group_team | 0.03492 | - | 0.03363 |
| B-nam_adj_country | 0.03410 | - | 0.03747 |
| B-nam_org_company | 0.02439 | - | 0.01716 |
| B-nam_pro_media_periodic | 0.02250 | - | 0.01896 |
| B-nam_fac_road | 0.01995 | - | 0.02144 |
| B-nam_liv_god | 0.01934 | - | 0.00790 |
| B-nam_org_nation | 0.01739 | - | 0.01828 |
| B-nam_oth_tech | 0.01724 | - | 0.01377 |
| B-nam_pro_media_web | 0.01709 | - | 0.00903 |
| B-nam_fac_goe | 0.01596 | - | 0.01445 |
| B-nam_eve_human | 0.01573 | - | 0.01761 |
| B-nam_pro_title | 0.01558 | - | 0.00790 |
| B-nam_pro_brand | 0.01543 | - | 0.01038 |
| B-nam_org_political_party | 0.01264 | - | 0.01309 |
| B-nam_loc_gpe_admin1 | 0.01219 | - | 0.01445 |
| B-nam_eve_human_sport | 0.01174 | - | 0.01242 |
| B-nam_pro_software | 0.01091 | - | 0.02190 |
| B-nam_adj | 0.00963 | - | 0.01174 |
| B-nam_loc_gpe_admin3 | 0.00888 | - | 0.01061 |
| B-nam_pro_model_car | 0.00873 | - | 0.00587 |
| B-nam_loc_hydronym_river | 0.00843 | - | 0.01151 |
| B-nam_oth | 0.00775 | - | 0.00497 |
| B-nam_pro_title_document | 0.00738 | - | 0.01986 |
| B-nam_loc_astronomical | 0.00730 | - | - |
| B-nam_oth_currency | 0.00723 | - | 0.01151 |
| B-nam_adj_city | 0.00670 | - | 0.00948 |
| B-nam_org_group_band | 0.00587 | - | 0.00429 |
| B-nam_loc_gpe_admin2 | 0.00565 | - | 0.00813 |
| B-nam_loc_gpe_district | 0.00504 | - | 0.00406 |
| B-nam_loc_land_continent | 0.00459 | - | 0.00722 |
| B-nam_loc_country_region | 0.00459 | - | 0.00090 |
| B-nam_loc_land_mountain | 0.00414 | - | 0.00203 |
| B-nam_pro_title_book | 0.00384 | - | 0.00248 |
| B-nam_loc_historical_region | 0.00376 | - | 0.00497 |
| B-nam_loc | 0.00361 | - | 0.00090 |
| B-nam_eve | 0.00361 | - | 0.00181 |
| B-nam_org_group | 0.00331 | - | 0.00406 |
| B-nam_loc_land_island | 0.00331 | - | 0.00248 |
| B-nam_pro_media_tv | 0.00316 | - | 0.00158 |
| B-nam_liv_habitant | 0.00316 | - | 0.00158 |
| B-nam_eve_human_cultural | 0.00316 | - | 0.00497 |
| B-nam_pro_title_tv | 0.00309 | - | 0.00542 |
| B-nam_oth_license | 0.00286 | - | 0.00248 |
| B-nam_num_house | 0.00256 | - | 0.00248 |
| B-nam_pro_title_treaty | 0.00248 | - | 0.00045 |
| B-nam_fac_system | 0.00248 | - | 0.00587 |
| B-nam_loc_gpe_subdivision | 0.00241 | - | 0.00587 |
| B-nam_loc_land_region | 0.00226 | - | 0.00248 |
| B-nam_pro_title_album | 0.00218 | - | 0.00158 |
| B-nam_adj_person | 0.00203 | - | 0.00406 |
| B-nam_fac_square | 0.00196 | - | 0.00135 |
| B-nam_pro_award | 0.00188 | - | 0.00519 |
| B-nam_eve_human_holiday | 0.00188 | - | 0.00203 |
| B-nam_pro_title_song | 0.00166 | - | 0.00158 |
| B-nam_pro_media_radio | 0.00151 | - | 0.00068 |
| B-nam_pro_vehicle | 0.00151 | - | 0.00090 |
| B-nam_oth_position | 0.00143 | - | 0.00226 |
| B-nam_liv_animal | 0.00143 | - | 0.00248 |
| B-nam_pro | 0.00135 | - | 0.00045 |
| B-nam_oth_www | 0.00120 | - | 0.00451 |
| B-nam_num_phone | 0.00120 | - | 0.00045 |
| B-nam_pro_title_article | 0.00113 | - | - |
| B-nam_oth_data_format | 0.00113 | - | 0.00226 |
| B-nam_fac_bridge | 0.00105 | - | 0.00090 |
| B-nam_liv_character | 0.00098 | - | - |
| B-nam_pro_software_game | 0.00090 | - | 0.00068 |
| B-nam_loc_hydronym_lake | 0.00090 | - | 0.00045 |
| B-nam_loc_gpe_conurbation | 0.00090 | - | - |
| B-nam_pro_media | 0.00083 | - | 0.00181 |
| B-nam_loc_land | 0.00075 | - | 0.00045 |
| B-nam_loc_land_peak | 0.00075 | - | - |
| B-nam_fac_park | 0.00068 | - | 0.00226 |
| B-nam_org_organization_sub | 0.00060 | - | 0.00068 |
| B-nam_loc_hydronym | 0.00060 | - | 0.00023 |
| B-nam_loc_hydronym_sea | 0.00045 | - | 0.00068 |
| B-nam_loc_hydronym_ocean | 0.00045 | - | 0.00023 |
| B-nam_fac_goe_stop | 0.00038 | - | 0.00090 |
## Citation
```
@inproceedings{broda-etal-2012-kpwr,
title = "{KPW}r: Towards a Free Corpus of {P}olish",
author = "Broda, Bartosz and
Marci{\'n}czuk, Micha{\l} and
Maziarz, Marek and
Radziszewski, Adam and
Wardy{\'n}ski, Adam",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/965_Paper.pdf",
pages = "3218--3222",
abstract = "This paper presents our efforts aimed at collecting and annotating a free Polish corpus. The corpus will serve for us as training and testing material for experiments with Machine Learning algorithms. As others may also benefit from the resource, we are going to release it under a Creative Commons licence, which is hoped to remove unnecessary usage restrictions, but also to facilitate reproduction of our experimental results. The corpus is being annotated with various types of linguistic entities: chunks and named entities, selected syntactic and semantic relations, word senses and anaphora. We report on the current state of the project as well as our ultimate goals.",
}
```
## License
```
Creative Commons Attribution 3.0 Unported Licence
```
## Links
[HuggingFace](https://huggingface.co/datasets/clarin-pl/kpwr-ner)
[Source](https://clarin-pl.eu/index.php/kpwr-en/)
[Paper](https://aclanthology.org/L12-1574/)
[KPWr annotation guidelines](http://www.nlp.pwr.wroc.pl/narzedzia-i-zasoby/zasoby/kpwr-lemma/16-narzedzia-zasoby/79-wytyczne)
[KPWr annotation guidelines - named entities](https://clarin-pl.eu/dspace/handle/11321/294)
## Examples
### Loading
```python
from pprint import pprint
from datasets import load_dataset
dataset = load_dataset("clarin-pl/kpwr-ner")
pprint(dataset['train'][0])
# {'lemmas': ['roborally', 'czy', 'wysoki', 'napięcie', '?'],
# 'ner': [73, 160, 73, 151, 160],
# 'orth': ['subst:sg:nom:n',
# 'qub',
# 'adj:sg:nom:n:pos',
# 'subst:sg:nom:n',
# 'interp'],
# 'tokens': ['RoboRally', 'czy', 'Wysokie', 'napięcie', '?']}
```
### Evaluation
```python
import random
from pprint import pprint
from datasets import load_dataset, load_metric
dataset = load_dataset("clarin-pl/kpwr-ner")
references = dataset["test"]["ner"]
# generate random predictions
predictions = [
[
random.randrange(dataset["train"].features["ner"].feature.num_classes)
for _ in range(len(labels))
]
for labels in references
]
# transform to original names of labels
references_named = [
[dataset["train"].features["ner"].feature.names[label] for label in labels]
for labels in references
]
predictions_named = [
[dataset["train"].features["ner"].feature.names[label] for label in labels]
for labels in predictions
]
# utilise seqeval to evaluate
seqeval = load_metric("seqeval")
seqeval_score = seqeval.compute(
predictions=predictions_named, references=references_named, scheme="IOB2"
)
pprint(seqeval_score, depth=1)
# {'nam_adj': {...},
# 'nam_adj_city': {...},
# 'nam_adj_country': {...},
# 'nam_adj_person': {...},
# 'nam_eve': {...},
# 'nam_eve_human': {...},
# 'nam_eve_human_cultural': {...},
# 'nam_eve_human_holiday': {...},
# 'nam_eve_human_sport': {...},
# 'nam_fac_bridge': {...},
# 'nam_fac_goe': {...},
# 'nam_fac_goe_stop': {...},
# 'nam_fac_park': {...},
# 'nam_fac_road': {...},
# 'nam_fac_square': {...},
# 'nam_fac_system': {...},
# 'nam_liv_animal': {...},
# 'nam_liv_character': {...},
# 'nam_liv_god': {...},
# 'nam_liv_habitant': {...},
# 'nam_liv_person': {...},
# 'nam_loc': {...},
# 'nam_loc_astronomical': {...},
# 'nam_loc_country_region': {...},
# 'nam_loc_gpe_admin1': {...},
# 'nam_loc_gpe_admin2': {...},
# 'nam_loc_gpe_admin3': {...},
# 'nam_loc_gpe_city': {...},
# 'nam_loc_gpe_conurbation': {...},
# 'nam_loc_gpe_country': {...},
# 'nam_loc_gpe_district': {...},
# 'nam_loc_gpe_subdivision': {...},
# 'nam_loc_historical_region': {...},
# 'nam_loc_hydronym': {...},
# 'nam_loc_hydronym_lake': {...},
# 'nam_loc_hydronym_ocean': {...},
# 'nam_loc_hydronym_river': {...},
# 'nam_loc_hydronym_sea': {...},
# 'nam_loc_land': {...},
# 'nam_loc_land_continent': {...},
# 'nam_loc_land_island': {...},
# 'nam_loc_land_mountain': {...},
# 'nam_loc_land_peak': {...},
# 'nam_loc_land_region': {...},
# 'nam_num_house': {...},
# 'nam_num_phone': {...},
# 'nam_org_company': {...},
# 'nam_org_group': {...},
# 'nam_org_group_band': {...},
# 'nam_org_group_team': {...},
# 'nam_org_institution': {...},
# 'nam_org_nation': {...},
# 'nam_org_organization': {...},
# 'nam_org_organization_sub': {...},
# 'nam_org_political_party': {...},
# 'nam_oth': {...},
# 'nam_oth_currency': {...},
# 'nam_oth_data_format': {...},
# 'nam_oth_license': {...},
# 'nam_oth_position': {...},
# 'nam_oth_tech': {...},
# 'nam_oth_www': {...},
# 'nam_pro': {...},
# 'nam_pro_award': {...},
# 'nam_pro_brand': {...},
# 'nam_pro_media': {...},
# 'nam_pro_media_periodic': {...},
# 'nam_pro_media_radio': {...},
# 'nam_pro_media_tv': {...},
# 'nam_pro_media_web': {...},
# 'nam_pro_model_car': {...},
# 'nam_pro_software': {...},
# 'nam_pro_software_game': {...},
# 'nam_pro_title': {...},
# 'nam_pro_title_album': {...},
# 'nam_pro_title_article': {...},
# 'nam_pro_title_book': {...},
# 'nam_pro_title_document': {...},
# 'nam_pro_title_song': {...},
# 'nam_pro_title_treaty': {...},
# 'nam_pro_title_tv': {...},
# 'nam_pro_vehicle': {...},
# 'overall_accuracy': 0.006156203762418094,
# 'overall_f1': 0.0009844258777797407,
# 'overall_precision': 0.0005213624939842789,
# 'overall_recall': 0.008803611738148984}
``` | 13,598 | [
[
-0.054931640625,
-0.031829833984375,
0.01343536376953125,
0.01154327392578125,
-0.0108795166015625,
0.01091766357421875,
0.00318145751953125,
-0.01532745361328125,
0.051788330078125,
0.0207366943359375,
-0.043426513671875,
-0.049163818359375,
-0.046783447265625,... |
imvladikon/hebrew_speech_kan | 2023-05-05T09:12:15.000Z | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:he",
"region:us"
] | imvladikon | null | null | 2 | 118 | 2022-03-02T23:29:22 | ---
task_categories:
- automatic-speech-recognition
language:
- he
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1569850175.0
num_examples: 8000
- name: validation
num_bytes: 394275049.0
num_examples: 2000
download_size: 1989406585
dataset_size: 1964125224.0
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Hebrew Dataset for ASR
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/8ce7402f6482c6053251d7f3000eec88668c994beb48b7ca7352e77ef810a0b6/train/e429593fede945c185897e378a5839f4198.wav',
'array': array([-0.00265503, -0.0018158 , -0.00149536, ..., -0.00135803,
-0.00231934, -0.00190735]),
'sampling_rate': 16000},
'sentence': 'היא מבינה אותי יותר מכל אחד אחר'}
```
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 8000 | 2000 |
| hours | 6.92 | 1.73 |
## Dataset Creation
### Curation Rationale
scraped data from youtube (channel כאן) with removing outliers (by length and ratio between length of the audio and sentences)
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_kan,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Kan},
year = {2022},
howpublished = \url{https://huggingface.co/datasets/imvladikon/hebrew_speech_kan},
}
```
### Contributions
[More Information Needed] | 2,512 | [
[
-0.0301055908203125,
-0.03912353515625,
-0.01175689697265625,
0.0297393798828125,
-0.024688720703125,
0.001605987548828125,
-0.039276123046875,
-0.0221099853515625,
0.032501220703125,
0.026092529296875,
-0.059478759765625,
-0.0789794921875,
-0.0535888671875,
... |
metaeval/ethics | 2023-06-02T14:45:34.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en",
"region:us"
] | metaeval | Probing for ethics understanding | null | 4 | 118 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: ethics
size_categories:
- unknown
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids: []
---
https://github.com/hendrycks/ethics | 300 | [
[
0.0003254413604736328,
-0.0173187255859375,
0.0513916015625,
0.01068878173828125,
-0.006908416748046875,
0.0003216266632080078,
-0.002758026123046875,
-0.03717041015625,
0.020233154296875,
0.05377197265625,
-0.03448486328125,
-0.03228759765625,
0.014823913574218... |
strombergnlp/bornholmsk_parallel | 2022-07-01T15:45:35.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"language:da-bornholm",
"license:cc-by-4.0",
"region:us"
] | strombergnlp | This dataset is parallel text for Bornholmsk and Danish.
For more details, see the paper [Bornholmsk Natural Language Processing: Resources and Tools](https://aclanthology.org/W19-6138/). | @inproceedings{derczynski-kjeldsen-2019-bornholmsk,
title = "Bornholmsk Natural Language Processing: Resources and Tools",
author = "Derczynski, Leon and
Kjeldsen, Alex Speed",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6138",
pages = "338--344",
abstract = {This paper introduces language processing resources and tools for Bornholmsk, a language spoken on the island of Bornholm, with roots in Danish and closely related to Scanian. This presents an overview of the language and available data, and the first NLP models for this living, minority Nordic language. Sammenfattnijng p{\aa} borrijnholmst: D{\ae}jnna artikkelijn introduserer naturspr{\aa}gsresurser {\aa} varktoi for borrijnholmst, ed spr{\aa}g a d{\ae}r snakkes p{\aa} {\"o}n Borrijnholm me r{\o}dder i danst {\aa} i n{\ae}r familia me sk{\aa}nst. Artikkelijn gjer ed {\^a}uersyn {\^a}uer spr{\aa}ged {\aa} di datan som fijnnes, {\aa} di fosste NLP mod{\ae}llarna for d{\ae}tta l{\ae}wenes nordiska minnret{\^a}lsspr{\aa}ged.},
} | 2 | 118 | 2022-05-11T08:29:38 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
- da-bornholm
license:
- cc-by-4.0
multilinguality:
- translation
pretty_name: Bornholmsk/Danish Parallel Texts
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: bornholmsk-parallel
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/bornholmsk](https://github.com/StrombergNLP/bornholmsk)
- **Repository:** [https://github.com/StrombergNLP/bornholmsk](https://github.com/StrombergNLP/bornholmsk)
- **Paper:** [https://aclanthology.org/W19-6138/](https://aclanthology.org/W19-6138/)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
- **Size of downloaded dataset files:** 490 KB
- **Size of the generated dataset:** 582 KB
- **Total amount of disk used:** 1072 KB
### Dataset Summary
This dataset is parallel text for Bornholmsk and Danish.
For more details, see the paper [Bornholmsk Natural Language Processing: Resources and Tools](https://aclanthology.org/W19-6138/).
### Supported Tasks and Leaderboards
*
### Languages
Bornholmsk, a language variant of Danish spoken on the island of Bornholm, and Danish. bcp47: `da-bornholm` and `da-DK`
## Dataset Structure
### Data Instances
### Data Fields
`id`: the sentence ID, `int`
`da-bornholm`: the Bornholmsk text, `string`
`da`: the Danish translation, `string`
### Data Splits
* Train: 5785 sentence pairs
* Validation: 500 sentence pairs
* Test: 500 sentence pairs
## Dataset Creation
### Curation Rationale
To gather as much parallel Bornholmsk together as possible
### Source Data
#### Initial Data Collection and Normalization
From a translation of Kuhre's Sansager, a selection of colloquial resources, and a prototype Bornholmsk/Danish dictionary
#### Who are the source language producers?
Native speakers of Bornholmsk who have produced works in their native language, or translated them to Danish. Much of the data is the result of a community of Bornholmsk speakers volunteering their time across the island in an effort to capture this endangered language.
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
Native speakers of Bornholmsk, mostly aged 60+.
### Personal and Sensitive Information
Unknown, but low risk of presence, given the source material
## Considerations for Using the Data
### Social Impact of Dataset
The hope behind this data is to enable people to learn and use Bornholmsk
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This collection of Bornholmsk is curated by Leon Derczynski and Alex Speed Kjeldsen
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
```
@inproceedings{derczynski-kjeldsen-2019-bornholmsk,
title = "Bornholmsk Natural Language Processing: Resources and Tools",
author = "Derczynski, Leon and
Kjeldsen, Alex Speed",
booktitle = "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
month = sep # "{--}" # oct,
year = "2019",
address = "Turku, Finland",
publisher = {Link{\"o}ping University Electronic Press},
url = "https://aclanthology.org/W19-6138",
pages = "338--344",
}
``` | 4,370 | [
[
-0.037567138671875,
-0.048858642578125,
0.019683837890625,
0.0220184326171875,
-0.022491455078125,
0.0118255615234375,
-0.04071044921875,
-0.043548583984375,
0.042327880859375,
0.042694091796875,
-0.0513916015625,
-0.07305908203125,
-0.04083251953125,
0.0323... |
FinanceInc/auditor_sentiment | 2022-07-21T19:03:51.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | FinanceInc | null | null | 11 | 118 | 2022-07-21T18:25:47 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
paperswithcode_id: null
pretty_name: Auditor_Sentiment
---
# Dataset Card for Auditor Sentiment
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
Auditor review sentiment collected by News Department
- **Point of Contact:**
Talked to COE for Auditing, currently sue@demo.org
### Dataset Summary
Auditor sentiment dataset of sentences from financial news. The dataset consists of several thousand sentences from English language financial news categorized by sentiment.
### Supported Tasks and Leaderboards
Sentiment Classification
### Languages
English
## Dataset Structure
### Data Instances
```
"sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
"label": "negative"
```
### Data Fields
- sentence: a tokenized line from the dataset
- label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0)
### Data Splits
A train/test split was created randomly with a 75/25 split
## Dataset Creation
### Curation Rationale
To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment had only 70% F1, this dataset was an attempt to improve upon that performance.
### Source Data
#### Initial Data Collection and Normalization
The corpus used in this paper is made out of English news reports.
#### Who are the source language producers?
The source data was written by various auditors.
### Annotations
#### Annotation process
This release of the auditor reviews covers a collection of 4840
sentences. The selected collection of phrases was annotated by 16 people with
adequate background knowledge on financial markets. The subset here is where inter-annotation agreement was greater than 75%.
#### Who are the annotators?
They were pulled from the SME list, names are held by sue@demo.org
### Personal and Sensitive Information
There is no personal or sensitive information in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
All annotators were from the same institution and so interannotator agreement
should be understood with this taken into account.
### Licensing Information
License: Demo.Org Proprietary - DO NOT SHARE
This dataset is based on the [financial phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset. | 3,707 | [
[
-0.03424072265625,
-0.0186004638671875,
-0.005237579345703125,
0.03094482421875,
-0.035736083984375,
-0.0018339157104492188,
-0.01306915283203125,
-0.032623291015625,
0.0338134765625,
0.035430908203125,
-0.038330078125,
-0.06170654296875,
-0.04669189453125,
... |
taide/TAIDE-14-tasks | 2023-10-26T09:14:32.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:zh",
"language:en",
"license:cc-by-nc-4.0",
"gpt4",
"region:us"
] | taide | null | null | 11 | 118 | 2023-09-04T06:21:18 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
- question-answering
- conversational
language:
- zh
- en
tags:
- gpt4
size_categories:
- n<1K
---
# Dataset Card for TAIDE-14-tasks
### Dataset Summary
The "TAIDE-14-tasks" dataset, derived from the TAIDE project, encompasses 14 prevalent text generation tasks. This dataset features a collection of 140 prompts tailored for assessing Traditional Chinese Large Language Models (LLM). GPT-4 meticulously crafted these prompts using the provided task, domain, and keywords from the instructions, with further validation by human experts. Each data entry not only contains the main content but also offers both positive and negative reference responses. These positive and negative reference responses are generated by GPT-4 and then manually proofread to ensure accuracy and relevance. For those keen on evaluating LLMs, we advocate for the G-Eval methodology.
Topics Covered (50):
```
{'人類學和社會學', '心理學和心理健康', '心靈和身心健康', '生物學和生物技術', '地理和地球科學',
'老年人和長者議題', '汽車和交通', '宗教和信仰', '法律和法規', '社區和社會發展',
'社會和文化議題', '社群媒體和網路文化', '青少年和成年人生活', '品牌和行銷', '建築和設計'
'政治和國際關係', '科技和人工智慧', '科學和探索', '音樂和音樂創作', '飛行和航空業',
'家庭和家居裝潢', '家庭和親密關係', '旅遊和探險', '時尚和美容', '時尚和配件',
'神秘學和靈性', '財務和投資', '假日和節日', '動物和動物保護', '教育和學習',
'飲食和烹飪', '傳媒和新聞業', '愛情和浪漫', '節能和可持續發展', '經濟和商業',
'資料和統計學', '遊戲和遊戲設計', '電子商務和網路交易', '電影和電影產業', '慈善和志願服務',
'演講和公共演講', '網路安全和隱私', '歷史和文化遺產', '環境和氣候變化', '職業和就業市場',
'職業道德和商業道德', '醫學和健康', '寵物和動物福利', '藝術和文學', '體育和健身'}
```
Tasks Included (14):
```
{'開放式生成', '分類', '問答', '摘要任務', '寫作', '翻譯', '文本分析', '常識推理', '寫信', '抽取', '推薦', '情緒分析', '提供建議', '對話生成'}
```
### Language
* Predominantly in Traditional Chinese.
* Some portions in English.
## Dataset Structure
### Data Columns
* Topic
* Task
* Keywords
* Prompt
* Input
* Positive Response
* Negative Response
### Data Instances
```
{
'Topic': '科技和人工智慧',
'Task': '寫信',
'Keywords': '如何?',
'Prompt': '假設你是一名人工智慧專家,一家公司希望了解如何將AI技術引入他們的產品。請寫一封回信,解釋如何將AI技術整合到他們的業務中。',
'Input': '<noinput>',
'Positive Response': '親愛的[公司名稱]:
非常感謝您的來信,我很高興能為您提供有關將AI技術整合到您的產品中的建議。在當今的科技世界中,人工智慧(AI)已經成為許多成功創新產品的核心驅動力之一。
首先,我建議您分析哪些部分的業務流程可以透過使用AI技術得到改進。這可以包括自動化的客戶服務,資料分析和預測,產品推薦等。確定這些範疇後,可以開始研究適用於每個特定問題的AI解決方案。
接下來,您需要建立一個專業的AI團隊,包括資料科學家、機器學習工程師、軟體開發人員等。此外,您可能需要提供相應的培訓,以確保團隊成員具備實現AI整合的技能。
在技術層面上,您可以選擇開源的機器學習框架,如TensorFlow或PyTorch,以編寫和設計自己的機器學習模型。此外,您還可以使用現有的API和AI平台,例如Google Cloud AI或Microsoft Azure AI,這些平台提供了大量預先設計的AI解決方案。
最後,將AI技術整合到產品中的過程應該是可持續且不斷優化的。確保持續收集資料,以便為機器學習模型提供更多資料,並不斷改進其性能。
希望我的建議對您有所幫助。如果您有任何進一步的疑問或需要專業建議,請隨時與我聯繫。
祝商祺!
[您的名字]',
'Negative Response': '親愛的[公司名稱]:
嗨,謝謝你的信。把AI加到你的業務裡就是找個電腦高手,讓他們給你做個程式,然後看看有沒有什麼好玩的功能可以加進去,就這麼簡單。
祝好運!
[您的名字]',
}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). | 3,124 | [
[
-0.035919189453125,
-0.048583984375,
0.0276947021484375,
0.0184173583984375,
-0.0255584716796875,
-0.01151275634765625,
-0.0159454345703125,
-0.03045654296875,
0.025970458984375,
0.0220489501953125,
-0.049896240234375,
-0.0499267578125,
-0.03814697265625,
0.... |
Shreyasrp/Text-to-SQL | 2023-09-28T17:04:10.000Z | [
"region:us"
] | Shreyasrp | null | null | 0 | 118 | 2023-09-28T17:02:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
erhwenkuo/wikipedia-zhtw | 2023-10-10T03:22:43.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:1M<n<10M",
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | erhwenkuo | null | null | 2 | 118 | 2023-10-10T02:31:00 | ---
dataset_info:
config_name: '20231001'
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1682641991
num_examples: 1373081
download_size: 1064907519
dataset_size: 1682641991
configs:
- config_name: '20231001'
data_files:
- split: train
path: 20231001/train-*
license: cc-by-sa-3.0
task_categories:
- text-generation
- fill-mask
language:
- zh
size_categories:
- 1M<n<10M
---
# Dataset Card for "wikipedia-zhtw"
維基百科數據集包含許多不同語言的文章。這個數據集是根據 Wikipedia dumps (https://dumps.wikimedia.org/) 裡頭 `zhwiki` 的中文下載檔案來建構的。每個範例都包含一篇完整的維基百科文章的內容,並經過清理以去除不需要的部分(例如參考文獻等)。
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **zhwiki 下載點:** [https://dumps.wikimedia.org/zhwiki](https://dumps.wikimedia.org/zhwiki)
## 數據 Dump 版本
由於維基百科數據集定期會進行網站數據拋轉,在 `2023/10/10` 的時間點去查看時會有下列的數據可供下載:
|數據 Dump 目錄|拋轉時間點|
|-------------|--------|
|`20230620/`|01-Aug-2023 09:31|
|`20230701/`|20-Aug-2023 09:41|
|`20230720/`|01-Sep-2023 09:31|
|`20230801/`|20-Sep-2023 09:38|
|`20230820/`|01-Oct-2023 09:34|
|`20230901/`|04-Sep-2023 21:18|
|`20230920/`|22-Sep-2023 01:59|
|`20231001/`|10-Oct-2023 02:55|
|`latest/`|10-Oct-2023 02:55|
本數據集會定期去取得最近有明確的日期來進行下載與清理,便於驗證與使用。
## 數據下載清理
1. 下載 zhwiki 的 data dump 檔案
2. 使用 [WikiExtractor](https://github.com/attardi/wikiextractor) 套件來進行文件內容萃取
3. 進行數據清理并轉換成 jsonl 格式檔案
4. 使用 Huggingface [Datasets](https://pypi.org/project/datasets/) 套件來載入 jsonl 并上傳至 Huggingface Hub
## 資料集結構
範例如下:
{'id': '333',
'url': 'https://zh.wikipedia.org/wiki?curid=333',
'title': '鄧麗君',
'text': '鄧麗君,臺灣歌手、演員及慈善家,本名鄧麗筠。她是20世紀後期華語流行音樂具代表性的人物...'
}
## 資料欄位
所有配置中的資料欄位都是相同的:
- `id (str)`: 文章的 ID。
- `url (str)`: 文章的 URL。
- `title (str)`: 文章的標題。
- `text (str)`: 文章的文字內容。
## 使用
```python
from datasets import load_dataset
# 請在第二個參數去指定要使用的數據 dump 的日期
load_dataset("erhwenkuo/wikipedia-zhtw", "20231001")
```
## 許可資訊
維基百科的大部分文章內容及其許多圖像均根據 `Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA)` 和 `GNU Free Documentation License (GFDL)` 共同授權。
## Citation
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
``` | 2,290 | [
[
-0.051361083984375,
-0.040802001953125,
-0.0017766952514648438,
0.01204681396484375,
-0.036834716796875,
-0.029083251953125,
-0.0219573974609375,
-0.0280914306640625,
0.03204345703125,
0.016876220703125,
-0.053131103515625,
-0.05743408203125,
-0.02099609375,
... |
interpress_news_category_tr_lite | 2023-01-25T14:33:07.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|interpress_news_category_tr",
"language:tr",
"license:unknown",
"news-category-classification",
"region:us"
] | null | It is a Turkish news data set consisting of 273601 news in 10 categories, compiled from print media and news websites between 2010 and 2017 by the Interpress (https://www.interpress.com/) media monitoring company. It has been rearranged as easily separable and with fewer classes. | null | 10 | 117 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|interpress_news_category_tr
task_categories:
- text-classification
task_ids: []
pretty_name: Interpress Turkish News Category Dataset (270K - Lite Version)
tags:
- news-category-classification
dataset_info:
features:
- name: content
dtype: string
- name: category
dtype:
class_label:
names:
'0': kültürsanat
'1': ekonomi
'2': siyaset
'3': eğitim
'4': dünya
'5': spor
'6': teknoloji
'7': magazin
'8': sağlık
'9': gündem
config_name: 270k_10class
splits:
- name: train
num_bytes: 721110711
num_examples: 218880
- name: test
num_bytes: 179348267
num_examples: 54721
download_size: 342920336
dataset_size: 900458978
---
# Dataset Card for Interpress Turkish News Category Dataset (270K - Lite Version)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Interpress](https://www.interpress.com/)
- **Point of Contact:** [Yavuz Komecoglu](mailto:yavuz.komecoglu@kodiks.com)
### Dataset Summary
Turkish News Category Dataset (270K - Lite Version) is a Turkish news data set consisting of 273601 news in 10 categories ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem"), compiled from printed media and news websites between 2010 and 2017 by the Interpress (https://www.interpress.com/) media monitoring company. **It has been rearranged as easily separable and with fewer classes.**
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is based on Turkish.
## Dataset Structure
### Data Instances
A text classification dataset with 10 different news category.
Here is an example from the dataset:
```
{
'category': 0,
'content': 'Tarihten Sınıfta Kaldık Bugün tarihe damgasını vuran Osmanlı İmparatorluğu nun kuruluş yıldönümü. Adına dizilerin çekildiği tarihimizi ne kadar biliyoruz? Gerekçeler faklı; ama sonuç aynı çıktı. Tarihten sınıfta kaldık. Sayfa 5r 1 Bugün tarihe damgasını vuran Osmanlı İmparatorluğumun kuruluş yıldönümü. Adına dizilerin çekildiği tarihimizi ne kadar biliyoruz? Gerekçeler faklı; ama sonuç aynı çıktı. Tarihten sınıfta kaldık 7 Ocak 1299... Kıtalara dağılan ücüyle, ülkeler arasında gördüğü aygıyla tarihe damgasını vuran anlı devletin kuruluş tarihi. Peki, anlı tarihimizi ne kadar biliyoruz? on zamanlarda tarihimizi anlatan izilere ilgi nasıl? Bu dizilerde anlatanlar ne kadar sağlıklı? İşte sokaın değerlendirmesi; levlüdiye Karaman (42-Ev lamım): Bir bilgim yok. Tarihle izla ilgilenmiyorum. Eşim daha ilgilidir bu konuda. Evde anlatır, ndan duyduklarımla yetiniyorum esem yalan olmaz. Osmanlı döeminde yaşamak isterdim. Tarih izileri izlerim Muhteşem Yüzyıl izisini çok izledim; hatta hiç kaırmazdım. Ama tarihimiz bu değil. Sunuün bilincindeyim. Muhteşem üzyıl dizisi genelde haremiyle ön landaydı. Onun için tarihi diziden ğrenmeyi de doğru bulmuyorum. )kullarda verilen tarih dersleri yeisiz. Daha çok tanıtabilirler. Görel anlatım yapılsın çocuklarımız aten okumak istemiyor. En azman eğlenceli hale getirip bu şekilde ilgilendirebilirler. erdi Üstün (22-Saatçi): Bu gün Osmanlı Devleti nin kuruluş yıldönümü olduğunu bilmiyordum. O dönemde yaşamak isterdim. Tarih yazılmış neden yaşamak istemeyim ki. Tarihime yeterince hakim olduğumu düşünüyorum. Araştırmalar yapıyorum. Merak ediyorum. Okullarda verilen tarih dersleri yeterli. Tarih dizisi izlemem, televizyondan tarihimi öğrenmek bana mantıklı gelmiyor. Yeterli olabilir; ama hikayeleştiriliyor. Sonuçta olduğu gibi anlatılsa daha iyi olur. Songül Karabacak (40-Ev Hanımı): Kuruluş yıldönümü olduğunu bilmiyordum. Tarih bilgim çok azdır. Zaten biz yaşadığımız dönemde tarih yazıyoruz. Osmanlı Dönemi nde yaşamak istemezdim. Sebebini bilmiyorum; ama hayatımdan memnunum, dönemden de memnunum. Dizileri takip etmiyorum. Ama mutlaka dizilerde tarihimiz doğru yansıtılıyor ki insanlar sürekli takip ediyor. Benim televizyonla pek aram yoktur. Ertuğrul Şahin (47-Çalışmıyor): Kuruluş yıldönümü olduğunu bilmiyordum. Sizden öğrendim. O dönemde yaşamak isterdim. Tarih sonuçta merak ederim. Tarihle ilgili çok bilgim yok. Okumadım, zaten şartlar el vermedi. Okullarda verilen eğitim yeterli değil. Örnek vermek gerekirse; 20 yaşında oğlum var Atatürk ün doğum yılını soruyorum yüzüme bakıyor. Verilen eğitim belli. Konu belirliyorlar onun dışına çıkmıyorlar. Daha fazla bilgi verilebilir. Tabi gençlerimizde de suç var bize baksınlar tarihimizi bilmiyoruz. Onlar araştırma yapsınlar her gün internette geziyorlar faydasız bir şeye bakacaklarına ecdatlarını okusunlar. Tarih dizlerini izlerim. Ama doğru yansıtılıyor mu orasını bilmiyorum sadece izleyiciyim. Ama önceden Süleyman Şah ı duyardım. Büyüklerimiz anlatırdı bunu diziden teyit ettim mesela. Ahmet Efe (22-Muhasebeci): Kuruluş yıldönümü olduğuyla ilgili bir bilgim yok. O dönemde yaşamak isterdim. Aldığımız bilgiler sonucunda illa ki bir özenme oluyor. Tam anlamıyla tarih bilgisine sahip olduğumu düşünmüyorum. Tarihe merakım var aslında; ama çok kısıtlı araştırma yapıyorum. Okullarda verilen tarih dersi yeterli değil. Çünkü şuradan birkaç çocuğu çevirip sorsanız size yeterli bilgi vermez. Veremez onun da bilgisi yok sonuçta. Zaten kısıtlı bilgiler veriliyor. Tarih dizilerini kılıç kalkan kuşanıp izliyorum. Doğru yansıtılıyor bundan dolayı da biraz insanlar tarihini öğrenmeye başladı desek yalan olmaz. Bu ne kadar doğru derseniz de bilgiyi doğru verdikten sonra tabi diziden de tarih öğrenilebilir. Mehmet Ak (28-Satış Danışmanı): Kuruluşunun bugün olduğunu bilmiyordum. O dönemde yaşamak isterdim. Yeterli bilgim yok bence kim tarihi tam anlamıyla öğrenebilir ki zaten. Ama tabi tarih kitapları okuyorum, araştırıyorum. Okullarda verilen tarih derslerini yeterli bulmuyorum; ama daha fazla neler yapılabilir, tarih küçüklere nasıl anlatılır bilmiyorum tek bildiğim yeterli olmadığı. Tarih dizileri gerçeği yüzde 75 yansıtıyor. Bu konuda araştırma yaptım yüzeysel anlatılıyor; fakat yine de bilgi edinilebilecek diziler. En azından rutinleşmiş dizi konularından uzak. Aile ile rahat rahat izleyebilirsin. Hasan Çalık (65-Emekli): Kuruluş yıldönümü olduğunu biliyorum. Araştırma yaparım. O dönemde yaşamak istemezdim Cumhuriyet döneminde yaşamayı daha çok isterdim. Okullarda verilen dersler yeterli. Film ya da dizi okumak yerine kitap okumayı tercih ederim. Bir insan ancak kitap okuyarak aydınlanabilir. Bu şekilde kendini geliştirebilir. Bir ömre ne kadar kitap sığdırırsan o kadar aydın bir insan olursun. Konusu fark etmez ister tarih olsun, ister roman okumak her zaman kazanç sağlar. Bir diziden tarihi ne kadar yeterli öğrenebilirsin ki ya da ne kadar doğru anlatılabilir. Bence diziyi bırakıp kitaplara yönelsinler. Nuray Çelik'
}
```
### Data Fields
- **category** : Indicates to which category the news text belongs.
(Such as "kültürsanat" (0), "ekonomi" (1), "siyaset" (2), "eğitim" (3), "dünya" (4), "spor" (5), "teknoloji" (6), "magazin" (7), "sağlık" (8), "gündem" (9))
- **content** : Contains the text of the news.
### Data Splits
The data is split into a training and testing. The split is organized as the following
| | train | test |
|------------|--------:|-------:|
| data split | 218,880 | 54,721 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
Downloaded over 270,000 news from the printed media and news websites between 2010 and 2017 by the Interpress (https://www.interpress.com/) media monitoring company. This data collection compiled from print media and internet news is presented in its raw form. For this reason, it is appropriate to use it with careful pre-processing steps regarding various OCR errors and typos.
#### Who are the source language producers?
Turkish printed news sources and online news sites.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
https://www.interpress.com/
### Contributions
Thanks to [@basakbuluz](https://github.com/basakbuluz) & [@yavuzkomecoglu](https://github.com/yavuzkomecoglu) & [@serdarakyol](https://github.com/serdarakyol/) for adding this dataset. | 9,989 | [
[
-0.058502197265625,
-0.04180908203125,
0.013031005859375,
0.012939453125,
-0.04547119140625,
0.0005645751953125,
-0.0021877288818359375,
-0.030364990234375,
0.0379638671875,
0.01094818115234375,
-0.0202789306640625,
-0.0433349609375,
-0.052703857421875,
0.02... |
SetFit/mnli_mm | 2022-02-28T13:56:44.000Z | [
"region:us"
] | SetFit | null | null | 0 | 117 | 2022-03-02T23:29:22 | # Glue MNLI
This dataset is a port of the official [`mnli` dataset](https://huggingface.co/datasets/glue/viewer/mnli/train) on the Hub.
It contains the mismatched version.
Note that the premise and hypothesis columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| 352 | [
[
-0.0240478515625,
-0.04931640625,
0.0020809173583984375,
0.0186309814453125,
-0.005466461181640625,
-0.003650665283203125,
0.026947021484375,
-0.008331298828125,
0.06488037109375,
0.039276123046875,
-0.066162109375,
-0.01026153564453125,
-0.027862548828125,
... |
classla/copa_hr | 2022-10-25T07:32:15.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:hr",
"license:cc-by-sa-4.0",
"causal-reasoning",
"textual-entailment",
"commonsense-reasoning",
"arxiv:2005.00333",
"arxiv:2104.09243",
"region:us"
] | classla | The COPA-HR dataset (Choice of plausible alternatives in Croatian) is a translation
of the English COPA dataset (https://people.ict.usc.edu/~gordon/copa.html) by following the
XCOPA dataset translation methodology (https://arxiv.org/abs/2005.00333). The dataset consists of 1000 premises
(My body cast a shadow over the grass), each given a question (What is the cause?), and two choices
(The sun was rising; The grass was cut), with a label encoding which of the choices is more plausible
given the annotator or translator (The sun was rising).
The dataset is split into 400 training samples, 100 validation samples, and 500 test samples. It includes the
following features: 'premise', 'choice1', 'choice2', 'label', 'question', 'changed' (boolean). | @article{DBLP:journals/corr/abs-2104-09243,
author = {Nikola Ljubesic and
Davor Lauc},
title = {BERTi{\'{c}} - The Transformer Language Model for Bosnian, Croatian,
Montenegrin and Serbian},
journal = {CoRR},
volume = {abs/2104.09243},
year = {2021},
url = {https://arxiv.org/abs/2104.09243},
archivePrefix = {arXiv},
} | 0 | 117 | 2022-03-02T23:29:22 | ---
language:
- hr
license:
- cc-by-sa-4.0
task_categories:
- text-classification
task_ids:
- natural-language-inference
tags:
- causal-reasoning
- textual-entailment
- commonsense-reasoning
---
The COPA-HR dataset (Choice of plausible alternatives in Croatian) is a translation
of the English COPA dataset (https://people.ict.usc.edu/~gordon/copa.html) by following the
XCOPA dataset translation methodology (https://arxiv.org/abs/2005.00333). The dataset consists of 1000 premises
(My body cast a shadow over the grass), each given a question (What is the cause?), and two choices
(The sun was rising; The grass was cut), with a label encoding which of the choices is more plausible
given the annotator or translator (The sun was rising).
The dataset is split into 400 training samples, 100 validation samples, and 500 test samples. It includes the
following features: 'premise', 'choice1', 'choice2', 'label', 'question', 'changed' (boolean).
If you use the dataset in your work, please cite
```
@article{DBLP:journals/corr/abs-2104-09243,
author = {Nikola Ljube\\\\v{s}i\\\\'{c} and
Davor Lauc},
title = {BERTi{\\\\'{c}} - The Transformer Language Model for Bosnian, Croatian,
Montenegrin and Serbian},
journal = {CoRR},
volume = {abs/2104.09243},
year = {2021},
url = {https://arxiv.org/abs/2104.09243},
archivePrefix = {arXiv},
}
``` | 1,406 | [
[
-0.0189056396484375,
-0.030120849609375,
0.024169921875,
0.0028438568115234375,
-0.024322509765625,
0.004802703857421875,
-0.0200347900390625,
-0.042572021484375,
0.00684356689453125,
0.038543701171875,
-0.053070068359375,
-0.037109375,
-0.0182952880859375,
... |
ai4bharat/IndicHeadlineGeneration | 2022-10-13T06:08:20.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:27K<n<341K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
... | ai4bharat | This is the new headline generation dataset released as part of IndicNLG Suite. Each
input document is paired an output title. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 1.43M. | @inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
} | 0 | 117 | 2022-03-10T09:58:27 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicHeadlineGeneration
size_categories:
- 27K<n<341K
source_datasets:
- original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-headline-generation
---
# Dataset Card for "IndicHeadlineGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicHeadlineGeneration is the news headline generation dataset released as part of IndicNLG Suite. Each
input document is paired with an output as title. We create this dataset in eleven
languages including as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 1.4M.
### Supported Tasks and Leaderboards
**Tasks:** Headline Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '14',
'input': "अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।अरियाना ग्रांडे नई दिल्लीः अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।वहीं इस वीडियो पर कमेंट्स की बाढ़ आ गई है।गाने में मीन गर्ल्स, ब्रिंग इट ऑन, लीगली ब्लॉंड और 13 गोइंग 30 के कुछ फेमस सीन्स को दिखाया गया है।गाने में क्रिस जैनर का कैमियो भी है।बता दें अभी कुछ महीने पहले ही अरियाना के एक्स ब्वॉयफ्रेंड मैक मिलर का 26 साल की उम्र में निधन हो गया था।इस खबर को सुनकर अरियाना टूट सी गई थीं।उन्होंने सोशल मीडिया पर पोस्ट कर कई बार अपनी भावनाएं व्यक्त की।अरियाना ग्रांडे और रैपर मैक मिलर ने करीब 2 साल तक एक दूसरे को डेट किया।मैक के निधन की वजह ड्रग्स की ओवरडोज बताई गई।दोनों की मुलाकात साल 2012 में हुई थी।दोनों ने एक कंसर्ट में साथ कई गानों पर परफॉर्म भी किया था।जिसके बाद दोनों एक दूसरे को डेट करने लगे लेकिन नशे की लत के कारण अरियाना ने उनसे ब्रेकअप कर लिया।पर देश-विदेश की ताजा और स्पेशल स्टोरी पढ़ते हुए अपने आप को रखिए अप-टू-डेट।के लिए क्लिक करें सिनेमा सेक्शन",
'target': 'अरियाना ग्रांडे का नया गाना रिलीज, सोशल मीडिया पर वायरल',
'url': 'https://www.indiatv.in/entertainment/hollywood-ariana-grande-shatters-24-hour-views-record-612835'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: News article as input.
- `target (strings)`: Output as headline of the news article.
- `url (string)`: Source web link of the news article.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 29,631 | 14,592 | 14,808 |
Bengali | bn | 113,424 | 14,739 | 14,568 |
Gujarati | gu | 199,972 | 31,270 | 31,215 |
Hindi | hi | 208,221 | 44,738 | 44,514 |
Kannada | kn | 132,380 | 19,416 | 3,261 |
Malayalam | ml | 10,358 | 5,388 | 5,220 |
Marathi | mr | 114,042 | 14,253 | 14,340 |
Oriya | or | 58,225 | 7,484 | 7,137 |
Punjabi | pa | 48,441 | 6,108 | 6,086 |
Tamil | ta | 60,650 | 7,616 | 7,688 |
Telugu | te | 21,352 | 2,690 | 2,675 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
For hindi, web sources like [Dainik Bhaskar](https://www.bhaskar.com), [Naidunia](https://www.naidunia.com/), [NDTV](https://ndtv.in/), [Business Standard](https://hindi.business-standard.com/) and [IndiaTV](https://www.indiatv.in/). For other languages, modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) dataset.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | 7,540 | [
[
-0.03521728515625,
-0.050048828125,
-0.007587432861328125,
0.0274658203125,
-0.029541015625,
0.01910400390625,
-0.0307464599609375,
-0.023651123046875,
0.025390625,
0.01366424560546875,
-0.04193115234375,
-0.0535888671875,
-0.04962158203125,
0.02024841308593... |
GEM/xwikis | 2023-02-22T13:05:19.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:de",
"language:en",
"language:fr",
"language:cs",
"license:cc-by-sa-4.0",
"arxiv:2202.09583",
"region:us"
] | GEM | The XWikis Corpus (Perez-Beltrachini and Lapata, 2021) provides datasets with different language pairs and directions for cross-lingual abstractive document summarisation. This current version includes four languages: English, German, French, and Czech. The dataset is derived from Wikipedia. It is based on the observation that for a Wikipedia title, the lead section provides an overview conveying salient information, while the body provides detailed information. It thus assumes the body and lead paragraph as a document-summary pair. Furthermore, as a Wikipedia title can be associated with Wikipedia articles in various languages, 1) Wikipedia’s Interlanguage Links are used to find titles across languages and 2) given any two related Wikipedia titles, e.g., Huile d’Olive (French) and Olive Oil (English), the lead paragraph from one title is paired with the body of the other to derive cross-lingual pairs. | @inproceedings{perez2021models,
title={Models and Datasets for Cross-Lingual Summarisation},
author={Perez-Beltrachini, Laura and Lapata, Mirella},
booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
pages={9408--9423},
year={2021}
} | 2 | 117 | 2022-03-14T15:31:48 | ---
annotations_creators:
- found
language_creators:
- unknown
language:
- de
- en
- fr
- cs
license:
- cc-by-sa-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: xwikis
---
# Dataset Card for GEM/xwikis
## Dataset Description
- **Homepage:** https://github.com/lauhaide/clads
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2202.09583
- **Leaderboard:** N/A
- **Point of Contact:** Laura Perez-Beltrachini
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xwikis).
### Dataset Summary
The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/xwikis')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/xwikis).
#### website
[Github](https://github.com/lauhaide/clads)
#### paper
https://arxiv.org/abs/2202.09583
#### authors
Laura Perez-Beltrachini (University of Edinburgh)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/lauhaide/clads)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://arxiv.org/abs/2202.09583
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@InProceedings{clads-emnlp,
author = "Laura Perez-Beltrachini and Mirella Lapata",
title = "Models and Datasets for Cross-Lingual Summarisation",
booktitle = "Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing ",
year = "2021",
address = "Punta Cana, Dominican Republic",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Laura Perez-Beltrachini
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
lperez@ed.ac.uk
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`German`, `English`, `French`, `Czech`, `Chinese`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Cross-lingual and Multi-lingual single long input document abstractive summarisation.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)
### Dataset Structure
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
For each language pair and direction there exists a train/valid/test split.
The test split is a sample of size 7k from the intersection of titles existing in the four languages (cs,fr,en,de).
Train/valid are randomly split.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
- identification of entity salient information
- translation
- multi-linguality
- cross-lingual transfer, zero-shot, few-shot
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
ROUGE-1/2/L
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
other
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
found
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The input documents have section structure information.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by another rater
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Bilingual annotators assessed the content overlap of source document and target summaries.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`public domain`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
| 9,957 | [
[
-0.03076171875,
-0.049102783203125,
0.022430419921875,
0.0053863525390625,
-0.006931304931640625,
-0.0022411346435546875,
-0.033905029296875,
-0.036895751953125,
0.0299224853515625,
0.03558349609375,
-0.036651611328125,
-0.06109619140625,
-0.0440673828125,
0... |
nthngdy/oscar-small | 2023-03-08T09:57:45.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb"... | nthngdy | The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\ | @inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
} | 4 | 117 | 2022-03-23T09:26:03 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- arz
- as
- az
- azb
- ba
- be
- bg
- bn
- bo
- br
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mhr
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nds
- ne
- nl
- nn
- 'no'
- or
- os
- pa
- pl
- pnb
- ps
- pt
- ro
- ru
- sa
- sah
- sd
- sh
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- yi
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- oscar
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
pretty_name: OSCAR
---
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Using this dataset is equivalent to using a processed version of OSCAR legally speaking. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly inteded to pretrain language models and word represantations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed new pipeline derived from the [fastText's one](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation but bounding the number of possible parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/) so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus the goclassy's pipeline one does not have to wait for a whole WET file to download, decompress and classify in order to start downloading and processing the next one, a new file will start downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarted and are not classified. After all files are proccesed the deduplicated versions are constructed and everything is then splitted in shards and compressed.
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metdata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers has always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metatada of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files where each file consists of the plain text from multiple websites along its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, Personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, specially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of lanuages, the aim of the corpus is to make large amounts of data available to lower resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected on the models trained with it. Care is advised specially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limed both in performance and the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, specially for the lowest-resource langiuages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| 13,327 | [
[
-0.0278472900390625,
-0.0306549072265625,
0.01123809814453125,
0.002994537353515625,
-0.030242919921875,
0.0023708343505859375,
-0.0120086669921875,
-0.048736572265625,
0.046051025390625,
0.0352783203125,
-0.021331787109375,
-0.03607177734375,
-0.055572509765625... |
frgfm/imagewoof | 2022-12-11T22:26:18.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:apache-2.0",
"region:us"
] | frgfm | Imagewoof is a subset of 10 classes from Imagenet that aren't so
easy to classify, since they're all dog breeds. The breeds are:
Australian terrier, Border terrier, Samoyed, Beagle, Shih-Tzu,
English foxhound, Rhodesian ridgeback, Dingo, Golden retriever,
Old English sheepdog. | @software{Howard_Imagewoof_2019,
title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette#imagewoof}
} | 2 | 117 | 2022-07-26T15:21:56 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality: []
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- image-classification
task_ids: []
paperswithcode_id: imagewoof
pretty_name: Imagewoof
---
# Dataset Card for Imagewoof
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/fastai/imagenette#imagewoof
- **Repository:** https://github.com/fastai/imagenette
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagewoof
### Dataset Summary
A smaller subset of 10 classes from [Imagenet](https://huggingface.co/datasets/imagenet-1k#dataset-summary) that aren't so easy to classify, since they're all dog breeds.
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward), and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
A data point comprises an image URL and its classification label.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=320x320 at 0x19FA12186D8>,
'label': 'Beagle',
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image.
- `label`: the expected class label of the image.
### Data Splits
| |train|validation|
|---------|----:|---------:|
|imagewoof| 9025| 3929|
## Dataset Creation
### Curation Rationale
cf. https://huggingface.co/datasets/imagenet-1k#curation-rationale
### Source Data
#### Initial Data Collection and Normalization
Imagewoof is a subset of [ImageNet](https://huggingface.co/datasets/imagenet-1k). Information about data collection of the source data can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization).
### Annotations
#### Annotation process
cf. https://huggingface.co/datasets/imagenet-1k#annotation-process
#### Who are the annotators?
cf. https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators
### Personal and Sensitive Information
cf. https://huggingface.co/datasets/imagenet-1k#personal-and-sensitive-information
## Considerations for Using the Data
### Social Impact of Dataset
cf. https://huggingface.co/datasets/imagenet-1k#social-impact-of-dataset
### Discussion of Biases
cf. https://huggingface.co/datasets/imagenet-1k#discussion-of-biases
### Other Known Limitations
cf. https://huggingface.co/datasets/imagenet-1k#other-known-limitations
## Additional Information
### Dataset Curators
cf. https://huggingface.co/datasets/imagenet-1k#dataset-curators
and Jeremy Howard
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Howard_Imagewoof_2019,
title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette#imagewoof}
}
```
### Contributions
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward) and published on [Github](https://github.com/fastai/imagenette). It was then only integrated into HuggingFace Datasets by [@frgfm](https://huggingface.co/frgfm).
| 4,661 | [
[
-0.050323486328125,
-0.0240020751953125,
-0.0085906982421875,
0.01194000244140625,
-0.0274810791015625,
-0.0186004638671875,
-0.0119781494140625,
-0.057647705078125,
0.032745361328125,
0.0276336669921875,
-0.04364013671875,
-0.0582275390625,
-0.052642822265625,
... |
CShorten/CDC-COVID-FAQ | 2022-09-11T15:42:46.000Z | [
"license:afl-3.0",
"region:us"
] | CShorten | null | null | 1 | 117 | 2022-09-11T15:42:18 | ---
license: afl-3.0
---
Dataset extracted from https://www.cdc.gov/coronavirus/2019-ncov/hcp/faq.html#Treatment-and-Management.
| 129 | [
[
-0.007328033447265625,
-0.0494384765625,
0.0248565673828125,
0.00467681884765625,
-0.02044677734375,
0.0010356903076171875,
0.01169586181640625,
-0.025909423828125,
0.0159759521484375,
0.0814208984375,
-0.043853759765625,
-0.04962158203125,
-0.0273284912109375,
... |
bigbio/gad | 2022-12-22T15:25:28.000Z | [
"multilinguality:momolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | bigbio | A corpus identifying associations between genes and diseases by a semi-automatic
annotation procedure based on the Genetic Association Database | @article{Bravo2015,
doi = {10.1186/s12859-015-0472-9},
url = {https://doi.org/10.1186/s12859-015-0472-9},
year = {2015},
month = feb,
publisher = {Springer Science and Business Media {LLC}},
volume = {16},
number = {1},
author = {{\`{A}}lex Bravo and Janet Pi{\~{n}}ero and N{\'{u}}ria Queralt-Rosinach and Michael Rautschka and Laura I Furlong},
title = {Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research},
journal = {{BMC} Bioinformatics}
} | 1 | 117 | 2022-09-26T03:36:32 | ---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: momolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: GAD
homepage: https://geneticassociationdb.nih.gov/
bigbio_pubmed: true
bigbio_public: true
bigbio_tasks:
- TEXT_CLASSIFICATION
paperswithcode_id: gad
---
# Dataset Card for GAD
## Dataset Description
- **Homepage:** https://geneticassociationdb.nih.gov/
- **Pubmed:** True
- **Public:** True
- **Tasks:** TXTCLASS
A corpus identifying associations between genes and diseases by a semi-automatic
annotation procedure based on the Genetic Association Database.
## Note about homepage
The homepage for this dataset is no longer reachable, but the url is recorded here.
Data for this dataset was originally downloaded from a google drive
folder (the link used in the [BLURB benchmark data download script](https://microsoft.github.io/BLURB/submit.html).
However, we host the data in the huggingface hub for more reliable downloads and access.
## Citation Information
```
@article{Bravo2015,
doi = {10.1186/s12859-015-0472-9},
url = {https://doi.org/10.1186/s12859-015-0472-9},
year = {2015},
month = feb,
publisher = {Springer Science and Business Media {LLC}},
volume = {16},
number = {1},
author = {{\`{A}}lex Bravo and Janet Pi{\~{n}}ero and N{\'{u}}ria Queralt-Rosinach and Michael Rautschka and Laura I Furlong},
title = {Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research},
journal = {{BMC} Bioinformatics}
}
```
| 1,578 | [
[
-0.030364990234375,
-0.06951904296875,
0.0278472900390625,
0.0020961761474609375,
-0.01236724853515625,
0.01508331298828125,
-0.002773284912109375,
-0.048797607421875,
0.046600341796875,
0.01104736328125,
-0.036468505859375,
-0.06658935546875,
-0.04779052734375,... |
ashraq/tmdb-people-image | 2023-04-21T20:02:31.000Z | [
"region:us"
] | ashraq | null | null | 3 | 117 | 2022-12-02T17:34:52 | ---
dataset_info:
features:
- name: adult
dtype: bool
- name: also_known_as
dtype: string
- name: biography
dtype: string
- name: birthday
dtype: string
- name: deathday
dtype: string
- name: gender
dtype: int64
- name: homepage
dtype: string
- name: id
dtype: int64
- name: imdb_id
dtype: string
- name: known_for_department
dtype: string
- name: name
dtype: string
- name: place_of_birth
dtype: string
- name: popularity
dtype: float64
- name: profile_path
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 3749610460.6819267
num_examples: 116403
download_size: 3733145768
dataset_size: 3749610460.6819267
---
Data was obtained from [TMDB API](https://developers.themoviedb.org/3) | 815 | [
[
-0.0087432861328125,
-0.049835205078125,
0.0733642578125,
0.0204315185546875,
-0.0189056396484375,
0.035400390625,
0.04278564453125,
-0.00860595703125,
0.035919189453125,
0.051422119140625,
-0.055450439453125,
-0.0794677734375,
-0.0174407958984375,
-0.011901... |
argilla/medical-domain | 2022-12-07T11:57:58.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"region:us"
] | argilla | null | null | 18 | 117 | 2022-12-07T08:47:29 | ---
language:
- en
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 30903523
num_examples: 4966
download_size: 14846569
dataset_size: 30903523
---
# Dataset Card for "medical-domain"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/tboyle10/medicaltranscriptions
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Medical transcription data scraped from mtsamples.com
Medical data is extremely hard to find due to HIPAA privacy regulations. This dataset offers a solution by providing medical transcription samples.
This dataset contains sample medical transcriptions for various medical specialties.
### Languages
english
### Citation Information
Acknowledgements
Medical transcription data scraped from mtsamples.com
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | 1,707 | [
[
0.0099334716796875,
-0.038330078125,
0.0309906005859375,
0.005950927734375,
-0.031158447265625,
0.0130462646484375,
0.004344940185546875,
-0.0233306884765625,
0.04888916015625,
0.052154541015625,
-0.059326171875,
-0.06817626953125,
-0.053466796875,
0.0131683... |
keremberke/blood-cell-object-detection | 2023-01-18T20:37:18.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Biology",
"region:us"
] | keremberke | null | @misc{ blood-cell-detection-1ekwu_dataset,
title = { Blood Cell Detection Dataset },
type = { Open Source Dataset },
author = { Team Roboflow },
howpublished = { \\url{ https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu } },
url = { https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-18 },
} | 10 | 117 | 2022-12-31T22:57:22 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Biology
---
<div align="center">
<img width="640" alt="keremberke/blood-cell-object-detection" src="https://huggingface.co/datasets/keremberke/blood-cell-object-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['platelets', 'rbc', 'wbc']
```
### Number of Images
```json
{'train': 255, 'test': 36, 'valid': 73}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/blood-cell-object-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu/dataset/3](https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ blood-cell-detection-1ekwu_dataset,
title = { Blood Cell Detection Dataset },
type = { Open Source Dataset },
author = { Team Roboflow },
howpublished = { \\url{ https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu } },
url = { https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-18 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on November 4, 2022 at 7:46 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 364 images.
Cells are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 416x416 (Stretch)
No image augmentation techniques were applied.
| 2,160 | [
[
-0.0288238525390625,
-0.0174713134765625,
0.023895263671875,
-0.0145721435546875,
-0.04449462890625,
-0.0000034570693969726562,
0.010498046875,
-0.04034423828125,
0.0298309326171875,
0.01471710205078125,
-0.0369873046875,
-0.0640869140625,
-0.0278778076171875,
... |
Hello-SimpleAI/HC3-Chinese | 2023-01-21T13:11:49.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:sentence-similarity",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"ChatGPT",
"SimpleAI",
"Detection",
"OOD",
"arxi... | Hello-SimpleAI | Human ChatGPT Comparison Corpus (HC3) Chinese Version | \ | 102 | 117 | 2023-01-18T14:20:45 | ---
task_categories:
- text-classification
- question-answering
- sentence-similarity
- zero-shot-classification
language:
- en
- zh
tags:
- ChatGPT
- SimpleAI
- Detection
- OOD
size_categories:
- 10K<n<100K
license: cc-by-sa-4.0
---
# Human ChatGPT Comparison Corpus (HC3)
We propose the first human-ChatGPT comparison corpus, named **HC3** dataset.
This dataset is introduced in our paper:
- Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597)
Code, models and analysis are available on our GitHub:
- GitHub: [**Chatgpt-Comparison-Detection project** 🔬](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection)
# Dataset Copyright
If the source datasets used in this corpus has a specific license which is stricter than CC-BY-SA, our products follow the same. If not, they follow CC-BY-SA license.
See [dataset copyright](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection#dataset-copyright).
# Citation
Checkout this papaer [arxiv: 2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
author = "Guo, Biyang and
Zhang, Xin and
Wang, Ziyuan and
Jiang, Minqi and
Nie, Jinran and
Ding, Yuxuan and
Yue, Jianwei and
Wu, Yupeng",
journal={arXiv preprint arxiv:2301.07597}
year = "2023",
}
``` | 1,484 | [
[
-0.033355712890625,
-0.032318115234375,
0.00640869140625,
0.0167388916015625,
-0.01236724853515625,
0.007328033447265625,
-0.021270751953125,
-0.0364990234375,
-0.005474090576171875,
0.027557373046875,
-0.0168609619140625,
-0.04827880859375,
-0.033782958984375,
... |
mediabiasgroup/mbib-base | 2023-08-03T01:03:05.000Z | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc",
"media",
"mediabias",
"media-bias",
"media bias",
"region:us"
] | mediabiasgroup | null | null | 5 | 117 | 2023-02-06T13:51:22 | ---
license: cc
task_categories:
- text-classification
language:
- en
tags:
- media
- mediabias
- media-bias
- media bias
size_categories:
- 1M<n<10M
---
# Dataset Card for Media-Bias-Identification-Benchmark
## Table of Contents
- [Dataset Card for Media-Bias-Identification-Benchmark](#dataset-card-for-mbib)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Tasks and Information](#tasks-and-information)
- [Baseline](#baseline)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [cognitive-bias](#cognitive-bias)
- [Data Fields](#data-fields)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/Media-Bias-Group/Media-Bias-Identification-Benchmark
- **Repository:** https://github.com/Media-Bias-Group/Media-Bias-Identification-Benchmark
- **Paper:** https://doi.org/10.1145/3539618.3591882
- **Point of Contact:** [Martin Wessel](mailto:martin.wessel@uni-konstanz.de)
### Baseline
<table>
<tr><td><b>Task</b></td><td><b>Model</b></td><td><b>Micro F1</b></td><td><b>Macro F1</b></td></tr>
<td>cognitive-bias</td> <td> ConvBERT/ConvBERT</td> <td>0.7126</td> <td> 0.7664</td></tr>
<td>fake-news</td> <td>Bart/RoBERTa-T</td> <td>0.6811</td> <td> 0.7533</td> </tr>
<td>gender-bias</td> <td> RoBERTa-T/ELECTRA</td> <td>0.8334</td> <td>0.8211</td> </tr>
<td>hate-speech</td> <td>RoBERTA-T/Bart</td> <td>0.8897</td> <td> 0.7310</td> </tr>
<td>linguistic-bias</td> <td> ConvBERT/Bart </td> <td> 0.7044 </td> <td> 0.4995 </td> </tr>
<td>political-bias</td> <td> ConvBERT/ConvBERT </td> <td> 0.7041 </td> <td> 0.7110 </td> </tr>
<td>racial-bias</td> <td> ConvBERT/ELECTRA </td> <td> 0.8772 </td> <td> 0.6170 </td> </tr>
<td>text-leve-bias</td> <td> ConvBERT/ConvBERT </td> <td> 0.7697</td> <td> 0.7532 </td> </tr>
</table>
### Languages
All datasets are in English
## Dataset Structure
### Data Instances
#### cognitive-bias
An example of one training instance looks as follows.
```json
{
"text": "A defense bill includes language that would require military hospitals to provide abortions on demand",
"label": 1
}
```
### Data Fields
- `text`: a sentence from various sources (eg., news articles, twitter, other social media).
- `label`: binary indicator of bias (0 = unbiased, 1 = biased)
## Considerations for Using the Data
### Social Impact of Dataset
We believe that MBIB offers a new common ground
for research in the domain, especially given the rising amount of
(research) attention directed toward media bias
### Citation Information
```
@inproceedings{
title = {Introducing MBIB - the first Media Bias Identification Benchmark Task and Dataset Collection},
author = {Wessel, Martin and Spinde, Timo and Horych, Tomáš and Ruas, Terry and Aizawa, Akiko and Gipp, Bela},
year = {2023},
note = {[in review]}
}
``` | 3,291 | [
[
-0.058807373046875,
-0.041473388671875,
0.01139068603515625,
0.006168365478515625,
-0.01206207275390625,
0.0162353515625,
-0.0220489501953125,
-0.01334381103515625,
0.0202789306640625,
0.008392333984375,
-0.058502197265625,
-0.05950927734375,
-0.055908203125,
... |
HuggingFaceH4/stack-exchange-preferences | 2023-03-08T03:37:53.000Z | [
"task_categories:question-answering",
"size_categories:10M<n<100M",
"language:en",
"license:cc-by-sa-4.0",
"RLHF",
"preferences",
"human-feedback",
"Stack Exchange",
"arxiv:2112.00861",
"region:us"
] | HuggingFaceH4 | null | null | 75 | 117 | 2023-02-11T03:24:28 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
pretty_name: H4 Stack Exchange Preferences Dataset
tags:
- RLHF
- preferences
- human-feedback
- Stack Exchange
download_size: 22132072448
size_categories:
- 10M<n<100M
---
# Dataset Card for H4 Stack Exchange Preferences Dataset
## Dataset Description
- **Homepage:** https://archive.org/details/stackexchange
- **Repository:** (private for now) https://github.com/huggingface/h4
- **Point of Contact:** Nathan Lambert, nathan@huggingface.co
- **Size of downloaded dataset:** 22.13 GB
- **Number of instructions:** 10,741,532
### Dataset Summary
This dataset contains questions and answers from the [Stack Overflow Data Dump](https://archive.org/details/stackexchange) for the purpose of **preference model training**.
Importantly, the questions have been filtered to fit the following criteria for preference models (following closely from [Askell et al. 2021](https://arxiv.org/abs/2112.00861)): *have >=2 answers*.
This data could also be used for instruction fine-tuning and language model training.
The questions are grouped with answers that are assigned a score corresponding to the Anthropic paper:
```
score = log2 (1 + upvotes) rounded to the nearest integer, plus 1 if the answer was accepted by the questioner (we assign a score of −1 if the number of upvotes is negative).
```
Some important notes when using this dataset for preference model pretraining (PMP), which can be ignored for other uses:
* the data will likely need to be filtered more due to matching scores.
* see section 4.1 of Askel et al 2021 for instructions on using each pair of samples twice via the following `binarization` (for better pre-training initialization):
```
Subsequently, we created a binary dataset by applying a ‘binarization’ procedure to the ranked dataset. That
is, for every ranked pair A > B, we transform it into two independent binary comparisons:
GOOD:A > BAD:A
BAD:B > GOOD:B
```
To see all the stackexchanges used in this data, please see [this file](https://huggingface.co/datasets/HuggingFaceH4/pmp-stack-exchange/blob/main/stack_exchanges.json).
Unfortunately, sharing the binarized data directly without metadata violates the license, so we have shared a script for binarization.
### Using the data
Here is a script from our internal tooling used to create a binarized dataset:
```
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
from argparse import ArgumentParser
from pathlib import Path
import numpy as np
from datasets import Dataset, concatenate_datasets, load_dataset
from h4.data.utils import save_dataset_shards
H4_DIR = Path(__file__).resolve().parents[3]
DATA_DIR = H4_DIR / "data"
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--debug", action="store_true", help="Added print statements / limit data size for debugging")
parser.add_argument(
"--output_dir",
default=f"{DATA_DIR}/pmp-binarized",
type=str,
help="Where to save the processed dataset",
)
parser.add_argument(
"--exchange_name",
type=str,
default=None,
help="Optional argument to specify a specific subsection of the dataset",
)
parser.add_argument(
"--binary_score", type=int, default=8, help="Score assigned to binarized pairs for preference data."
)
parser.add_argument(
"--stream_data", action="store_true", help="Optionally stream data, which can be useful with weaker computers"
)
parser.set_defaults(debug=False, stream_data=False) # default will process full dataset
args = parser.parse_args()
specific_exchange = args.exchange_name
stream_dataset = args.stream_data
binary_score = args.binary_score
if specific_exchange:
data_dir = "data/" + args.exchange_name
else:
data_dir = None
if args.debug:
data_len_limit = 10000
else:
data_len_limit = np.inf
dataset = load_dataset(
"HuggingFaceH4/pmp-stack-exchange",
data_dir=data_dir,
split="train",
streaming=stream_dataset,
)
pmp_data = []
for i, d in enumerate(iter(dataset)):
# check debug limit, quit if in debug mode (don't save)
if i > data_len_limit:
print("Early exit for debug mode!")
print(pmp_data)
break
question = d["question"]
answers = d["answers"]
num_answers = len(answers)
answer_scores = [a["pm_score"] for a in answers]
if len(np.unique(answer_scores)) < 2:
print(f"PM Scores are {answer_scores}, skipping this question {i}")
else:
# Sample 2 unique scores for binarization
dif_scores = False
while not dif_scores:
# print("infinite loop...?")
two_answers = random.sample(answers, 2)
if two_answers[0]["pm_score"] != two_answers[1]["pm_score"]:
dif_scores = True
answer_0 = two_answers[0]
answer_1 = two_answers[1]
text_0 = "Question: " + question + "\n" + "Answer: " + answer_0["text"]
text_1 = "Question: " + question + "\n" + "Answer: " + answer_1["text"]
score_0 = binary_score
score_1 = binary_score
pmp_data.append({"context": text_0, "score": score_0})
pmp_data.append({"context": text_1, "score": score_1})
# Save binarized data
sublist_len = 100000
print(f"Dataset length is {len(pmp_data)}")
# bypass known issue in arrow https://issues.apache.org/jira/browse/ARROW-17137
print(f"Processed dataset length > {sublist_len}, processing to HF dataset in chunks")
chunks = [pmp_data[x : x + sublist_len] for x in range(0, len(pmp_data), sublist_len)]
ds_chunks = [Dataset.from_list(ch) for ch in chunks]
ds = concatenate_datasets(ds_chunks)
save_dataset_shards(ds, args.output_dir, subset="stackexchange", shard_size="100MB")
```
### Languages
This is intended to be English only, thought other languages may be present. Some Stack Exchanges that are omitted include:
```
spanish: es.meta.stackoverflow.com, es.stackoverflow.com
japanese: ja.meta.stackoverflow.com, ja.stackoverflow.com
portugese: pt.stackoverflow.com, pt.meta.stackoverflow.com
russian: ru.stackoverflow, ru.meta.stackoverflow
```
### Licensing Information
License: https://creativecommons.org/licenses/by-sa/4.0/
The cc-by-sa 4.0 licensing, while intentionally permissive, does require attribution:
Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).
Specifically the attribution requirements are as follows:
1. Visually display or otherwise indicate the source of the content as coming from the Stack Exchange Network. This requirement is satisfied with a discreet text blurb, or some other unobtrusive but clear visual indication.
2. Ensure that any Internet use of the content includes a hyperlink directly to the original question on the source site on the Network (e.g., http://stackoverflow.com/questions/12345)
3. Visually display or otherwise clearly indicate the author names for every question and answer used
4. Ensure that any Internet use of the content includes a hyperlink for each author name directly back to his or her user profile page on the source site on the Network (e.g., http://stackoverflow.com/users/12345/username), directly to the Stack Exchange domain, in standard HTML (i.e. not through a Tinyurl or other such indirect hyperlink, form of obfuscation or redirection), without any “nofollow” command or any other such means of avoiding detection by search engines, and visible even with JavaScript disabled.
For more information, see the Stack Exchange Terms of Service.
### Citation Information
```
@online{h4stackexchange,
author = {Lambert, Nathan and Tunstall, Lewis and Rajani, Nazneen and Thrush, Tristan},
title = {HuggingFace H4 Stack Exchange Preference Dataset},
year = 2023,
url = {https://huggingface.co/datasets/HuggingFaceH4/stack-exchange-preferences},
}
``` | 8,744 | [
[
-0.046478271484375,
-0.0548095703125,
0.0196075439453125,
0.02642822265625,
-0.0310211181640625,
-0.0137939453125,
-0.0114593505859375,
-0.031707763671875,
0.02593994140625,
0.040557861328125,
-0.03802490234375,
-0.0450439453125,
-0.034820556640625,
0.026901... |
jinaai/negation-dataset | 2023-08-04T10:09:02.000Z | [
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"finetuner",
"arxiv:2307.11224",
"region:us"
] | jinaai | null | null | 7 | 117 | 2023-07-13T13:23:45 |
---
tags:
- finetuner
language: en
license: apache-2.0
dataset_info:
features:
- name: anchor
dtype: string
- name: entailment
dtype: string
- name: negative
dtype: string
splits:
- name: train
num_examples: 10000
- name: test
num_examples: 500
download_size: 1467517
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
---
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The data offered by Jina AI, Finetuner team.</b>
</p>
## Summary
This dataset is an English-language dataset based on the [SNLI](https://huggingface.co/datasets/snli) dataset.
It contains negations of samples from SNLI.
## Instances
Each data point consists of a triplet ('anchor', 'entailment', 'negative') of strings, where ('anchor', 'entailment') are positive pairs
taken from SNLI, and 'negative' contradicts both 'anchor' and 'entailment'.
## Fields
- 'anchor': string, some statement
- 'entailment': string, a statement which follows from 'anchor', but is usually syntactically dissimilar
- 'negative': string, a statement contradicting 'anchor' and 'entailment'. Syntactically very similar to 'entailment'
## Splits
| | train | test |
|------------|-------|------|
| # of items | 10000 | 500 |
## Source
Positive pairs were sampled from the [SNLI](https://huggingface.co/datasets/snli) dataset and negative samples were created using GPT-3.5
and GPT-4.
## Example Usage
```python
from datasets import load_dataset
from pprint import pprint
dataset = load_dataset('jinaai/negation-dataset')
pprint(dataset['train'][:5])
```
Output:
```python
{'anchor': ['Two young girls are playing outside in a non-urban environment.',
'A man with a red shirt is watching another man who is standing on '
'top of a attached cart filled to the top.',
'A man in a blue shirt driving a Segway type vehicle.',
'A woman holds her mouth wide open as she is placing a stack of '
'crackers in.',
'A group of people standing on a rock path.'],
'entailment': ['Two girls are playing outside.',
'A man is standing on top of a cart.',
'A person is riding a motorized vehicle.',
'There is a woman eating crackers.',
'A group of people are hiking.'],
'negative': ['Two girls are not playing outside.',
'A man is not standing on top of a cart.',
'A person is not riding a motorized vehicle.',
'There is no woman eating crackers.',
'A group of people are not hiking.']}
```
## Models
[Jina AI's](https://jina.ai) open source embedding models ([small](https://huggingface.co/jinaai/jina-embedding-s-en-v1),
[base](https://huggingface.co/jinaai/jina-embedding-b-en-v1) and
[large](https://huggingface.co/jinaai/jina-embedding-l-en-v1)) were all fine-tuned on the negation dataset.
## Licensing Information
This work is licensed under the Apache License, Version 2.0.
## Contributors
Thanks to contributors from [Jina AI](https://jina.ai) for adding this dataset.
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find this dataset useful in your research, please cite the following paper:
``` latex
@misc{günther2023jina,
title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models},
author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
year={2023},
eprint={2307.11224},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
| 3,986 | [
[
-0.02978515625,
-0.0882568359375,
0.03173828125,
0.0177459716796875,
-0.0182952880859375,
-0.03167724609375,
-0.01393890380859375,
-0.012908935546875,
0.041656494140625,
0.0156402587890625,
-0.042633056640625,
-0.03955078125,
-0.034454345703125,
0.0171661376... |
erhwenkuo/hh_rlhf-chinese-zhtw | 2023-10-04T23:24:34.000Z | [
"task_categories:reinforcement-learning",
"language:zh",
"license:mit",
"arxiv:2204.05862",
"region:us"
] | erhwenkuo | null | null | 0 | 117 | 2023-10-04T23:11:44 | ---
dataset_info:
features:
- name: context
list:
- name: role
dtype: string
- name: text
dtype: string
- name: chosen
struct:
- name: role
dtype: string
- name: text
dtype: string
- name: rejected
struct:
- name: role
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 302431699
num_examples: 344317
download_size: 178897699
dataset_size: 302431699
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- reinforcement-learning
language:
- zh
---
# Dataset Card for "hh_rlhf-chinese-zhtw"
此數據集合併了下列的資料:
1. 關於有用且無害的人類偏好數據,來自 [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862)。這些數據旨在為後續 RLHF 訓練訓練偏好(或獎勵)模型。這些資料*不*用於對話代理人的監督訓練。根據這些資料訓練對話代理可能會導致有害的模型,這種情況應該避免。
3. 人工生成並帶註釋的紅隊對話,來自[減少危害的紅隊語言模型:方法、擴展行為和經驗教訓](https://www.anthropic.com/red_teaming.pdf)。這些數據旨在了解眾包紅隊如何建模以及哪些類型的紅隊攻擊成功或失敗。這些數據*不*用於微調或偏好建模(使用上面的數據進行偏好建模)。這些數據是從上述無害偏好建模數據導出的對話的完整轉錄本,其中僅將所選響應合併到整個轉錄本中。此外,文字記錄也透過人工和自動測量來標註整個對話的危害程度。
## 特別注意
數據(尤其是無害偏好數據和紅隊數據)包含可能令人反感或令人不安的內容。主題包括但不限於歧視性語言以及對虐待、暴力、自殘、剝削和其他可能令人不安的主題的討論。
請僅根據您個人的風險承受能力來處理資料。這些數據旨在用於研究目的,特別是可以使模型「減少」危害的研究。
如上所述,這些資料「不」該用於訓練對話代理,因為這可能會導致有害的模型行為。
## 原始數據
原作者在 GitHub 上託管此資料集:https://github.com/anthropics/hh-rlhf
## 轉譯來源
本數據集來自 [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn) 并使用了 OpenCC 來進行簡繁轉譚。
基於 Anthropic 論文 Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback 開源的 helpful 和 harmless 數據,使用翻譯工具進行了翻譯。
- `hh_rlhf_train.jsonl` 合併中英文訓練集資料 清洗過後 17 萬條
- `hh_rlhf_test.jsonl` 合併中英文測試集資料 清洗過後 9 千條
- `harmless_base_cn_train.jsonl` 42394 條
- `harmless_base_cn_test.jsonl` 2304 條
- `helpful_base_cn_train.jsonl` 43722 條
- `helpful_base_cn_test.jsonl` 2346 條
| 1,899 | [
[
-0.031768798828125,
-0.04425048828125,
-0.0029850006103515625,
0.019561767578125,
-0.0255126953125,
-0.01433563232421875,
-0.01148223876953125,
-0.0367431640625,
0.0239105224609375,
0.0200347900390625,
-0.059661865234375,
-0.04742431640625,
-0.038970947265625,
... |
liyucheng/winogrande_val | 2023-10-17T14:56:56.000Z | [
"region:us"
] | liyucheng | null | null | 0 | 117 | 2023-10-17T14:56:53 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 196015
num_examples: 1267
download_size: 94663
dataset_size: 196015
---
# Dataset Card for "winogrande_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 499 | [
[
-0.038787841796875,
-0.0084991455078125,
0.01024627685546875,
0.0134735107421875,
-0.01390838623046875,
-0.0037784576416015625,
0.0269927978515625,
-0.017059326171875,
0.056182861328125,
0.035797119140625,
-0.04656982421875,
-0.0487060546875,
-0.05340576171875,
... |
SetFit/ethos_binary | 2022-01-16T17:54:54.000Z | [
"region:us"
] | SetFit | null | null | 0 | 116 | 2022-03-02T23:29:22 |
This is the binary split of [ethos](https://huggingface.co/datasets/ethos), split into train and test.
It contains comments annotated for hate speech or not. | 162 | [
[
-0.057830810546875,
-0.052825927734375,
0.0067138671875,
0.002590179443359375,
-0.031280517578125,
0.017364501953125,
0.00603485107421875,
-0.058135986328125,
0.0601806640625,
0.033203125,
-0.0555419921875,
-0.01288604736328125,
-0.06353759765625,
0.00846862... |
lintang/numerical_reasoning_arithmetic | 2023-01-09T06:33:43.000Z | [
"region:us"
] | lintang | Generated dataset for testing numerical reasoning | \ | 0 | 116 | 2023-01-05T08:48:37 | # Numerical Reasoning
| 22 | [
[
-0.042510986328125,
-0.0166778564453125,
0.059417724609375,
0.06768798828125,
-0.034881591796875,
0.004253387451171875,
0.0185699462890625,
0.01393890380859375,
-0.00827789306640625,
0.042877197265625,
-0.020050048828125,
-0.0250396728515625,
-0.034088134765625,... |
Muennighoff/babi | 2023-02-12T13:34:24.000Z | [
"region:us"
] | Muennighoff | null | null | 0 | 116 | 2023-02-12T09:19:00 |
Creation (Copied & adapted from https://github.com/stanford-crfm/helm/blob/0eaaa62a2263ddb94e9850ee629423b010f57e4a/src/helm/benchmark/scenarios/babi_qa_scenario.py):
```python
!wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz
!tar -xf tasks_1-20_v1-2.tar.gz
import json
from typing import List
tasks = list(range(1, 20))
splits = ["train", "valid", "test"]
def process_path(path: str) -> str:
"""Turn a path string (task 19) from the original format 's,w' to a verbal model-friendly format 'south west'"""
steps: List[str] = path.split(",")
directions = {"s": "south", "n": "north", "e": "east", "w": "west"}
path = " ".join([directions[step] for step in steps])
return path
for split in splits:
with open(f"babi_{split}.jsonl", "w") as f_base:
for task in tasks:
split_path: str = f"./tasks_1-20_v1-2/en-valid/qa{task}_{split}.txt"
with open(split_path, "r") as f:
facts = list(f)
story: List[str] = []
for fact in facts:
fid = int(fact.split(" ")[0])
if fid == 1:
story = []
fact = " ".join(fact.split(" ")[1:])
is_question = "?" in fact
if is_question:
question, answer = fact.split("\t")[:2]
question, answer = question.strip(), answer.strip()
# All tasks except task 19 have a verbal single-word answer (e.g. kitchen, apple, yes).
# Task 19 (path finding) has a non verbal answer format (
if task == 19:
answer = process_path(answer)
f_base.write(json.dumps({
"passage": "".join(story),
"question": question,
"answer": answer,
"task": task,
}) + "\n")
if "?" in story:
print("STORY", "".join(story))
else:
story.append(fact)
``` | 2,199 | [
[
-0.00997161865234375,
-0.062225341796875,
0.04547119140625,
0.036376953125,
-0.0094451904296875,
0.00386810302734375,
-0.0216522216796875,
-0.0191650390625,
-0.00013446807861328125,
0.027130126953125,
-0.0455322265625,
-0.044219970703125,
-0.04473876953125,
... |
thu-coai/esconv | 2023-07-15T08:26:36.000Z | [
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2106.01144",
"region:us"
] | thu-coai | null | null | 0 | 116 | 2023-05-08T09:18:06 | ---
license: cc-by-nc-4.0
language:
- en
---
The ESConv dataset. [GitHub repo](https://github.com/thu-coai/Emotional-Support-Conversation). [Original paper](https://arxiv.org/abs/2106.01144).
```bib
@inproceedings{liu-etal-2021-towards,
title={Towards Emotional Support Dialog Systems},
author={Liu, Siyang and
Zheng, Chujie and
Demasi, Orianna and
Sabour, Sahand and
Li, Yu and
Yu, Zhou and
Jiang, Yong and
Huang, Minlie},
booktitle={ACL},
year={2021}
}
``` | 510 | [
[
-0.021148681640625,
-0.037567138671875,
0.02227783203125,
0.0137939453125,
0.006061553955078125,
-0.01508331298828125,
-0.013153076171875,
-0.036407470703125,
0.041717529296875,
0.0296630859375,
-0.07012939453125,
-0.03564453125,
-0.022064208984375,
0.017730... |
clarin-knext/msmarco-pl | 2023-06-07T08:22:03.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 116 | 2023-06-06T22:02:28 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 201 | [
[
-0.0153961181640625,
-0.0628662109375,
0.03546142578125,
0.0164031982421875,
-0.0221710205078125,
-0.0103607177734375,
-0.01160430908203125,
-0.034515380859375,
-0.0013275146484375,
0.0286102294921875,
-0.03826904296875,
-0.048126220703125,
-0.0290069580078125,
... |
rafaelpadilla/coco2017 | 2023-08-11T23:02:22.000Z | [
"task_categories:object-detection",
"annotations_creators:expert-generated",
"size_categories:100K<n<1M",
"language:en",
"arxiv:1405.0312",
"region:us"
] | rafaelpadilla | This dataset contains all COCO 2017 images and annotations split in training (118287 images) and validation (5000 images). | @article{DBLP:journals/corr/LinMBHPRDZ14,
author = {Tsung{-}Yi Lin and
Michael Maire and
Serge J. Belongie and
Lubomir D. Bourdev and
Ross B. Girshick and
James Hays and
Pietro Perona and
Deva Ramanan and
Piotr Doll{\'{a}}r and
C. Lawrence Zitnick},
title = {Microsoft {COCO:} Common Objects in Context},
journal = {CoRR},
volume = {abs/1405.0312},
year = {2014},
url = {http://arxiv.org/abs/1405.0312},
archivePrefix = {arXiv},
eprint = {1405.0312},
timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 1 | 116 | 2023-07-19T19:30:44 | ---
pretty_name: COCO2017
annotations_creators:
- expert-generated
size_categories:
- 100K<n<1M
language:
- en
task_categories:
- object-detection
---
# Dataset Card for Dataset Name
This dataset includes **COCO 2017** only.
COCO 2014 and 2015 will be included soon.
## Dataset Description
- **Homepage:** https://cocodataset.org/
- **Repository:** https://github.com/cocodataset/cocoapi
- **Paper:** [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
### Dataset Summary
COCO (Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset. It contains over 200,000 labeled images with over 80 category labels. It includes complex, everyday scenes with common objects in their natural context.
This dataset covers only the "object detection" part of the COCO dataset. But some features and specifications for the full COCO dataset:
- Object segmentation
- Recognition in context
- Superpixel stuff segmentation
- 330K images (>200K labeled)
- 1.5 million object instances
- 80 object categories
- 91 stuff categories
- 5 captions per image
- 250,000 people with keypoints
### Data Splits
- **Training set ("train")**: 118287 images annotated with 860001 bounding boxes in total.
- **Validation set ("val")**: 5000 images annotated with 36781 bounding boxes in total.
- **92 classes**: "None", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "street sign", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "hat", "backpack", "umbrella", "shoe", "eye glasses", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "plate", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "mirror", "dining table", "window", "desk", "toilet", "door", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "blender", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush", "hair brush"
- **But only 80 classes have with annotations**: "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"
### Boxes format:
For the object detection set of COCO dataset, the ground-truth bounding boxes are provided in the following format: `x, y, width, height` in absolute coordinates.
### Curation Rationale
COCO dataset was curated with the goal of advancing the state of the art in many tasks, such as object detection, dense pose, keypoints, segmentation and image classification.
### Licensing Information
The annotations in this dataset belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.
Mode details at: https://cocodataset.org/#termsofuse
### Loading dataset
You can load COCO 2017 dataset by calling:
```
from datasets import load_dataset
# Full dataset
dataset = load_dataset("rafaelpadilla/coco2017")
print(dataset)
>> DatasetDict({
>> train: Dataset({
>> features: ['image', 'image_id', 'objects'],
>> num_rows: 118287
>> })
>> val: Dataset({
>> features: ['image', 'image_id', 'objects'],
>> num_rows: 5000
>> })
>> })
# Training set only
dataset = load_dataset("rafaelpadilla/coco2017", split="train")
# Validation set only
dataset = load_dataset("rafaelpadilla/coco2017", split="val")
```
### COCODataset Class
We offer the dataset class `COCODataset` that extends VisionDataset to represents images and annotations of COCO. To use it, you need to install coco2017 package. For that, follow the steps below:
1. Create and activate an environment:
```
conda create -n coco2017 python=3.11
conda activate coco2017
```
2. Install cocodataset package:
```
pip install git+https://huggingface.co/datasets/rafaelpadilla/coco2017@main
```
or alternatively:
```
git clone https://huggingface.co/datasets/rafaelpadilla/coco2017
cd coco2017
pip install .
```
3. Now you can import `COCODataset` class into your Python code by:
```
from cocodataset import COCODataset
```
### Citation Information
@inproceedings{lin2014microsoft,
title={Microsoft coco: Common objects in context},
author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
booktitle={Computer Vision--ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13},
pages={740--755},
year={2014},
organization={Springer}
}
### Contributions
Tsung-Yi Lin Google Brain
Genevieve Patterson MSR, Trash TV
Matteo R. Ronchi Caltech
Yin Cui Google
Michael Maire TTI-Chicago
Serge Belongie Cornell Tech
Lubomir Bourdev WaveOne, Inc.
Ross Girshick FAIR
James Hays Georgia Tech
Pietro Perona Caltech
Deva Ramanan CMU
Larry Zitnick FAIR
Piotr Dollár FAIR
| 5,991 | [
[
-0.060150146484375,
-0.0499267578125,
0.00740814208984375,
0.00823974609375,
-0.0300445556640625,
0.01338958740234375,
-0.00737762451171875,
-0.052337646484375,
0.0172576904296875,
0.032989501953125,
-0.0322265625,
-0.056610107421875,
-0.043792724609375,
0.0... |
sibozhu/paddington_en | 2023-10-04T03:08:51.000Z | [
"region:us"
] | sibozhu | null | null | 0 | 116 | 2023-10-04T03:08:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
roborovski/diffusiondb-seq2seq | 2023-10-10T03:04:26.000Z | [
"region:us"
] | roborovski | null | null | 0 | 116 | 2023-10-10T02:25:27 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: descriptor
dtype: string
splits:
- name: train
num_bytes: 10079006
num_examples: 93834
download_size: 6236928
dataset_size: 10079006
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "diffusiondb-seq2seq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 492 | [
[
-0.045654296875,
-0.03851318359375,
0.0281982421875,
0.0261077880859375,
-0.01062774658203125,
-0.0018072128295898438,
0.03546142578125,
0.01287841796875,
0.061065673828125,
0.0293426513671875,
-0.0498046875,
-0.049591064453125,
-0.0504150390625,
-0.03665161... |
arsentd_lev | 2023-01-25T14:26:36.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:topic-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:apc",
"language:ajp",
"lic... | null | The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria. | @article{ArSenTDLev2018,
title={ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets},
author={Baly, Ramy, and Khaddaj, Alaa and Hajj, Hazem and El-Hajj, Wassim and Bashir Shaban, Khaled},
journal={OSACT3},
pages={},
year={2018}} | 3 | 115 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- apc
- ajp
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- topic-classification
paperswithcode_id: arsentd-lev
pretty_name: ArSenTD-LEV
dataset_info:
features:
- name: Tweet
dtype: string
- name: Country
dtype:
class_label:
names:
'0': jordan
'1': lebanon
'2': syria
'3': palestine
- name: Topic
dtype: string
- name: Sentiment
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
'3': very_negative
'4': very_positive
- name: Sentiment_Expression
dtype:
class_label:
names:
'0': explicit
'1': implicit
'2': none
- name: Sentiment_Target
dtype: string
splits:
- name: train
num_bytes: 1233980
num_examples: 4000
download_size: 392666
dataset_size: 1233980
---
# Dataset Card for ArSenTD-LEV
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ArSenTD-LEV homepage](http://oma-project.com/)
- **Paper:** [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830)
### Dataset Summary
The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria.
### Supported Tasks and Leaderboards
Sentriment analysis
### Languages
Arabic Levantine Dualect
## Dataset Structure
### Data Instances
{'Country': 0,
'Sentiment': 3,
'Sentiment_Expression': 0,
'Sentiment_Target': 'هاي سوالف عصابات ارهابية',
'Topic': 'politics',
'Tweet': 'ثلاث تفجيرات في #كركوك الحصيلة قتيل و 16 جريح بدأت اكلاوات كركوك كانت امان قبل دخول القوات العراقية ، هاي سوالف عصابات ارهابية'}
### Data Fields
`Tweet`: the text content of the tweet \
`Country`: the country from which the tweet was collected ('jordan', 'lebanon', 'syria', 'palestine')\
`Topic`: the topic being discussed in the tweet (personal, politics, religion, sports, entertainment and others) \
`Sentiment`: the overall sentiment expressed in the tweet (very_negative, negative, neutral, positive and very_positive) \
`Sentiment_Expression`: the way how the sentiment was expressed: explicit, implicit, or none (the latter when sentiment is neutral) \
`Sentiment_Target`: the segment from the tweet to which sentiment is expressed. If sentiment is neutral, this field takes the 'none' value.
### Data Splits
No standard splits are provided
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Make sure to read and agree to the [license](http://oma-project.com/ArSenL/ArSenTD_Lev_Intro)
### Citation Information
```
@article{baly2019arsentd,
title={Arsentd-lev: A multi-topic corpus for target-based sentiment analysis in arabic levantine tweets},
author={Baly, Ramy and Khaddaj, Alaa and Hajj, Hazem and El-Hajj, Wassim and Shaban, Khaled Bashir},
journal={arXiv preprint arXiv:1906.01830},
year={2019}
}
```
### Contributions
Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset. | 5,022 | [
[
-0.0447998046875,
-0.03436279296875,
0.013153076171875,
0.02886962890625,
-0.0289306640625,
0.0189666748046875,
-0.0204010009765625,
-0.00659942626953125,
0.03277587890625,
0.019134521484375,
-0.04058837890625,
-0.0980224609375,
-0.057342529296875,
-0.002155... |
sepedi_ner | 2023-01-25T14:44:06.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:nso",
"license:other",
"region:us"
] | null | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | @inproceedings{sepedi_ner,
author = {D.J. Prinsloo and
Roald Eiselen},
title = {NCHLT Sepedi Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/328},
} | 1 | 115 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- nso
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Sepedi NER Corpus
license_details: Creative Commons Attribution 2.5 South Africa License
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': OUT
'1': B-PERS
'2': I-PERS
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: sepedi_ner
splits:
- name: train
num_bytes: 3378134
num_examples: 7117
download_size: 22077376
dataset_size: 3378134
---
# Dataset Card for Sepedi NER Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sepedi Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/328)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za)
### Dataset Summary
The Sepedi Ner Corpus is a Sepedi dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African goverment domain and crawled from gov.za websites. It was created to support NER task for Sepedi language. The dataset uses CoNLL shared task annotation standards.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Sesotho sa Leboa (Sepedi).
## Dataset Structure
### Data Instances
A data point consists of sentences seperated by empty line and tab-seperated tokens and tags.
```
{'id': '0',
'ner_tags': [0, 0, 0, 0, 0],
'tokens': ['Maikemišetšo', 'a', 'websaete', 'ya', 'ditirelo']
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity.
### Data Splits
The data was not split.
## Dataset Creation
### Curation Rationale
The data was created to help introduce resources to new language - sepedi.
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The data is based on South African government domain and was crawled from gov.za websites.
#### Who are the source language producers?
The data was produced by writers of South African government websites - gov.za
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The data was annotated during the NCHLT text resource development project.
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).
See: [more information](http://www.nwu.ac.za/ctext)
### Licensing Information
The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode)
### Citation Information
```
@inproceedings{sepedi_ner_corpus,
author = {D.J. Prinsloo and
Roald Eiselen},
title = {NCHLT Sepedi Named Entity Annotated Corpus},
booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.},
year = {2016},
url = {https://repo.sadilar.org/handle/20.500.12185/328},
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset. | 5,530 | [
[
-0.04296875,
-0.031951904296875,
-0.007297515869140625,
0.025390625,
-0.0278778076171875,
-0.0019140243530273438,
-0.0271453857421875,
-0.03179931640625,
0.048095703125,
0.040069580078125,
-0.037017822265625,
-0.05072021484375,
-0.060760498046875,
0.03912353... |
swahili | 2022-11-18T21:49:35.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"l... | null | The Swahili dataset developed specifically for language modeling task.
The dataset contains 28,000 unique words with 6.84M, 970k, and 2M words for the train,
valid and test partitions respectively which represent the ratio 80:10:10.
The entire dataset is lowercased, has no punctuation marks and,
the start and end of sentence markers have been incorporated to facilitate easy tokenization during language modeling. | @InProceedings{huggingface:dataset,
title = Language modeling data for Swahili (Version 1),
authors={Shivachi Casper Shikali, & Mokhosi Refuoe.
},
year={2019},
link = http://doi.org/10.5281/zenodo.3553423
} | 7 | 115 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- sw
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: swahili
dataset_info:
features:
- name: text
dtype: string
config_name: swahili
splits:
- name: train
num_bytes: 7700136
num_examples: 42069
- name: test
num_bytes: 695092
num_examples: 3371
- name: validation
num_bytes: 663520
num_examples: 3372
download_size: 2783330
dataset_size: 9058748
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7339006/
- **Repository:**
- **Paper:** https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7339006/
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
The Swahili dataset developed specifically for language modeling task.
The dataset contains 28,000 unique words with 6.84M, 970k, and 2M words for the train,
valid and test partitions respectively which represent the ratio 80:10:10.
The entire dataset is lowercased, has no punctuation marks and,
the start and end of sentence markers have been incorporated to facilitate easy tokenization during language modeling.
### Supported Tasks and Leaderboards
Language Modeling
### Languages
Swahili (sw)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- text : A line of text in Swahili
### Data Splits
train = 80%, valid = 10%, test = 10%
## Dataset Creation
### Curation Rationale
Enhancing African low-resource languages
### Source Data
#### Initial Data Collection and Normalization
The dataset contains 28,000 unique words with 6.84 M, 970k, and 2 M words for the train, valid and test partitions respectively which represent the ratio 80:10:10.
The entire dataset is lowercased, has no punctuation marks and, the start and end of sentence markers have been incorporated to facilitate easy tokenization during language modelling.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Unannotated data
#### Who are the annotators?
NA
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Enhancing African low-resource languages
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
"""\
@InProceedings{huggingface:dataset,
title = Language modeling data for Swahili (Version 1),
authors={Shivachi Casper Shikali, & Mokhosi Refuoe.
},
year={2019},
link = http://doi.org/10.5281/zenodo.3553423
}
"""
### Contributions
Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset. | 4,194 | [
[
-0.028045654296875,
-0.05255126953125,
-0.01378631591796875,
0.024169921875,
-0.022369384765625,
-0.00580596923828125,
-0.03729248046875,
-0.033660888671875,
0.02880859375,
0.04150390625,
-0.050018310546875,
-0.059539794921875,
-0.053955078125,
0.00749206542... |
PereLluis13/spanish_speech_text | 2022-02-04T17:32:37.000Z | [
"region:us"
] | PereLluis13 | null | null | 1 | 115 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
demelin/moral_stories | 2022-07-17T15:29:10.000Z | [
"task_categories:multiple-choice",
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:multiple-choice-qa",
"task_ids:language-modeling",
"task_ids:text-scoring",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
... | demelin | Moral Stories is a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented
social reasoning. For detailed information, see https://aclanthology.org/2021.emnlp-main.54.pdf. | @article{Emelin2021MoralSS,
title={Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences},
author={Denis Emelin and Ronan Le Bras and Jena D. Hwang and Maxwell Forbes and Yejin Choi},
journal={ArXiv},
year={2021},
volume={abs/2012.15738}
} | 10 | 115 | 2022-07-14T11:19:52 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- monolingual
pretty_name: Moral Stories
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
- text-generation
- text-classification
- commonsense-reasoning
- moral-reasoning
- social-reasoning
task_ids:
- multiple-choice-qa
- language-modeling
- text-scoring
---
# Dataset Card for Moral Stories
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Moral Stories repository](https://github.com/demelin/moral_stories)
- **Repository:** [Moral Stories repository](https://github.com/demelin/moral_stories)
- **Paper:** [Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences](https://aclanthology.org/2021.emnlp-main.54/)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](demelin.github.io)
### Dataset Summary
Moral Stories is a crowd-sourced dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations, and their respective consequences. All stories in the dataset consist of seven sentences, belonging to the following categories:
- Norm: A guideline for social conduct generally observed by most people in everyday situations.
- Situation: Setting of the story that introduces story participants and describes their environment.
- Intention: Reasonable goal that one of the story participants (the actor), wants to fulfill.
- Normative action: An action by the actor that fulfills the intention and observes the norm.
- Normative consequence: Possible effect of the normative action on the actor's environment.
- Divergent action: An action by the actor that fulfills the intention and diverges from the norm.
- Divergent consequence: Possible effect of the divergent action on the actor's environment.
Accordingly, each story's constituent sentences can be grouped into three segments. The context segment grounds actions within a particular social scenario, the normative path contains the normative action and its consequence, whereas the divergent path includes their norm-divergent analogues. Combining the context segment separately with each path yields two self-contained sub-stories differing in the adherence of the described events to social expectations. See also [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Supported Tasks and Leaderboards
- commonsense-reasoning / social-reasoning / moral-reasoning: The dataset can also be used evaluate whether pretrained language models can reason about actions that are either consistent or inconsistent with social norms, the consequences of actions, and the norms that may motivate specific action choices. A [BART model](https://huggingface.co/facebook/bart-large) can be used for this purpose.
- text-classification: This dataset can be used to train models to differentiate between normative and divergent actions as well as between plausible and implausible consequences of actions. A [RoBERTa-based model](https://huggingface.co/roberta-base) can be used for this purpose.
- text-generation: The dataset can be used to train models to generate normative / divergent action hypotheses and their consequences, or norms that may explain certain action choices, conditioned on contexts of different scope. A [GPT-based model](https://huggingface.co/EleutherAI/gpt-neo-2.7B) can be used for this purpose.
### Languages
*Moral Stories* is available in English, with mainstream US Englishes being the dominant variety, as indicated by self-reported contributor demographics provided in the [*Ethical Considerations* section of the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Dataset Structure
### Data Instances
Due to its highly structured nature, *Moral Stories* enables a variety of tasks, such as action classification or norm generation. Furthermore, we provide different data splits in an attempt to challenge generalization abilities of the evaluated models. For details, refer to [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf). Here, we replicate one instance from the full, task-agnostic dataset, one instance from a classification task dataset (cls-action+norm), and one instance from a generation task dataset (gen-consequence$action+context):
"full" instance:
{"ID": "3OJSZ2ATDTUMG7NGR28931AHDQF57F",
"norm": "It's kind to wait for people.",
"situation": "Josh drives Penelope to work every day and is hanging outside her house to pick her up, but there's no sign of her.",
"intention": "Josh needs to get to work.",
"moral_action": "Josh continues to sit outside Penelope's house until she walks out the door a few minutes later.",
"moral_consequence": "Penelope appreciates Josh's patience and thanks him for waiting around.",
"immoral_action": "Josh shrugs his shoulders and drives away to work after sitting around a moment.",
"immoral_consequence": "Penelope comes running out of the house shouting to wait, but she's too late."}
positive and negative "cls-action+norm" instances (i.e. classify an action as normative or divergent given the action and a relevant norm):
{"ID": "3EF8EXOTT2TGCKNZROVU3129RYNJ1G1",
"norm": "It's bad to be too loud and wake people up.",
"moral_action": "Jordan quietly makes himself a bowl of cereal for breakfast.",
"label": "1"}
{"ID": "3EF8EXOTT2TGCKNZROVU3129RYNJ1G0",
"norm": "It's bad to be too loud and wake people up.",
"immoral_action": "Jordan noisily bangs a lot of pots and pans when making his breakfast.",
"label": "0"}
"gen-consequence$action+context" instance (i.e. generate either a plausible or implausible consequence given an action and its context):
{"ID": "33FOTY3KENJT46HTHWE8GXPE3LE1C03",
"norm": "It's mean to hit a bear with a guitar.",
"situation": "Joe was camping in Montana with his band when he came across a black bear.",
"intention": "Joe wanted to scare away the black bear.",
"immoral_action": "Joe grabbed his guitarist's guitar and hit the bear with it.",
"immoral_consequence": "The bear suffers a concussion and doesn't understand why he was hit.",
"label": "1"}
### Data Fields
- "ID": Unique identifier ID for this dataset instance.
- "norm": A guideline for social conduct generally observed by most people in everyday situations.
- "situation": Setting of the story that introduces story participants and describes their environment.
- "intention": Reasonable goal that one of the story participants (the actor), wants to fulfill.
- "moral_(i.e. 'normative')_action": An action by the actor that fulfills the intention and observes the norm.
- "moral_consequence": Possible effect of the normative action on the actor's environment.
- "immoral_(i.e. 'divergent')_action": An action by the actor that fulfills the intention and diverges from the norm.
- "immoral_consequence": Possible effect of the divergent action on the actor's environment.
- "label": Data instance label; for action-related tasks, "0" corresponds to an immoral / divergent action while "1" corresponds to a moral / normative action, for consequence-related tasks, "0" corresponds to a plausible consequence while "1" corresponds to an implausible consequence (for generation tasks, label is always set to "1")
### Data Splits
For classification tasks, we examined three data split strategies:
- *Norm Distance*: Norms are based on social consensus and may, as such, change across time and between locations. Therefore, we are also interested in how well classification models can generalize to novel norms. To estimate this, we split the dataset by embedding
norms found in the collected stories and grouping them into 1k clusters via agglomerative clustering. Clusters are ordered according to their degree of isolation, defined as the cosine distance between a cluster's centroid and the next-closest cluster's centroid. Stories with norms from most isolated clusters are assigned to test and development sets, with the rest forming the training set.
- *Lexical Bias*: Tests the susceptibility of classifiers to surface-level lexical correlations. We first identify 100 biased lemmas that occur most frequently either in normative or divergent actions. Each story is then assigned a bias score corresponding to the total number of biased lemmas present in both actions (or consequences). Starting with the lowest bias scores, stories are assigned to the test, development, and, lastly, training set.
- *Minimal Pairs*: Evaluates the model's ability to perform nuanced social reasoning. Splits are obtained by ordering stories according to the Damerau-Levenshtein distance between their actions (or consequences) and assigning stories with lowest distances to the test set, followed by the development set. The remainder makes up the training set.
For generation tasks, only the *Norm Distance* split strategy is used. For more details, refer to [*Section 3* and *Appendix C* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Dataset Creation
### Curation Rationale
Please refer to [*Section 2* and the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Source Data
#### Initial Data Collection and Normalization
Please refer to [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
#### Who are the source language producers?
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Annotations
#### Annotation process
Please refer to [*Section 2* and the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
#### Who are the annotators?
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Discussion of Biases
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Other Known Limitations
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Additional Information
### Dataset Curators
[Denis Emelin](demelin.github.io)
### Licensing Information
MIT
### Citation Information
@article{Emelin2021MoralSS,
title={Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences},
author={Denis Emelin and Ronan Le Bras and Jena D. Hwang and Maxwell Forbes and Yejin Choi},
journal={ArXiv},
year={2021},
volume={abs/2012.15738}
} | 12,011 | [
[
-0.0156402587890625,
-0.062469482421875,
0.04376220703125,
0.0226898193359375,
-0.026397705078125,
-0.02252197265625,
0.0007352828979492188,
-0.01381683349609375,
0.01354217529296875,
0.042510986328125,
-0.06787109375,
-0.05078125,
-0.03692626953125,
0.01339... |
keremberke/forklift-object-detection | 2023-01-15T14:32:47.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Manufacturing",
"region:us"
] | keremberke | null | @misc{ forklift-dsitv_dataset,
title = { Forklift Dataset },
type = { Open Source Dataset },
author = { Mohamed Traore },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-traore-2ekkp/forklift-dsitv } },
url = { https://universe.roboflow.com/mohamed-traore-2ekkp/forklift-dsitv },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-15 },
} | 4 | 115 | 2023-01-01T09:57:34 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Manufacturing
---
<div align="center">
<img width="640" alt="keremberke/forklift-object-detection" src="https://huggingface.co/datasets/keremberke/forklift-object-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['forklift', 'person']
```
### Number of Images
```json
{'test': 42, 'valid': 84, 'train': 295}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/forklift-object-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/forklift-dsitv/dataset/1](https://universe.roboflow.com/mohamed-traore-2ekkp/forklift-dsitv/dataset/1?ref=roboflow2huggingface)
### Citation
```
@misc{ forklift-dsitv_dataset,
title = { Forklift Dataset },
type = { Open Source Dataset },
author = { Mohamed Traore },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-traore-2ekkp/forklift-dsitv } },
url = { https://universe.roboflow.com/mohamed-traore-2ekkp/forklift-dsitv },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { mar },
note = { visited on 2023-01-15 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on April 3, 2022 at 9:01 PM GMT
It includes 421 images.
Forklift are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| 1,749 | [
[
-0.040985107421875,
-0.0248870849609375,
0.022003173828125,
-0.01299285888671875,
-0.0305633544921875,
-0.0142822265625,
0.00836944580078125,
-0.0377197265625,
0.0286407470703125,
0.01593017578125,
-0.05621337890625,
-0.052337646484375,
-0.0394287109375,
0.0... |
Jacobvs/CelebrityTweets | 2023-03-02T23:01:59.000Z | [
"region:us"
] | Jacobvs | null | null | 0 | 115 | 2023-03-02T23:01:12 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
tiedong/goat | 2023-05-25T22:14:53.000Z | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"region:us"
] | tiedong | null | null | 16 | 115 | 2023-05-25T22:07:47 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 1M<n<10M
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset.json file contains ~1.7 million synthetic data for arithmetic tasks, generated by dataset.ipynb.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,517 | [
[
-0.0229949951171875,
-0.036865234375,
0.003978729248046875,
0.027252197265625,
-0.0123291015625,
0.01690673828125,
-0.008758544921875,
-0.019256591796875,
0.04010009765625,
0.05181884765625,
-0.06512451171875,
-0.07177734375,
-0.04339599609375,
0.01934814453... |
karan4d/instruct_machiavellian_textbooks | 2023-10-03T16:30:54.000Z | [
"license:apache-2.0",
"region:us"
] | karan4d | null | null | 0 | 115 | 2023-10-03T02:20:08 | ---
license: apache-2.0
---
credits: shoutout @vikp for his textbook_quality GH repo this was created with
dataset info: a bunch of bad boy data for Machiavellian LLMs | 169 | [
[
-0.00919342041015625,
-0.0208587646484375,
0.0292816162109375,
-0.02587890625,
-0.029388427734375,
-0.0244293212890625,
0.018890380859375,
-0.00954437255859375,
0.0304107666015625,
0.0728759765625,
-0.03692626953125,
-0.0706787109375,
-0.01155853271484375,
-... |
Tverous/SemEval-Sample | 2023-10-20T23:19:59.000Z | [
"region:us"
] | Tverous | null | null | 0 | 115 | 2023-10-20T17:27:04 | ---
dataset_info:
features:
- name: conv_uttr_id
dtype: string
- name: conversation
dtype: string
- name: sentence
dtype: string
- name: emotion
dtype: int64
- name: cause_utterance_ID
sequence: string
splits:
- name: train
num_bytes: 13354056
num_examples: 13619
download_size: 1080587
dataset_size: 13354056
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SemEval-Sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 616 | [
[
-0.04620361328125,
-0.0216522216796875,
0.01763916015625,
0.0120849609375,
-0.020477294921875,
-0.017669677734375,
0.0237579345703125,
-0.003650665283203125,
0.07183837890625,
0.043609619140625,
-0.06304931640625,
-0.05133056640625,
-0.034576416015625,
-0.01... |
yuvalkirstain/task_prediction_train | 2023-10-31T18:44:28.000Z | [
"region:us"
] | yuvalkirstain | null | null | 0 | 115 | 2023-10-31T06:18:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: path
dtype: string
- name: text
dtype: string
- name: task_name
dtype: string
splits:
- name: train
num_bytes: 659890949
num_examples: 5663600
- name: validation
num_bytes: 7823929
num_examples: 60002
download_size: 0
dataset_size: 667714878
---
# Dataset Card for "task_prediction_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 639 | [
[
-0.025421142578125,
-0.003631591796875,
0.01528167724609375,
0.0193328857421875,
-0.002887725830078125,
-0.0110626220703125,
0.00946807861328125,
-0.0129547119140625,
0.04217529296875,
0.026763916015625,
-0.07208251953125,
-0.036529541015625,
-0.05438232421875,
... |
liveqa | 2022-11-03T16:15:28.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:unknown",
"region:us"
] | null | This is LiveQA, a Chinese dataset constructed from play-by-play live broadcast.
It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games,
which are collected from the Chinese Hupu website. | @inproceedings{qianying-etal-2020-liveqa,
title = "{L}ive{QA}: A Question Answering Dataset over Sports Live",
author = "Qianying, Liu and
Sicong, Jiang and
Yizhong, Wang and
Sujian, Li",
booktitle = "Proceedings of the 19th Chinese National Conference on Computational Linguistics",
month = oct,
year = "2020",
address = "Haikou, China",
publisher = "Chinese Information Processing Society of China",
url = "https://www.aclweb.org/anthology/2020.ccl-1.98",
pages = "1057--1067"
} | 1 | 114 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: liveqa
pretty_name: LiveQA
dataset_info:
features:
- name: id
dtype: int64
- name: passages
sequence:
- name: is_question
dtype: bool
- name: text
dtype: string
- name: candidate1
dtype: string
- name: candidate2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 112187507
num_examples: 1670
download_size: 114704569
dataset_size: 112187507
---
# Dataset Card for LiveQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/PKU-TANGENT/LiveQA)
- **Repository:** [Github](https://github.com/PKU-TANGENT/LiveQA)
- **Paper:** [Liu et al., 2020](https://www.aclweb.org/anthology/2020.ccl-1.98.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** Qianying Liu
### Dataset Summary
The LiveQA dataset is a Chinese question-answering resource constructed from playby-play live broadcasts. It contains 117k multiple-choice questions written by human commentators for over 1,670 NBA games, which are collected from the Chinese Hupu website.
### Supported Tasks and Leaderboards
Question Answering.
[More Information Needed]
### Languages
Chinese.
## Dataset Structure
### Data Instances
Each instance represents a timeline (i.e., a game) with an identifier. The passages field comprise an array of text or question segments. In the following truncated example, user comments about the game is followed by a question about which team will be the first to reach 60 points.
```python
{
'id': 1,
'passages': [
{
"is_question": False,
"text": "'我希望两位球员都能做到!!",
"candidate1": "",
"candidate2": "",
"answer": "",
},
{
"is_question": False,
"text": "新年给我们送上精彩比赛!",
"candidate1": "",
"candidate2": "",
"answer": "",
},
{
"is_question": True,
"text": "先达到60分?",
"candidate1": "火箭",
"candidate2": "勇士",
"answer": "勇士",
},
{
"is_question": False,
"text": "自己急停跳投!!!",
"candidate1": "",
"candidate2": "",
"answer": "",
}
]
}
```
### Data Fields
- id: identifier for the game
- passages: collection of text/question segments
- text: real-time text comment or binary question related to the context
- candidate1/2: one of the two answer options to the question
- answer: correct answer to the question in text
### Data Splits
There is no predefined split in this dataset.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
This resource is developed by [Liu et al., 2020](https://www.aclweb.org/anthology/2020.ccl-1.98.pdf).
```
@inproceedings{qianying-etal-2020-liveqa,
title = "{L}ive{QA}: A Question Answering Dataset over Sports Live",
author = "Qianying, Liu and
Sicong, Jiang and
Yizhong, Wang and
Sujian, Li",
booktitle = "Proceedings of the 19th Chinese National Conference on Computational Linguistics",
month = oct,
year = "2020",
address = "Haikou, China",
publisher = "Chinese Information Processing Society of China",
url = "https://www.aclweb.org/anthology/2020.ccl-1.98",
pages = "1057--1067"
}
```
### Contributions
Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset. | 5,349 | [
[
-0.0269012451171875,
-0.060699462890625,
0.0173797607421875,
0.017791748046875,
-0.0120849609375,
0.0176849365234375,
-0.00492095947265625,
-0.0236663818359375,
0.035919189453125,
0.03173828125,
-0.069091796875,
-0.045867919921875,
-0.02044677734375,
0.00662... |
laion/laion-coco | 2022-10-23T18:55:09.000Z | [
"license:cc-by-4.0",
"region:us"
] | laion | null | null | 43 | 114 | 2022-09-30T20:29:42 | ---
license: cc-by-4.0
---
# LAION COCO: 600M SYNTHETIC CAPTIONS FROM LAION2B-EN
by: Christoph Schuhmann, Andreas Köpf, Richard Vencu, Theo Coombes, Romain Beaumont, 10 Oct, 2022
Author: Christoph Schuhmann, Andreas Köpf , Theo Coombes, Richard Vencu, Benjamin Trom , Romain Beaumont
We present LAION-COCO, the world’s largest dataset of 600M generated high-quality captions for publicly available web-images
Laion5B has five billion natural captions. They provide a lot of information, but could synthetic captions complement them? To answer this question, we use a combination of existing, publicly available models to produce high quality captions for images in the style of MS COCO. We captioned 600M images from the english subset of Laion-5B with an ensemble of BLIP L/14 and 2 CLIP versions (L/14 and RN50x64).
This will make it possible to investigate the value of generated captions to train models. We’re curious on how these synthetic captions could impact models trained on them!
The 600M samples are provided in parquet files. Columns include the original caption, the url, the top caption and a list of alternative captions with lower CLIP-similarity scores.
## Method
The method we used to generate these captions was to:
- We use Blip L/14 to generate 40 captions
- Rank them using openai Clip Open AI L/14 ; selected the best 5 captions
- Rank using Open AI RN50x64 Clip model to select the best one
- Use a small, fine-tuned T0 model to roughly repair grammar and punctuation of the texts
The hyperparameters were chosen through a grid search (settings) by Andreas Köpf to best match the style ( ROUGE scores ) of MS COCO texts.
## Evaluation
We evaluated these generated captions by asking human evaluators to guess whether a caption is coming from a human or an AI model. We also asked them to rate the quality on a scale from 0(bad) to 5 (good).
In a first round we presented the evaluators each 200 samples, that contained 100 AI generated and 100 human written MS COCO captions.
## Observations
Mean rating & standard deviation of samples, that were written by a human:
Mean: 3.98
Stdev: 0.99
Mean rating & standard deviation of samples, that were written by an AI
Mean: 3.89
Stdev: 1.12
Mean rating & standard deviation of samples, where the annotator believed they were written by a human:
Mean: 4.44
Stdev: 0.61
Mean rating & standard deviation of samples, where the annotator believed they were generated by an AI
Mean: 3.50
Stdev: 1.15
## Interpretation
It is very interesting that the mean scores of the samples generated by humans and generated by the model are very similar. We also notice that the standard deviation of the generated captions is a little bit higher.
We hypothesize that most in most cases the quality of the generated captions is perceived as as good as the quality of the human written captions.
But sometimes the captioning model obviously fails and the quality of the results is pretty low because the model doesn't relevant understand concepts about what is going on in the picture, because it's knowledge is not grounded in a sufficiently sophisticated world model.
## Failure cases
“Two people posing for the camera in their wedding attire, one with an umbrella over his head and another with long red hair.”
“An older man having a heart attack, with his hand on the chest.”
When we remove all samples from the evaluations that have ratings of either 0 or 1, we Observe that the mean ratings and standard deviations move closer together.
Scores without ratings of 0 and 1
Mean rating & standard deviation of samples, that were written by a human:
Mean: 4.07
Stdev: 0.81
Mean rating & standard deviation of samples, that were written by an AI
Mean: 4.02
Stdev: 0.94
The mean ratings of the generated captions are still a little bit lower and the standard deviation is still a little bit higher, but the trend is pretty clear. By removing samples with rating 2, the gap between the qualities would probably decrease even further.
Presentation only generated captions:
In a next step, we presented the human evaluators 400 captions that were only generated by the model (no human written captions in between):
Mean rating of all samples
3.81
Standard deviation of all samples
0.94
% rated as human
47.5
% rated as AI
52.5
We observe that the human evaluators thought in 47.5% of all cases, that the captions were written by a human. This makes us confident that our captains are on average pretty good. When we told the evaluators later that all captions were generated by the model they told us that it was very hard for them to judge whether a caption was written by a model or a human, and that it only was easy for them in obvious failure cases.
## Conclusions
We conclude that Our ensemble of BLIP and CLIP is already pretty good and capable of generating captions with a quality that is on average pretty close to the human written captions of MS Coco.
It would be very interesting for future work to let people rate our generated captions at larger scale and then filter out the samples with low rating values. These results could be used to train models to rate the quality of captions and to predict whether a caption looks like a generated or a human written caption.
And even without further automated filtering, an ensemble of our captions and human evaluators would be a pretty good workflow to curate high quality captions at much lower costs than if we would ask humans to write them from scratch.
## Credit assignments
- Christoph Schuhmann lead the project, implemented a first version of the code, ran most of the generations & conducted the human evaluations
- Andreas Köpf conducted the hyperparameter search & wrote the code to execute BLIP + CLIP filtering at scale
- Theo Coombes managed the server that coordinated which GPU worker got which part of LAION to work on
- Romain Beaumont packaged the .json into parquet files, sent to HF and wrote the first draft of this post
- Richard Vencu provided the infra structure to use the idle compute for this project
- Benjamin Trom wrote code that help us to convert the .json files to parquet
We thank stability.ai for providing the compute used to generate the captions in the dataset. | 6,264 | [
[
-0.034759521484375,
-0.031036376953125,
0.0182037353515625,
0.01316070556640625,
-0.0213470458984375,
0.0214691162109375,
-0.0120697021484375,
-0.055206298828125,
0.01318359375,
0.04638671875,
-0.02191162109375,
-0.0269622802734375,
-0.0299530029296875,
0.03... |
sander-wood/irishman | 2023-09-25T15:14:16.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"license:mit",
"music",
"region:us"
] | sander-wood | null | null | 10 | 114 | 2023-01-10T23:42:04 | ---
license: mit
task_categories:
- text-generation
pretty_name: IrishMAN
size_categories:
- 100K<n<1M
tags:
- music
---
If you prefer MIDI or MusicXML, download [IrishMAN-MIDI](https://huggingface.co/datasets/sander-wood/irishman/resolve/main/irishman-midi.zip) or [IrishMAN-XML](https://huggingface.co/datasets/sander-wood/irishman/resolve/main/irishman-xml.zip). For better use of structural info in control codes, consider ABC notation.
## ABC Notation
ABC notation is an ASCII-based plain text musical notation system that is commonly used for transcribing traditional music and sharing sheet music online. It provides a simple and concise way to represent musical elements such as notes, rhythms, chords, and more.
For those looking to interact with ABC notation in various ways, there are several tools available:
1. **[Online ABC Player](https://abc.rectanglered.com/):** This web-based tool allows you to input ABC notation and hear the corresponding audio playback. By pasting your ABC code into the player, you can instantly listen to the tune as it would sound when played.
2. **[ABC Sheet Music Editor - EasyABC](https://easyabc.sourceforge.net/):** EasyABC is a user-friendly software application designed for creating, editing, and formatting ABC notation. Its graphical interface enables you to input your ABC code, preview the sheet music, and make adjustments as necessary.
## Dataset Summary
The **Irish Massive ABC Notation (IrishMAN)** dataset includes 216,284 Irish tunes in ABC notation, divided into 99\% (214,122 tunes) for training and 1\% (2,162 tunes) for validation. These tunes were collected from thesession.org and abcnotation.com, both renowned for sharing traditional music. To ensure uniformity in formatting, all tunes were converted to XML and then back to ABC using [scripts](https://wim.vree.org/svgParse/), and fields containing natural language (e.g., titles and lyrics) were removed.
Each tune is automatically annotated with control codes derived from ABC symbols, as described in the below section. These control codes offer insights into the musical forms and structures of each composition.
In the IrishMAN dataset, a [music21](https://web.mit.edu/music21/doc/index.html#)-filtered [subset](https://huggingface.co/datasets/sander-wood/irishman/raw/main/leadsheet_ids.json) includes 34,211 lead sheets, each human-annotated with chord symbols. It is from this very subset that [TunesFormer](https://huggingface.co/sander-wood/tunesformer) developed its capacity to generate melodies with harmonies.
A noteworthy aspect is the copyright status. All tunes in the dataset are in the public domain, ensuring ethical and legal usage for research and creative projects.
## Control Codes
Inspired by [CTRL](https://huggingface.co/ctrl), we incorporate control codes into TunesFormer to represent musical forms. These codes, positioned ahead of the ABC notation, enable users to specify the structures of the generated tunes. The following control codes are introduced:
- **S:number of sections**: determines the number of sections in the entire melody. It counts on several symbols that can be used to represent section boundaries: `[|`, `||`, `|]`, `|:`, `::`, and `:|`. In our dataset, the range is 1 to 8 (e.g., `S:1` for a single-section melody, and `S:8` for a melody with eight sections).
- **B:number of bars**: specifies the desired number of bars within a section. It counts on the bar symbol `|`. In our dataset, the range is 1 to 32 (e.g., `B:1` for a one-bar section, and `B:32` for a section with 32 bars).
- **E:edit distance similarity**: controls the similarity level between the current section $c$ and a previous section $p$ in the melody. It is based on the Levenshtein distance $lev(c,p)$ , quantifying the difference between sections for creating variations or contrasts. Mathematically, it can be expressed as:
```
eds(c,p) = 1 - lev(c,p) / max(|c|,|p|)
```
where $|c|$ and $|p|$ are the string lengths of the two sections. It is discretized into 11 levels, ranging from no match at all to an exact match (e.g., `E:0` for no similarity, and `E:10` for an exact match).
## Copyright Disclaimer
This dataset is for research use only and not for commercial purposes. We believe all data in this dataset is in the public domain. If you own the copyright to any musical composition in the IrishMAN dataset and have concerns, please contact us at shangda@mail.ccom.edu.cn. We will address your concerns and take appropriate action if needed.
## Special Thanks
We would like to extend a special thanks to thesession.org and abcnotation.com for their contributions to the development and promotion of ABC notation, as well as their significant impact on the field of music information retrieval. Their platforms have become invaluable resources for the traditional and folk music community. We also wish to express our gratitude to Willem (Wim) for providing the essential conversion tools that enabled the transformation of the tunes into a uniform format. Together, these collaborations have made it possible for researchers like us to create and study extensive datasets like IrishMAN. | 5,166 | [
[
-0.0298309326171875,
-0.037384033203125,
0.01922607421875,
0.0523681640625,
-0.01605224609375,
-0.00772857666015625,
-0.02996826171875,
-0.046844482421875,
0.035675048828125,
0.046905517578125,
-0.034820556640625,
-0.072509765625,
-0.0241851806640625,
0.0329... |
MohamedRashad/characters_backstories | 2023-04-03T06:42:29.000Z | [
"task_categories:text-generation",
"language:en",
"license:openrail",
"region:us"
] | MohamedRashad | null | null | 2 | 114 | 2023-04-03T05:14:52 | ---
license: openrail
task_categories:
- text-generation
language:
- en
pretty_name: Dungeons & Dragons Characters Backstory
---
This dataset is made from this repo [here](https://github.com/janelleshane/DnD_bios)
and it contains 2322 character bios to be used | 262 | [
[
-0.0303497314453125,
0.006076812744140625,
0.0418701171875,
0.0209503173828125,
-0.011474609375,
0.023681640625,
0.021728515625,
-0.002719879150390625,
0.0218963623046875,
0.07342529296875,
-0.0711669921875,
-0.05364990234375,
0.002460479736328125,
0.0318298... |
emozilla/proofpile-test-tokenized | 2023-08-09T15:29:52.000Z | [
"region:us"
] | emozilla | null | null | 0 | 114 | 2023-08-09T15:27:50 | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: tokenized_len
dtype: int64
splits:
- name: test
num_bytes: 1644067664
num_examples: 46251
download_size: 552973486
dataset_size: 1644067664
---
# Dataset Card for "proofpile-test-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 532 | [
[
-0.040985107421875,
-0.0175933837890625,
0.003170013427734375,
0.0127410888671875,
-0.0081939697265625,
0.004138946533203125,
0.00580596923828125,
-0.004474639892578125,
0.052734375,
0.025115966796875,
-0.032440185546875,
-0.055450439453125,
-0.04876708984375,
... |
tomashs/LSC_acronyms_topic_vectors | 2023-10-05T21:38:49.000Z | [
"region:us"
] | tomashs | null | null | 0 | 114 | 2023-10-05T21:27:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: short_form
dtype: string
- name: long_form
dtype: string
- name: label
dtype: int64
- name: topic_vector
sequence: float64
splits:
- name: train
num_bytes: 1959752089
num_examples: 352720
- name: validation
num_bytes: 418571627
num_examples: 75339
- name: test
num_bytes: 419813918
num_examples: 75540
download_size: 2198337547
dataset_size: 2798137634
---
# Dataset Card for "LSC_acronyms_topic_vectors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 842 | [
[
-0.0430908203125,
-0.0156707763671875,
0.0097503662109375,
0.004039764404296875,
-0.023895263671875,
0.022186279296875,
0.02020263671875,
0.0130157470703125,
0.07672119140625,
0.01497650146484375,
-0.0596923828125,
-0.058135986328125,
-0.049652099609375,
-0.... |
casperhansen/longalpaca_1k_unlimited_test | 2023-10-15T11:49:53.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | casperhansen | null | null | 0 | 114 | 2023-10-15T11:40:15 | ---
license: cc-by-nc-4.0
---
Dataset preprocessed from https://huggingface.co/datasets/Yukang/LongAlpaca-12k.
This contains 1000 samples that have a minimum length of 16k tokens.
## Script to reproduce
```python
from datasets import load_dataset
from transformers import AutoTokenizer
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
# Load the dataset and tokenizer
data = load_dataset("Yukang/LongAlpaca-12k")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1", trust_remote_code=True)
def filter_function(batch):
# Separate each round of conversation and concatenate them into single strings
conversation_strs = [f'{instruction}\n\n{output}' for instruction, output in zip(batch['instruction'], batch['output'])]
# Tokenize the strings without truncation
tokens = tokenizer(conversation_strs, truncation=False, return_length=True)
# Return True for examples where the token count exceeds max
return [length > 16384 for length in tokens['length']]
# Note that I've added a "keep" key to the return dictionary
filtered_data = data.filter(filter_function, batched=True, batch_size=1000)
# Convert to Pandas DataFrame
df = pd.DataFrame(filtered_data['train'])
# Sample 1k rows
sampled_df = df.sample(n=1000, random_state=1)
# Convert the Pandas DataFrame to a PyArrow Table
table = pa.table(sampled_df)
# Save the table as a Parquet file
pq.write_table(table, 'data.parquet')
``` | 1,465 | [
[
-0.0273284912109375,
-0.053558349609375,
-0.0032024383544921875,
0.04150390625,
-0.0274810791015625,
-0.035186767578125,
-0.0265960693359375,
-0.01424407958984375,
0.036895751953125,
0.044525146484375,
-0.0323486328125,
-0.040557861328125,
-0.0364990234375,
... |
llmware/rag_instruct_test_dataset2_financial_0.1 | 2023-10-23T15:01:44.000Z | [
"license:apache-2.0",
"finance",
"retrieval augmented generation",
"RAG",
"region:us"
] | llmware | null | null | 3 | 114 | 2023-10-22T15:19:52 | ---
license: apache-2.0
tags:
- finance
- retrieval augmented generation
- RAG
pretty_name: RAG Instruct Test Dataset 2 - Financial - v0.1
---
# Dataset Card for RAG-Instruct-Financial-Test-Dataset
### Dataset Summary
This is a test dataset for "retrieval augmented generation" (RAG) use cases, especially for financial data extraction and analysis, including a series of questions relating to tabular financial data and common-sense math operations (small increments, decrements, sorting and ordering as well as recognizing when information is not included in a particular source). This test dataset includes 100 samples with context passages pulled from common 'retrieval scenarios' in financial markets, including financial earnings releases, stock market updates, financial tables and financial news. The primary use case is to evaluate the effectiveness of an
instruct-fine-tuned LLM used in conjunction with closed-context, fact-based question-answering, key-value extraction, and summarization with bulletpoints. The context passages are relatively short in this test-set ranging from ~100 tokens to ~500 tokens, and was designed for use with the
BLING series of models but is suitable for comparison evaluations of any LLM for basic RAG scenarios.
This is part of a series of RAG-Instruct test datasets from llmware.
### Languages
English
## Dataset Structure
100 JSONL samples with 4 keys - "query" | "context" | "answer" | "sample_number"
### Personal and Sensitive Information
The dataset samples were written bespoke for this objective, but do rely upon some public information, including major public figures and widely reported events.
Any other names were created/masked and any overlap with real companies or people is coincidental.
## Dataset Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in this project and would like to participate and work with us!
| 1,937 | [
[
-0.01531219482421875,
-0.050079345703125,
-0.00894927978515625,
0.02484130859375,
-0.02520751953125,
0.01216888427734375,
0.0002162456512451172,
-0.0190582275390625,
0.003368377685546875,
0.05535888671875,
-0.031402587890625,
-0.0343017578125,
-0.019683837890625... |
DataGuard/german-guanaco | 2023-10-22T22:23:59.000Z | [
"region:us"
] | DataGuard | null | null | 0 | 114 | 2023-10-22T22:23:52 | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: input
dtype: string
- name: response
dtype: string
- name: type
dtype: string
- name: lang
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 10005376.286831813
num_examples: 15186
- name: test
num_bytes: 1112147.7131681878
num_examples: 1688
download_size: 4699474
dataset_size: 11117524.0
---
# Dataset Card for "german-guanaco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 624 | [
[
-0.037506103515625,
-0.030975341796875,
0.0263671875,
0.023406982421875,
-0.01812744140625,
-0.00659942626953125,
-0.001155853271484375,
-0.024566650390625,
0.05938720703125,
0.0188751220703125,
-0.050506591796875,
-0.08343505859375,
-0.051788330078125,
-0.0... |
tou7and/imdb-truncated-polluted | 2023-10-24T02:52:54.000Z | [
"region:us"
] | tou7and | null | null | 0 | 114 | 2023-10-24T02:35:27 | A polluted version of imdb-truncated.
Errors and distortions are added to trainset and testset, inlcuding input text and labels. | 128 | [
[
-0.040130615234375,
-0.0247802734375,
0.00585174560546875,
0.018646240234375,
-0.03619384765625,
0.0264892578125,
0.01788330078125,
-0.032196044921875,
0.036346435546875,
0.0626220703125,
-0.05609130859375,
0.03515625,
-0.05242919921875,
0.00658416748046875,... |
SetFit/qnli | 2022-02-28T13:29:16.000Z | [
"region:us"
] | SetFit | null | null | 0 | 113 | 2022-03-02T23:29:22 | # Glue QNLI
This dataset is a port of the official [`qnli` dataset](https://huggingface.co/datasets/glue/viewer/qnli/train) on the Hub.
Note that the question and sentence columns have been renamed to text1 and text2 respectively.
Also, the test split is not labeled; the label column values are always -1.
| 314 | [
[
-0.0206298828125,
-0.05078125,
0.006664276123046875,
0.01201629638671875,
-0.0016117095947265625,
0.00823974609375,
0.0167999267578125,
-0.006008148193359375,
0.06671142578125,
0.024871826171875,
-0.0733642578125,
-0.01154327392578125,
-0.0158843994140625,
-... |
SetFit/yelp_review_full | 2022-01-19T21:49:57.000Z | [
"region:us"
] | SetFit | null | null | 0 | 113 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dennlinger/klexikon | 2022-10-25T15:03:56.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:found",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",... | dennlinger | null | null | 5 | 113 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
- expert-generated
language_creators:
- found
- machine-generated
language:
- de
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
task_ids:
- text-simplification
paperswithcode_id: klexikon
pretty_name: Klexikon
tags:
- conditional-text-generation
- simplification
- document-level
---
# Dataset Card for the Klexikon Dataset
## Table of Contents
- [Version History](#version-history)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Version History
- **v0.3** (2022-09-01): Removing some five samples from the dataset due to duplication conflicts with other samples.
- **v0.2** (2022-02-28): Updated the files to no longer contain empty sections and removing otherwise empty lines at the end of files. Also removing lines with some sort of coordinate.
- **v0.1** (2022-01-19): Initial data release on Huggingface datasets.
## Dataset Description
- **Homepage:** [N/A]
- **Repository:** [Klexikon repository](https://github.com/dennlinger/klexikon)
- **Paper:** [Klexikon: A German Dataset for Joint Summarization and Simplification](https://arxiv.org/abs/2201.07198)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Dennis Aumiller](mailto:dennis.aumiller@gmail.com)
### Dataset Summary
The Klexikon dataset is a German resource of document-aligned texts between German Wikipedia and the children's lexicon "Klexikon". The dataset was created for the purpose of joint text simplification and summarization, and contains almost 2900 aligned article pairs.
Notably, the children's articles use a simpler language than the original Wikipedia articles; this is in addition to a clear length discrepancy between the source (Wikipedia) and target (Klexikon) domain.
### Supported Tasks and Leaderboards
- `summarization`: The dataset can be used to train a model for summarization. In particular, it poses a harder challenge than some of the commonly used datasets (CNN/DailyMail), which tend to suffer from positional biases in the source text. This makes it very easy to generate high (ROUGE) scoring solutions, by simply taking the leading 3 sentences. Our dataset provides a more challenging extraction task, combined with the additional difficulty of finding lexically appropriate simplifications.
- `simplification`: While not currently supported by the HF task board, text simplification is concerned with the appropriate representation of a text for disadvantaged readers (e.g., children, language learners, dyslexic,...).
For scoring, we ran preliminary experiments based on [ROUGE](https://huggingface.co/metrics/rouge), however, we want to cautiously point out that ROUGE is incapable of accurately depicting simplification appropriateness.
We combined this with looking at Flesch readability scores, as implemented by [textstat](https://github.com/shivam5992/textstat).
Note that simplification metrics such as [SARI](https://huggingface.co/metrics/sari) are not applicable here, since they require sentence alignments, which we do not provide.
### Languages
The associated BCP-47 code is `de-DE`.
The text of the articles is in German. Klexikon articles are further undergoing a simple form of peer-review before publication, and aim to simplify language for 8-13 year old children. This means that the general expected text difficulty for Klexikon articles is lower than Wikipedia's entries.
## Dataset Structure
### Data Instances
One datapoint represents the Wikipedia text (`wiki_text`), as well as the Klexikon text (`klexikon_text`).
Sentences are separated by newlines for both datasets, and section headings are indicated by leading `==` (or `===` for subheadings, `====` for sub-subheading, etc.).
Further, it includes the `wiki_url` and `klexikon_url`, pointing to the respective source texts. Note that the original articles were extracted in April 2021, so re-crawling the texts yourself will likely change some content.
Lastly, we include a unique identifier `u_id` as well as the page title `title` of the Klexikon page.
Sample (abridged texts for clarity):
```
{
"u_id": 0,
"title": "ABBA",
"wiki_url": "https://de.wikipedia.org/wiki/ABBA",
"klexikon_url": "https://klexikon.zum.de/wiki/ABBA",
"wiki_sentences": [
"ABBA ist eine schwedische Popgruppe, die aus den damaligen Paaren Agnetha Fältskog und Björn Ulvaeus sowie Benny Andersson und Anni-Frid Lyngstad besteht und sich 1972 in Stockholm formierte.",
"Sie gehört mit rund 400 Millionen verkauften Tonträgern zu den erfolgreichsten Bands der Musikgeschichte.",
"Bis in die 1970er Jahre hatte es keine andere Band aus Schweden oder Skandinavien gegeben, der vergleichbare Erfolge gelungen waren.",
"Trotz amerikanischer und britischer Dominanz im Musikgeschäft gelang der Band ein internationaler Durchbruch.",
"Sie hat die Geschichte der Popmusik mitgeprägt.",
"Zu ihren bekanntesten Songs zählen Mamma Mia, Dancing Queen und The Winner Takes It All.",
"1982 beendeten die Gruppenmitglieder aufgrund privater Differenzen ihre musikalische Zusammenarbeit.",
"Seit 2016 arbeiten die vier Musiker wieder zusammen an neuer Musik, die 2021 erscheinen soll.",
],
"klexikon_sentences": [
"ABBA war eine Musikgruppe aus Schweden.",
"Ihre Musikrichtung war die Popmusik.",
"Der Name entstand aus den Anfangsbuchstaben der Vornamen der Mitglieder, Agnetha, Björn, Benny und Anni-Frid.",
"Benny Andersson und Björn Ulvaeus, die beiden Männer, schrieben die Lieder und spielten Klavier und Gitarre.",
"Anni-Frid Lyngstad und Agnetha Fältskog sangen."
]
},
```
### Data Fields
* `u_id` (`int`): A unique identifier for each document pair in the dataset. 0-2349 are reserved for training data, 2350-2623 for testing, and 2364-2897 for validation.
* `title` (`str`): Title of the Klexikon page for this sample.
* `wiki_url` (`str`): URL of the associated Wikipedia article. Notably, this is non-trivial, since we potentially have disambiguated pages, where the Wikipedia title is not exactly the same as the Klexikon one.
* `klexikon_url` (`str`): URL of the Klexikon article.
* `wiki_text` (`List[str]`): List of sentences of the Wikipedia article. We prepare a pre-split document with spacy's sentence splitting (model: `de_core_news_md`). Additionally, please note that we do not include page contents outside of `<p>` tags, which excludes lists, captions and images.
* `klexikon_text` (`List[str]`): List of sentences of the Klexikon article. We apply the same processing as for the Wikipedia texts.
### Data Splits
We provide a stratified split of the dataset, based on the length of the respective Wiki article/Klexikon article pair (according to number of sentences).
The x-axis represents the length of the Wikipedia article, and the y-axis the length of the Klexikon article.
We segment the coordinate systems into rectangles of shape `(100, 10)`, and randomly sample a split of 80/10/10 for training/validation/test from each rectangle to ensure stratification. In case of rectangles with less than 10 entries, we put all samples into training.
The final splits have the following size:
* 2350 samples for training
* 274 samples for validation
* 274 samples for testing
## Dataset Creation
### Curation Rationale
As previously described, the Klexikon resource was created as an attempt to bridge the two fields of text summarization and text simplification. Previous datasets suffer from either one or more of the following shortcomings:
* They primarily focus on input/output pairs of similar lengths, which does not reflect longer-form texts.
* Data exists primarily for English, and other languages are notoriously understudied.
* Alignments exist for sentence-level, but not document-level.
This dataset serves as a starting point to investigate the feasibility of end-to-end simplification systems for longer input documents.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from [Klexikon](klexikon.zum.de), and afterwards aligned with corresponding texts from [German Wikipedia](de.wikipedia.org).
Specifically, the collection process was performed in April 2021, and 3145 articles could be extracted from Klexikon back then. Afterwards, we semi-automatically align the articles with Wikipedia, by looking up articles with the same title.
For articles that do not exactly match, we manually review their content, and decide to match to an appropriate substitute if the content can be matched by at least 66% of the Klexikon paragraphs.
Similarly, we proceed to manually review disambiguation pages on Wikipedia.
We extract only full-text content, excluding figures, captions, and list elements from the final text corpus, and only retain articles for which the respective Wikipedia document consists of at least 15 paragraphs after pre-processing.
#### Who are the source language producers?
The language producers are contributors to Klexikon and Wikipedia. No demographic information was available from the data sources.
### Annotations
#### Annotation process
Annotations were performed by manually reviewing the URLs of the ambiguous article pairs. No annotation platforms or existing tools were used in the process.
Otherwise, articles were matched based on the exact title.
#### Who are the annotators?
The manually aligned articles were reviewed by the dataset author (Dennis Aumiller).
### Personal and Sensitive Information
Since Klexikon and Wikipedia are public encyclopedias, no further personal or sensitive information is included. We did not investigate to what extent information about public figures is included in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
Accessibility on the web is still a big issue, particularly for disadvantaged readers.
This dataset has the potential to strengthen text simplification systems, which can improve the situation.
In terms of language coverage, this dataset also has a beneficial impact on the availability of German data.
Potential negative biases include the problems of automatically aligned articles. The alignments may never be 100% perfect, and can therefore cause mis-aligned articles (or associations), despite the best of our intentions.
### Discussion of Biases
We have not tested whether any particular bias towards a specific article *type* (i.e., "person", "city", etc.) exists.
Similarly, we attempted to present an unbiased (stratified) split for validation and test set, but given that we only cover around 2900 articles, it is possible that these articles represent a particular focal lense on the overall distribution of lexical content.
### Other Known Limitations
Since the articles were written independently of each other, it is not guaranteed that there exists an exact coverage of each sentence in the simplified article, which could also stem from the fact that sometimes Wikipedia pages have separate article pages for aspects (e.g., the city of "Aarhus" has a separate page for its art museum (ARoS). However, Klexikon lists content and description for ARoS on the page of the city itself.
## Additional Information
### Dataset Curators
The dataset was curated only by the author of this dataset, Dennis Aumiller.
### Licensing Information
Klexikon and Wikipedia make their textual contents available under the CC BY-SA license, which will be inherited for this dataset.
### Citation Information
If you use our dataset or associated code, please cite our paper:
```
@inproceedings{aumiller-gertz-2022-klexikon,
title = "Klexikon: A {G}erman Dataset for Joint Summarization and Simplification",
author = "Aumiller, Dennis and
Gertz, Michael",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.288",
pages = "2693--2701"
}
```
| 13,061 | [
[
-0.046783447265625,
-0.04119873046875,
0.0232696533203125,
0.0229339599609375,
-0.0272064208984375,
-0.0084075927734375,
-0.0264129638671875,
-0.0283660888671875,
0.02227783203125,
0.015228271484375,
-0.05126953125,
-0.051971435546875,
-0.045166015625,
0.015... |
nferruz/UR50_2021_04 | 2022-07-22T13:44:04.000Z | [
"size_categories:unknown",
"region:us"
] | nferruz | null | null | 1 | 113 | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators: []
language_creators: []
language: []
license: []
multilinguality: []
pretty_name: ''
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for UR50_2021_04
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
https://ftp.uniprot.org/pub/databases/uniprot/uniref/uniref50/
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.uniprot.org/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Uniref50 (UR50) dataset version 2021/04 is a biological dataset taken from the Uniprot database: https://www.uniprot.org/
### Supported Tasks and Leaderboards
The UR50 dataset contains 48 Million protein sequences. It is a useful dataset to train protein language models.
### Languages
Proteins
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
Train, validation
## Dataset Creation
### Curation Rationale
Substituted FASTA headers by <endoftext> tag.
The dataset was tokenized using BPE and further split into train and validation datasets (ratio 90/10) choosing random sequences for the latter.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
UniProt
### Annotations
#### Annotation process
UniProt contains annotations but no labels/annotations were used for this dataset.
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
Thanks to UniProt for curating this dataset. https://www.uniprot.org/
| 2,850 | [
[
-0.0264434814453125,
-0.0192413330078125,
-0.002872467041015625,
0.0238189697265625,
-0.0216217041015625,
0.0110626220703125,
-0.01552581787109375,
-0.0172882080078125,
0.0245361328125,
0.032745361328125,
-0.0494384765625,
-0.07061767578125,
-0.0328369140625,
... |
D3xter1922/proofwriter-dataset | 2022-10-04T12:26:37.000Z | [
"region:us"
] | D3xter1922 | null | null | 1 | 113 | 2022-09-20T22:48:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
copenlu/spiced | 2022-10-24T12:31:04.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|s2orc... | copenlu | null | null | 2 | 113 | 2022-10-20T15:18:50 | ---
annotations_creators:
- crowdsourced
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: SPICED
size_categories:
- 1K<n<10K
source_datasets:
- extended|s2orc
tags:
- scientific text
- scholarly text
- semantic text similarity
- fact checking
- misinformation
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---
# Dataset Card for SPICED
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.copenlu.com/publication/2022_emnlp_wright/
- **Repository:** https://github.com/copenlu/scientific-information-change
- **Paper:**
### Dataset Summary
The Scientific Paraphrase and Information ChangE Dataset (SPICED) is a dataset of paired scientific findings from scientific papers, news media, and Twitter. The types of pairs are between <paper, news> and <paper, tweet>. Each pair is labeled for the degree of information similarity in the _findings_ described by each sentence, on a scale from 1-5. This is called the _Information Matching Score (IMS)_. The data was curated from S2ORC and matched news articles and Tweets using Altmetric. Instances are annotated by experts using the Prolific platform and Potato. Please use the following citation when using this dataset:
```
@article{modeling-information-change,
title={{Modeling Information Change in Science Communication with Semantically Matched Paraphrases}},
author={Wright, Dustin and Pei, Jiaxin and Jurgens, David and Augenstein, Isabelle},
year={2022},
booktitle = {Proceedings of EMNLP},
publisher = {Association for Computational Linguistics},
year = 2022
}
```
### Supported Tasks and Leaderboards
The task is to predict the IMS between two scientific sentences, which is a scalar between 1 and 5. Preferred metrics are mean-squared error and Pearson correlation.
### Languages
English
## Dataset Structure
### Data Fields
- DOI: The DOI of the original scientific article
- instance\_id: Unique instance ID for the sample. The ID contains the field, whether or not it is a tweet, and whether or not the sample was manually labeled or automatically using SBERT (marked as "easy")
- News Finding: Text of the news or tweet finding
- Paper Finding: Text of the paper finding
- News Context: For news instances, the surrounding two sentences for the news finding. For tweets, a copy of the tweet
- Paper Context: The surrounding two sentences for the paper finding
- scores: Annotator scores after removing low competence annotators
- field: The academic field of the paper ('Computer\_Science', 'Medicine', 'Biology', or 'Psychology')
- split: The dataset split ('train', 'val', or 'test')
- final\_score: The IMS of the instance
- source: Either "news" or "tweet"
- News Url: A URL to the source article if a news instance or the tweet ID of a tweet
### Data Splits
- train: 4721 instances
- validation: 664 instances
- test: 640 instances
## Dataset Creation
For the full details of how the dataset was created, please refer to our [EMNLP 2022 paper]().
### Curation Rationale
Science communication is a complex process of translation from highly technical scientific language to common language that lay people can understand. At the same time, the general public relies on good science communication in order to inform critical decisions about their health and behavior. SPICED was curated in order to provide a training dataset and benchmark for machine learning models to measure changes in scientific information at different stages of the science communication pipeline.
### Source Data
#### Initial Data Collection and Normalization
Scientific text: S2ORC
News articles and Tweets are collected through Altmetric.
#### Who are the source language producers?
Scientists, journalists, and Twitter users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Models trained on SPICED can be used to perform large scale analyses of science communication. They can be used to match the same finding discussed in different media, and reveal trends in differences in reporting at different stages of the science communication pipeline. It is hoped that this can help to build tools which will improve science communication.
### Discussion of Biases
The dataset is restricted to computer science, medicine, biology, and psychology, which may introduce some bias in the topics which models will perform well on.
### Other Known Limitations
While some context is available, we do not release the full text of news articles and scientific papers, which may contain further context to help with learning the task. We do however provide the paper DOIs and links to the original news articles in case full text is desired.
## Additional Information
### Dataset Curators
Dustin Wright, Jiaxin Pei, David Jurgens, and Isabelle Augenstein
### Licensing Information
MIT
### Contributions
Thanks to [@dwright37](https://github.com/dwright37) for adding this dataset. | 6,278 | [
[
-0.01038360595703125,
-0.0438232421875,
0.0243682861328125,
0.038360595703125,
-0.022857666015625,
-0.0027179718017578125,
-0.015838623046875,
-0.01100921630859375,
0.042510986328125,
0.028076171875,
-0.045745849609375,
-0.06256103515625,
-0.046905517578125,
... |
keremberke/table-extraction | 2023-01-18T09:43:03.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Documents",
"region:us"
] | keremberke | null | \ | 8 | 113 | 2023-01-18T09:42:19 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Documents
---
<div align="center">
<img width="640" alt="keremberke/table-extraction" src="https://huggingface.co/datasets/keremberke/table-extraction/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['bordered', 'borderless']
```
### Number of Images
```json
{'test': 34, 'train': 238, 'valid': 70}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/table-extraction", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2](https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/dataset/2?ref=roboflow2huggingface)
### Citation
```
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 18, 2023 at 9:41 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 342 images.
Data-table are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
No image augmentation techniques were applied.
| 1,852 | [
[
-0.033477783203125,
-0.034881591796875,
0.025726318359375,
-0.0018167495727539062,
-0.027557373046875,
-0.0111541748046875,
-0.00655364990234375,
-0.034149169921875,
0.02191162109375,
0.031829833984375,
-0.03839111328125,
-0.060150146484375,
-0.039215087890625,
... |
griffin/ChemSum | 2023-06-01T17:25:14.000Z | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"chemistry",
"biology",
"medical",
"arxiv:2305.07615",
"region:us"
] | griffin | null | null | 5 | 113 | 2023-05-10T02:05:05 | ---
task_categories:
- summarization
language:
- en
tags:
- chemistry
- biology
- medical
pretty_name: Generating Abstracts of Academic Chemistry Papers
size_categories:
- 100K<n<1M
---
# Dataset Card for ChemSum
## ChemSum Description
<!---- **Homepage:**
- **Leaderboard:**
----->
- **Paper:** [What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization ](https://arxiv.org/abs/2305.07615)
- **Journal:** ACL 2023
- **Point of Contact:** griffin.adams@columbia.edu
- **Repository:** https://github.com/griff4692/calibrating-summaries
### ChemSum Summary
We introduce a dataset with a pure chemistry focus by compiling a list of chemistry academic journals with Open-Access articles. For each journal, we downloaded full-text article PDFs from the Open-Access portion of the journal using available APIs, or scraping this content using [Selenium Chrome WebDriver](https://www.selenium.dev/documentation/webdriver/).
Each PDF was processed with Grobid via a locally installed [client](https://pypi.org/project/grobid-client-python/) to extract free-text paragraphs with sections.
The table below shows the journals from which Open Access articles were sourced, as well as the number of papers processed.
For all journals, we filtered for papers with the provided topic of Chemistry when papers from other disciplines were also available (e.g. PubMed).
| Source | # of Articles |
| ----------- | ----------- |
| Beilstein | 1,829 |
| Chem Cell | 546 |
| ChemRxiv | 12,231 |
| Chemistry Open | 398 |
| Nature Communications Chemistry | 572 |
| PubMed Author Manuscript | 57,680 |
| PubMed Open Access | 29,540 |
| Royal Society of Chemistry (RSC) | 9,334 |
| Scientific Reports - Nature | 6,826 |
<!---
### Supported Tasks and Leaderboards
[More Information Needed]
--->
### Languages
English
## Dataset Structure
<!--- ### Data Instances --->
### Data Fields
| Column | Description |
| ----------- | ----------- |
| `uuid` | Unique Identifier for the Example |
| `title` | Title of the Article |
| `article_source` | Open Source Journal (see above for list) |
| `abstract` | Abstract (summary reference) |
| `sections` | Full-text sections from the main body of paper (<!> indicates section boundaries)|
| `headers` | Corresponding section headers for `sections` field (<!> delimited) |
| `source_toks` | Aggregate number of tokens across `sections` |
| `target_toks` | Number of tokens in the `abstract` |
| `compression` | Ratio of `source_toks` to `target_toks` |
Please refer to `load_chemistry()` in https://github.com/griff4692/calibrating-summaries/blob/master/preprocess/preprocess.py for pre-processing as a summarization dataset. The inputs are `sections` and `headers` and the targets is the `abstract`.
### Data Splits
| Split | Count |
| ----------- | ----------- |
| `train` | 115,956 |
| `validation` | 1,000 |
| `test` | 2,000 |
### Citation Information
```
@article{adams2023desired,
title={What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization},
author={Adams, Griffin and Nguyen, Bichlien H and Smith, Jake and Xia, Yingce and Xie, Shufang and Ostropolets, Anna and Deb, Budhaditya and Chen, Yuan-Jyue and Naumann, Tristan and Elhadad, No{\'e}mie},
journal={arXiv preprint arXiv:2305.07615},
year={2023}
}
```
<!---
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Contributions
[More Information Needed]
--->
| 4,375 | [
[
-0.0279541015625,
-0.00466156005859375,
0.038055419921875,
0.00934600830078125,
-0.0166778564453125,
-0.00827789306640625,
-0.027618408203125,
-0.006816864013671875,
0.021759033203125,
0.028656005859375,
-0.042205810546875,
-0.060394287109375,
-0.027740478515625... |
dmayhem93/agieval-sat-math | 2023-06-18T17:32:05.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | 5 | 113 | 2023-06-18T12:51:24 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 110388
num_examples: 220
download_size: 57002
dataset_size: 110388
license: mit
---
# Dataset Card for "agieval-sat-math"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 1,832 | [
[
-0.0217437744140625,
-0.0465087890625,
0.005840301513671875,
0.0251312255859375,
-0.0160675048828125,
-0.004634857177734375,
0.002796173095703125,
-0.0174102783203125,
-0.003604888916015625,
0.036407470703125,
-0.0506591796875,
-0.0341796875,
-0.030548095703125,... |
hezarai/lscp-pos-500k | 2023-09-02T08:41:54.000Z | [
"task_categories:token-classification",
"language:fa",
"region:us"
] | hezarai | Language recognition has been significantly advanced in recent years by means of modern machine learning methods such as deep learning
and benchmarks with rich annotations. However, research is still limited in low-resource formal languages. This consists of a significant
gap in describing the colloquial language especially for low-resourced ones such as Persian. In order to target this gap for low resource languages,
we propose a “Large Scale Colloquial Persian Dataset” (LSCP). LSCP is hierarchically organized in a semantic taxonomy that focuses on
multi-task informal Persian language understanding as a comprehensive problem. This encompasses the recognition of multiple semantic aspects in the human-level sentences,
which naturally captures from the real-world sentences. We believe that further investigations and processing, as well as the application of novel algorithms and methods,
can strengthen enriching computerized understanding and processing of low resource languages. The proposed corpus consists of 120M sentences resulted from 27M tweets
annotated with parsing tree, part-of-speech tags, sentiment polarity and translation in five different languages. | @inproceedings{abdi-khojasteh-etal-2020-lscp,
title = "{LSCP}: Enhanced Large Scale Colloquial {P}ersian Language Understanding",
author = "Abdi Khojasteh, Hadi and
Ansari, Ebrahim and
Bohlouli, Mahdi",
booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.776",
pages = "6323--6327",
abstract = "Language recognition has been significantly advanced in recent years by means of modern machine learning methods such as deep learning and benchmarks with rich annotations. However, research is still limited in low-resource formal languages. This consists of a significant gap in describing the colloquial language especially for low-resourced ones such as Persian. In order to target this gap for low resource languages, we propose a {``}Large Scale Colloquial Persian Dataset{''} (LSCP). LSCP is hierarchically organized in a semantic taxonomy that focuses on multi-task informal Persian language understanding as a comprehensive problem. This encompasses the recognition of multiple semantic aspects in the human-level sentences, which naturally captures from the real-world sentences. We believe that further investigations and processing, as well as the application of novel algorithms and methods, can strengthen enriching computerized understanding and processing of low resource languages. The proposed corpus consists of 120M sentences resulted from 27M tweets annotated with parsing tree, part-of-speech tags, sentiment polarity and translation in five different languages.",
language = "English",
ISBN = "979-10-95546-34-4",
} | 0 | 113 | 2023-06-25T11:28:38 | ---
task_categories:
- token-classification
language:
- fa
pretty_name: LSCP Dataset (500k samples version)
---
This is a 500 thousand sample version of the original [LSCP dataset](https://iasbs.ac.ir/~ansari/lscp/) that only contains the text and part-of-speech tags and is used for sequence labeling.
### Citation
```bibtex
@InProceedings{abdikhojasteh:2020:LREC,
author = {Abdi Khojasteh, Hadi and Ansari, Ebrahim and Bohlouli, Mahdi},
title = {LSCP: Enhanced Large Scale Colloquial Persian Language Understanding},
booktitle = {Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)},
year = {2020}
address = {Marseille, France},
publisher = {European Language Resources Association}
pages = {6323--6327},
url = {https://www.aclweb.org/anthology/2020.lrec-1.776}
}
``` | 852 | [
[
-0.0236358642578125,
-0.023590087890625,
0.0252685546875,
-0.0096588134765625,
-0.013275146484375,
0.02178955078125,
-0.039276123046875,
-0.021331787109375,
0.03289794921875,
0.046173095703125,
-0.04779052734375,
-0.029876708984375,
-0.01032257080078125,
0.0... |
joey234/affixal_negation | 2023-10-13T01:33:00.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | joey234 | null | null | 1 | 113 | 2023-09-21T05:28:43 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: e
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
- This dataset contains a list of affixal negations and their non-negated counterpart (e.g. unintended - intended).
- This dataset is from [van Son et al. (2016)](https://aclanthology.org/W16-5007/).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,622 | [
[
-0.043243408203125,
-0.056854248046875,
-0.0014142990112304688,
0.00812530517578125,
-0.0155181884765625,
-0.0069732666015625,
-0.01189422607421875,
-0.022552490234375,
0.035125732421875,
0.052276611328125,
-0.06732177734375,
-0.06884765625,
-0.059326171875,
... |
LDJnr/Pure-Dove | 2023-09-26T04:29:58.000Z | [
"task_categories:conversational",
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"Physics",
"Biology",
"Math",
"Chemistry",
"Culture",
"Logic",
"Roleplay",
"region:us"
] | LDJnr | null | null | 12 | 113 | 2023-09-26T02:06:24 | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
- Roleplay
pretty_name: Pure-Dove
size_categories:
- 1K<n<10K
---
## This is the Official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon!
This dataset aims to be the largest highest quality cluster of real human back and forth conversations with GPT-4.
Steps have even been done to ensure that only the best GPT-4 conversations in comparisons are kept, there are many instances where two GPT-4 responses are rated as equal to eachother or as both bad. We exclude all such responses from Pure Dove and make sure to only include ChatBot Arena responses that are voted as being better even against another instance of GPT-4.
- Comprised of over 3000 highly filtered multi-turn conversations between GPT-4 and real humans.
- Average context length per conversation is over 800 tokens.
## Purpose?
- This dataset is not particularly intended to be trained on by itself, however, the size and quality of this dataset can work wonderfully as a supplemmentary addition to virtually any multi-turn compatible dataset. I encourage this use, all I ask is proper credits given for such!
## Quality filtering and cleaning.
- The conversations were sourced from openly datasets such as ShareGPT and ChatBotArena by Lmsys, however, a large portion of these chats were riddled with hallucinations and abnormal distributions of different languages.
- Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021", not just in english, but other languages too!
## Credits
During the curation process, there can be some relatively arduos steps when it comes to actually executing on the best experimentation or concepts for how to filter examples out.
Luckily there is folks over at NousResearch that helped expedite this process with little to no sacrifices in quality, big credit to J-Supha within NousResearch specifically for making these types of significant contributions.
## Future Plans & How you can help!
This is a relatively early build amongst the grand plans for the future of what I plan to work on!
In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.
If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! | 2,702 | [
[
-0.0350341796875,
-0.059173583984375,
0.03607177734375,
0.014007568359375,
-0.015716552734375,
0.0030345916748046875,
-0.0071258544921875,
-0.03375244140625,
0.006801605224609375,
0.049285888671875,
-0.049652099609375,
-0.035858154296875,
-0.042816162109375,
... |
vlsp-2023-vllm/hellaswag | 2023-10-29T01:02:35.000Z | [
"region:us"
] | vlsp-2023-vllm | null | null | 0 | 113 | 2023-09-29T18:37:27 | ---
dataset_info:
features:
- name: endings
sequence: string
- name: ind
dtype: int64
- name: ctx_a
dtype: string
- name: id
dtype: string
- name: ctx_b
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
- name: activity_label
dtype: string
- name: ctx
dtype: string
- name: source_id
dtype: string
splits:
- name: validation
num_bytes: 13795593
num_examples: 9162
download_size: 6831159
dataset_size: 13795593
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
Reference: https://huggingface.co/datasets/Rowan/hellaswag
# HellaSwag (Vietnamese translation version)
## Install
To install `lm-eval` from the github repository main branch, run:
```bash
git clone https://github.com/hieunguyen1053/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
## Basic Usage
> **Note**: When reporting results from eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the [Task Versioning](#task-versioning) section for more info.
### Hugging Face `transformers`
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. vlsp-2023-vllm/hoa-1b4) on `hellaswag_vi` you can use the following command:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4 \
--tasks hellaswag_vi \
--num_fewshot 10 \
--batch_size auto \
--device cuda:0
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
```bash
python main.py \
--model hf-causal \
--model_args pretrained=vlsp-2023-vllm/hoa-1b4,revision=step100000,dtype="float" \
--tasks hellaswag_vi \
--device cuda:0
```
To evaluate models that are loaded via `AutoSeq2SeqLM` in Huggingface, you instead use `hf-seq2seq`. *To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`*
> **Warning**: Choosing the wrong model may result in erroneous outputs despite not erroring. | 2,427 | [
[
-0.028594970703125,
-0.05865478515625,
0.045867919921875,
0.031982421875,
-0.0135345458984375,
-0.0242919921875,
-0.004947662353515625,
-0.0157470703125,
0.01192474365234375,
0.0333251953125,
-0.052978515625,
-0.036895751953125,
-0.0450439453125,
0.006736755... |
FinGPT/fingpt-sentiment-train | 2023-10-10T06:28:24.000Z | [
"region:us"
] | FinGPT | null | null | 1 | 113 | 2023-10-10T06:26:21 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 18860715
num_examples: 76772
download_size: 6417302
dataset_size: 18860715
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fingpt-sentiment-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.0601806640625,
-0.0088348388671875,
0.009368896484375,
0.027008056640625,
-0.023712158203125,
-0.004123687744140625,
0.004116058349609375,
-0.001277923583984375,
0.051910400390625,
0.02459716796875,
-0.0672607421875,
-0.045379638671875,
-0.0469970703125,
... |
onestop_qa | 2023-01-25T14:42:12.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|onestop_english",
"language:en",
"lic... | null | OneStopQA is a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme. The reading materials are Guardian articles taken from the [OneStopEnglish corpus](https://github.com/nishkalavallabhi/OneStopEnglishCorpus). Each article comes in three difficulty levels, Elementary, Intermediate and Advanced. Each paragraph is annotated with three multiple choice reading comprehension questions. The reading comprehension questions can be answered based on any of the three paragraph levels. | @inproceedings{starc2020,
author = {Berzak, Yevgeni and Malmaud, Jonathan and Levy, Roger},
title = {STARC: Structured Annotations for Reading Comprehension},
booktitle = {ACL},
year = {2020},
publisher = {Association for Computational Linguistics}
} | 4 | 112 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|onestop_english
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: onestopqa
pretty_name: OneStopQA
language_bcp47:
- en-US
dataset_info:
features:
- name: title
dtype: string
- name: paragraph
dtype: string
- name: level
dtype:
class_label:
names:
'0': Adv
'1': Int
'2': Ele
- name: question
dtype: string
- name: paragraph_index
dtype: int32
- name: answers
sequence: string
length: 4
- name: a_span
sequence: int32
- name: d_span
sequence: int32
splits:
- name: train
num_bytes: 1423090
num_examples: 1458
download_size: 118173
dataset_size: 1423090
---
# Dataset Card for OneStopQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [OneStopQA repository](https://github.com/berzak/onestop-qa)
- **Repository:** [OneStopQA repository](https://github.com/berzak/onestop-qa)
- **Paper:** [STARC: Structured Annotations for Reading Comprehension](https://arxiv.org/abs/2004.14797)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
OneStopQA is a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme. The reading materials are Guardian articles taken from the [OneStopEnglish corpus](https://github.com/nishkalavallabhi/OneStopEnglishCorpus). Each article comes in three difficulty levels, Elementary, Intermediate and Advanced. Each paragraph is annotated with three multiple choice reading comprehension questions. The reading comprehension questions can be answered based on any of the three paragraph levels.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English (`en-US`).
The original Guardian articles were manually converted from British to American English.
## Dataset Structure
### Data Instances
An example of instance looks as follows.
```json
{
"title": "101-Year-Old Bottle Message",
"paragraph": "Angela Erdmann never knew her grandfather. He died in 1946, six years before she was born. But, on Tuesday 8th April, 2014, she described the extraordinary moment when she received a message in a bottle, 101 years after he had lobbed it into the Baltic Sea. Thought to be the world’s oldest message in a bottle, it was presented to Erdmann by the museum that is now exhibiting it in Germany.",
"paragraph_index": 1,
"level": "Adv",
"question": "How did Angela Erdmann find out about the bottle?",
"answers": ["A museum told her that they had it",
"She coincidentally saw it at the museum where it was held",
"She found it in her basement on April 28th, 2014",
"A friend told her about it"],
"a_span": [56, 70],
"d_span": [16, 34]
}
```
Where,
| Answer | Description | Textual Span |
|--------|------------------------------------------------------------|-----------------|
| a | Correct answer. | Critical Span |
| b | Incorrect answer. A miscomprehension of the critical span. | Critical Span |
| c | Incorrect answer. Refers to an additional span. | Distractor Span |
| d | Incorrect answer. Has no textual support. | - |
The order of the answers in the `answers` list corresponds to the order of the answers in the table.
### Data Fields
- `title`: A `string` feature. The article title.
- `paragraph`: A `string` feature. The paragraph from the article.
- `paragraph_index`: An `int` feature. Corresponds to the paragraph index in the article.
- `question`: A `string` feature. The given question.
- `answers`: A list of `string` feature containing the four possible answers.
- `a_span`: A list of start and end indices (inclusive) of the critical span.
- `d_span`: A list of start and end indices (inclusive) of the distractor span.
*Span indices are according to word positions after whitespace tokenization.
**In the rare case where a span is spread over multiple sections,
the span list will contain multiple instances of start and stop indices in the format:
[start_1, stop_1, start_2, stop_2,...].
### Data Splits
Articles: 30
Paragraphs: 162
Questions: 486
Question-Paragraph Level pairs: 1,458
No preconfigured split is currently provided.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
The annotation and piloting process of the dataset is described in Appendix A in
[STARC: Structured Annotations for Reading Comprehension](https://aclanthology.org/2020.acl-main.507.pdf).
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
[STARC: Structured Annotations for Reading Comprehension](http://people.csail.mit.edu/berzak/papers/acl2020.pdf)
```
@inproceedings{starc2020,
author = {Berzak, Yevgeni and Malmaud, Jonathan and Levy, Roger},
title = {STARC: Structured Annotations for Reading Comprehension},
booktitle = {ACL},
year = {2020},
publisher = {Association for Computational Linguistics}
}
```
### Contributions
Thanks to [@scaperex](https://github.com/scaperex) for adding this dataset. | 7,519 | [
[
-0.040618896484375,
-0.07513427734375,
0.018157958984375,
0.0136260986328125,
-0.015411376953125,
-0.0062713623046875,
-0.0024871826171875,
-0.0313720703125,
0.032196044921875,
0.047943115234375,
-0.059173583984375,
-0.05889892578125,
-0.04034423828125,
0.02... |
arjunth2001/online_privacy_qna | 2021-11-10T08:53:10.000Z | [
"region:us"
] | arjunth2001 | null | null | 2 | 112 | 2022-03-02T23:29:22 | Online Privacy Policy QnA Dataset
| 34 | [
[
-0.0146026611328125,
-0.0014276504516601562,
0.006397247314453125,
0.014190673828125,
-0.00588226318359375,
-0.007099151611328125,
0.011627197265625,
-0.01197052001953125,
0.037994384765625,
0.051361083984375,
-0.026763916015625,
-0.031463623046875,
0.0019702911... |
cestwc/adapted-paranmt5m | 2021-12-15T11:37:07.000Z | [
"region:us"
] | cestwc | null | null | 3 | 112 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
yangdong/ecqa | 2022-03-16T14:14:41.000Z | [
"region:us"
] | yangdong | null | null | 0 | 112 | 2022-03-16T14:14:29 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
bigbio/mlee | 2022-12-22T15:45:39.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-3.0",
"region:us"
] | bigbio | MLEE is an event extraction corpus consisting of manually annotated abstracts of papers
on angiogenesis. It contains annotations for entities, relations, events and coreferences
The annotations span molecular, cellular, tissue, and organ-level processes. | @article{pyysalo2012event,
title={Event extraction across multiple levels of biological organization},
author={Pyysalo, Sampo and Ohta, Tomoko and Miwa, Makoto and Cho, Han-Cheol and Tsujii, Jun'ichi and Ananiadou, Sophia},
journal={Bioinformatics},
volume={28},
number={18},
pages={i575--i581},
year={2012},
publisher={Oxford University Press}
} | 1 | 112 | 2022-11-13T22:10:03 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-sa-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_SA_3p0
pretty_name: MLEE
homepage: http://www.nactem.ac.uk/MLEE/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for MLEE
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/MLEE/
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE,COREF
MLEE is an event extraction corpus consisting of manually annotated abstracts of papers
on angiogenesis. It contains annotations for entities, relations, events and coreferences
The annotations span molecular, cellular, tissue, and organ-level processes.
## Citation Information
```
@article{pyysalo2012event,
title={Event extraction across multiple levels of biological organization},
author={Pyysalo, Sampo and Ohta, Tomoko and Miwa, Makoto and Cho, Han-Cheol and Tsujii, Jun'ichi and Ananiadou, Sophia},
journal={Bioinformatics},
volume={28},
number={18},
pages={i575--i581},
year={2012},
publisher={Oxford University Press}
}
```
| 1,176 | [
[
-0.01270294189453125,
-0.0452880859375,
0.017181396484375,
-0.0022563934326171875,
-0.006961822509765625,
-0.0117340087890625,
0.006862640380859375,
-0.0278472900390625,
0.042633056640625,
0.027496337890625,
-0.0400390625,
-0.0487060546875,
-0.032440185546875,
... |
cl-tohoku/quiz-datasets | 2023-05-30T12:27:33.000Z | [
"region:us"
] | cl-tohoku | null | null | 1 | 112 | 2023-05-30T11:23:15 | ---
dataset_info:
- config_name: datasets.jawiki-20220404-c400-small.aio_02
features:
- name: qid
dtype: string
- name: competition
dtype: string
- name: timestamp
dtype: string
- name: section
dtype: string
- name: number
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
- name: original_additional_info
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: passages
sequence:
- name: passage_id
dtype: int32
- name: title
dtype: string
- name: text
dtype: string
- name: positive_passage_indices
sequence: int32
- name: negative_passage_indices
sequence: int32
splits:
- name: train
num_bytes: 2041349194
num_examples: 22335
- name: validation
num_bytes: 91754993
num_examples: 1000
download_size: 805138940
dataset_size: 2133104187
- config_name: datasets.jawiki-20220404-c400-medium.aio_02
features:
- name: qid
dtype: string
- name: competition
dtype: string
- name: timestamp
dtype: string
- name: section
dtype: string
- name: number
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
- name: original_additional_info
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: passages
sequence:
- name: passage_id
dtype: int32
- name: title
dtype: string
- name: text
dtype: string
- name: positive_passage_indices
sequence: int32
- name: negative_passage_indices
sequence: int32
splits:
- name: train
num_bytes: 1875144339
num_examples: 22335
- name: validation
num_bytes: 84499229
num_examples: 1000
download_size: 723119604
dataset_size: 1959643568
- config_name: datasets.jawiki-20220404-c400-large.aio_02
features:
- name: qid
dtype: string
- name: competition
dtype: string
- name: timestamp
dtype: string
- name: section
dtype: string
- name: number
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
- name: original_additional_info
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: passages
sequence:
- name: passage_id
dtype: int32
- name: title
dtype: string
- name: text
dtype: string
- name: positive_passage_indices
sequence: int32
- name: negative_passage_indices
sequence: int32
splits:
- name: train
num_bytes: 1743060319
num_examples: 22335
- name: validation
num_bytes: 78679502
num_examples: 1000
download_size: 665253451
dataset_size: 1821739821
- config_name: passages.jawiki-20220404-c400-small
features:
- name: id
dtype: int32
- name: pageid
dtype: int32
- name: revid
dtype: int32
- name: text
dtype: string
- name: section
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 348002946
num_examples: 394124
download_size: 121809648
dataset_size: 348002946
- config_name: passages.jawiki-20220404-c400-medium
features:
- name: id
dtype: int32
- name: pageid
dtype: int32
- name: revid
dtype: int32
- name: text
dtype: string
- name: section
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 1322478989
num_examples: 1678986
download_size: 469426075
dataset_size: 1322478989
- config_name: passages.jawiki-20220404-c400-large
features:
- name: id
dtype: int32
- name: pageid
dtype: int32
- name: revid
dtype: int32
- name: text
dtype: string
- name: section
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 3054493919
num_examples: 4288198
download_size: 1110830651
dataset_size: 3054493919
- config_name: datasets.no_passages.aio_02
features:
- name: qid
dtype: string
- name: competition
dtype: string
- name: timestamp
dtype: string
- name: section
dtype: string
- name: number
dtype: string
- name: original_question
dtype: string
- name: original_answer
dtype: string
- name: original_additional_info
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
splits:
- name: train
num_bytes: 9464003
num_examples: 22335
- name: validation
num_bytes: 409779
num_examples: 1000
download_size: 2267163
dataset_size: 9873782
---
# Quiz Datasets for NLP
Question answering (QA) datasets created from Japanese quiz (trivia) questions.
Please refer to [cl-tohoku/quiz-datasets](https://github.com/cl-tohoku/quiz-datasets) for details, as well as the licenses of the question data. | 4,885 | [
[
-0.0287933349609375,
-0.045440673828125,
0.03973388671875,
0.009613037109375,
-0.004329681396484375,
0.0131072998046875,
0.0014009475708007812,
-0.014617919921875,
0.03485107421875,
0.0592041015625,
-0.07391357421875,
-0.050384521484375,
0.004344940185546875,
... |
datadrivenscience/movie-genre-prediction | 2023-06-11T10:12:57.000Z | [
"region:us"
] | datadrivenscience | null | null | 9 | 112 | 2023-06-09T15:08:49 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: movie_name
dtype: string
- name: synopsis
dtype: string
- name: genre
dtype: string
splits:
- name: train
num_bytes: 10488729
num_examples: 54000
- name: test
num_bytes: 6965864
num_examples: 36000
download_size: 11902232
dataset_size: 17454593
---
# Dataset Card for Movie Genre Prediction
Link to [Movie Genre Prediction Competition](https://huggingface.co/spaces/competitions/movie-genre-prediction)
By accessing this dataset, you accept the rules of the Movie Genre Prediction competition.
# Organizer
Organizer of this competition is [Data-Driven Science](https://datadrivenscience.com/).
[Join our FREE 3-Day Object Detection Challenge!](https://datadrivenscience.com/free-object-detection-challenge/)
<img src="https://datadrivenscience.com/wp-content/uploads/2022/12/DDS-Logo.png" width="200" height="100">
# Email Usage
By accessing this dataset, you consent that your email will be used for communication purposes from Data-Driven Science.
We do not share nor sell our mailing list. Your information remains confidential. You may unsubscribe at any time.
| 1,191 | [
[
-0.02301025390625,
-0.036529541015625,
0.037872314453125,
0.00347900390625,
-0.04620361328125,
0.0015401840209960938,
0.01343536376953125,
-0.04168701171875,
0.0137939453125,
0.025482177734375,
-0.06573486328125,
-0.06494140625,
-0.052276611328125,
0.0247497... |
causal-lm/instructions | 2023-07-27T04:32:33.000Z | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:apache-2.0",
"region:us"
] | causal-lm | null | null | 3 | 112 | 2023-06-27T11:35:44 | ---
language: en
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 24084342913.39447
num_examples: 19176870
- name: validation
num_bytes: 2830664216.3492484
num_examples: 2317180
download_size: 14194738316
dataset_size: 26915007129.743717
license: apache-2.0
task_categories:
- text-generation
size_categories:
- 10M<n<100M
---
# Merged Instructions Dataset
Merged Dataset for the response of instructions. | 551 | [
[
-0.01611328125,
-0.0305633544921875,
0.00992584228515625,
0.0193328857421875,
-0.034820556640625,
-0.0089874267578125,
-0.002475738525390625,
0.00644683837890625,
0.0275115966796875,
0.07525634765625,
-0.07330322265625,
-0.017547607421875,
-0.05181884765625,
... |
dsfsi/vukuzenzele-sentence-aligned | 2023-10-26T07:21:34.000Z | [
"task_categories:sentence-similarity",
"task_categories:translation",
"language:eng",
"language:afr",
"language:nbl",
"language:xho",
"language:zul",
"language:sot",
"language:nso",
"language:tsn",
"language:ssw",
"language:ven",
"language:tso",
"license:cc-by-4.0",
"multilingual",
"go... | dsfsi | The dataset contains editions from the South African government magazine Vuk'uzenzele. Data was scraped from PDFs that have been placed in the data/raw folder. The PDFS were obtained from the Vuk'uzenzele website. | @dataset{marivate_vukosi_2023_7598540, author = {Marivate, Vukosi and Njini, Daniel and Madodonga, Andani and Lastrucci, Richard and Dzingirai, Isheanesu Rajab, Jenalea}, title = {The Vuk'uzenzele South African Multilingual Corpus}, month = feb, year = 2023, publisher = {Zenodo}, doi = {10.5281/zenodo.7598539}, url = {https://doi.org/10.5281/zenodo.7598539} } | 0 | 112 | 2023-07-03T15:38:24 | ---
language:
- eng
- afr
- nbl
- xho
- zul
- sot
- nso
- tsn
- ssw
- ven
- tso
pretty_name: "The Vuk'uzenzele South African Multilingual Corpus"
tags:
- multilingual
- government
license: "cc-by-4.0"
task_categories:
- sentence-similarity
- translation
arxiv: 2303.03750
---
# The Vuk'uzenzele South African Multilingual Corpus
Github: [https://github.com/dsfsi/vukuzenzele-nlp/](https://github.com/dsfsi/vukuzenzele-nlp/)
Zenodo: [](https://doi.org/10.5281/zenodo.7598539)
Arxiv Preprint: [](https://arxiv.org/abs/2303.03750)
Give Feedback 📑: [DSFSI Resource Feedback Form](https://docs.google.com/forms/d/e/1FAIpQLSf7S36dyAUPx2egmXbFpnTBuzoRulhL5Elu-N1eoMhaO7v10w/formResponse)
# About
The dataset was obtained from the South African government magazine Vuk'uzenzele, created by the [Government Communication and Information System (GCIS)](https://www.gcis.gov.za/).
The original raw PDFS were obtatined from the [Vuk'uzenzele website](https://www.vukuzenzele.gov.za/).
The datasets contain government magazine editions in 11 languages, namely:
| Language | Code | Language | Code |
|------------|-------|------------|-------|
| English | (eng) | Sepedi | (sep) |
| Afrikaans | (afr) | Setswana | (tsn) |
| isiNdebele | (nbl) | Siswati | (ssw) |
| isiXhosa | (xho) | Tshivenda | (ven) |
| isiZulu | (zul) | Xitstonga | (tso) |
| Sesotho | (nso) |
## Available pairings
The alignment direction is bidrectional, i.e. xho-zul is zul-xho
afr-eng; afr-nbl; afr-nso; afr-sot; afr-ssw; afr-tsn; afr-tso; afr-ven; afr-xho; afr-zul
eng-nbl; eng-nso; eng-sot ;eng-ssw; eng-tsn; eng-tso; eng-ven; eng-xho; eng-zul
nbl-nso; nbl-sot; nbl-ssw; nbl-tsn; nbl-tso; nbl-ven; nbl-xho; nbl-zul
nso-sot; nso-ssw; nso-tsn; nso-tso; nso-ven; nso-xho; nso-zul
sot-ssw; sot-tsn; sot-tso; sot-ven; sot-xho; sot-zul
ssw-tsn; ssw-tso; ssw-ven; ssw-xho; ssw-zul
tsn-tso; tsn-ven; tsn-xho; tsn-zul
tso-ven; tso-xho; tso-zul
ven-xho; ven-zul
xho-zul
# Disclaimer
This dataset contains machine-readable data extracted from PDF documents, from https://www.vukuzenzele.gov.za/, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
# Datasets
The datasets consist of pairwise sentence aligned data. There are 55 distinct datasets of paired sentences.
The data is obtained by comparing [LASER](https://github.com/facebookresearch/LASER) embeddings of sentence tokens between 2 languages. If the similarity is high, the sentences are deemed semantic equivalents of one another and the observation is outputted.
Naming convention:
The naming structure of the pairwise_sentence_aligned folder is `aligned-{src_lang_code}-{tgt_lang_code}.csv`.
For example, `aligned-afr-zul.csv` is the aligned sentences between Afrikaans and isiZulu.
The data is in .csv format and the columns are `src_text`,`tgt_text`,`cosine_score` where:
- `src_text` is the source sentence
- `tgt_text` is the target sentence
- `cosine_score` is the cosine similarity score obtained by comparing the sentence embeddings, it ranges from 0 to 1
**Note:** The notion of source (src) and target (tgt) are only necessary for distinction between the languages used in the aligned pair, as the sentence semantics should be bidirectional. (hallo <-> sawubona)
# Citation
Vukosi Marivate, Andani Madodonga, Daniel Njini, Richard Lastrucci, Isheanesu Dzingirai, Jenalea Rajab. **The Vuk'uzenzele South African Multilingual Corpus**, 2023
> @dataset{marivate_vukosi_2023_7598540,
author = {Marivate, Vukosi and
Njini, Daniel and
Madodonga, Andani and
Lastrucci, Richard and
Dzingirai, Isheanesu
Rajab, Jenalea},
title = {The Vuk'uzenzele South African Multilingual Corpus},
month = feb,
year = 2023,
publisher = {Zenodo},
doi = {10.5281/zenodo.7598539},
url = {https://doi.org/10.5281/zenodo.7598539}
}
### Licence
* Licence for Data - [CC 4.0 BY](LICENSE.md)
| 4,871 | [
[
-0.025238037109375,
-0.0202178955078125,
0.0264892578125,
0.026641845703125,
-0.0279388427734375,
-0.01216888427734375,
-0.01497650146484375,
-0.0195770263671875,
0.04248046875,
0.040435791015625,
-0.034332275390625,
-0.049591064453125,
-0.042144775390625,
0... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.